| id (string) | lastModified (string) | tags (list) | author (string, nullable) | description (string, nullable) | citation (string, nullable) | cardData (null) | likes (int64) | downloads (int64) | card (string) |
|---|---|---|---|---|---|---|---|---|---|
juancavallotti/multilingual-gec | 2023-01-06T18:59:59.000Z | [
"task_categories:translation",
"size_categories:100K<n<1M",
"language:en",
"language:es",
"language:fr",
"language:de",
"license:apache-2.0",
"grammar",
"gec",
"multi language",
"language detection",
"region:us"
] | juancavallotti | null | null | null | 2 | 81 | ---
author: Juan Alberto López Cavallotti
date: Jan 6, 2023
license: apache-2.0
task_categories:
- translation
language:
- en
- es
- fr
- de
tags:
- grammar
- gec
- multi language
- language detection
pretty_name: Multi Lingual Grammar Error Correction Dataset
size_categories:
- 100K<n<1M
---
# Dataset Card for Multilingual Grammar Error Correction
## Dataset Description
- **Homepage:** https://juancavallotti.com
- **Paper:** https://blog.juancavallotti.com/2023/01/06/training-a-multi-language-grammar-error-correction-system/
- **Point of Contact:** Juan Alberto López Cavallotti
### Dataset Summary
This dataset can be used to train a transformer model (we used T5) to correct grammar errors in simple sentences written in English, Spanish, French, or German.
This dataset was developed as a component for the [Squidigies](https://squidgies.app/) platform.
### Supported Tasks and Leaderboards
* **Grammar Error Correction:** By prepending the prefix *fix grammar:* to the prompt.
* **Language Detection:** By prepending the prefix *language:* to the prompt (a usage sketch is shown below).
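The following is a minimal sketch of how such prefixed inputs could be built for T5 fine-tuning. It is not the author's original training code, and it assumes the dataset loads directly with `load_dataset` and exposes the fields described under *Data Fields*.
```python
from datasets import load_dataset

# Minimal sketch (assumption: the dataset loads directly and has a "train" split).
ds = load_dataset("juancavallotti/multilingual-gec", split="train")

def to_seq2seq(example):
    # "fix grammar:" maps the corrupted sentence back to its corrected form.
    return {
        "input_text": "fix grammar: " + example["modified"],
        "target_text": example["sentence"],
    }

pairs = ds.map(to_seq2seq)
print(pairs[0]["input_text"], "->", pairs[0]["target_text"])
```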
### Languages
* English
* Spanish
* French
* German
## Dataset Structure
### Data Instances
The dataset contains the following instances for each language:
* German 32282 sentences.
* English 51393 sentences.
* Spanish 67672 sentences.
* French 67157 sentences.
### Data Fields
* `lang`: The language of the sentence
* `sentence`: The original sentence.
* `modified`: The corrupted sentence.
* `transformation`: The primary transformation used by the synthetic data generator.
* `sec_transformation`: The secondary transformation (if any) used by the synthetic data generator.
### Data Splits
* `train`: There isn't a specific split defined. I recommend holding out 1k sentences sampled randomly from each language for evaluation, combined with the SacreBLEU metric (see the sketch below).
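A minimal sketch of that evaluation setup, under the assumption that the `lang` field holds short language codes (e.g. `en`) and that the dataset loads directly with `load_dataset`:
```python
from datasets import load_dataset
import evaluate

ds = load_dataset("juancavallotti/multilingual-gec", split="train").shuffle(seed=42)

# Assumption: `lang` uses codes such as "en", "es", "fr", "de".
eval_sets = {
    lang: ds.filter(lambda x, l=lang: x["lang"] == l).select(range(1000))
    for lang in ["en", "es", "fr", "de"]
}

sacrebleu = evaluate.load("sacrebleu")
# `preds` would come from the fine-tuned model; only the call shape is shown here.
# score = sacrebleu.compute(predictions=preds, references=[[r] for r in refs])
```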
## Dataset Creation
### Curation Rationale
This dataset was generated synthetically through code, with the help of information about common grammar errors gathered from across the internet.
### Source Data
#### Initial Data Collection and Normalization
The source grammatical sentences come from various open-source datasets, such as Tatoeba.
#### Who are the source language producers?
* Juan Alberto López Cavallotti
### Annotations
#### Annotation process
The annotation is automatic and produced by the generation script.
#### Who are the annotators?
* Data generation script by Juan Alberto López Cavallotti
### Other Known Limitations
The dataset doesn't cover every possible grammar error, but it serves as a starting point that yields fair results.
## Additional Information
### Dataset Curators
* Juan Alberto López Cavallotti
### Licensing Information
This dataset is distributed under the [Apache 2.0 License](https://www.apache.org/licenses/LICENSE-2.0).
### Citation Information
Please cite this original dataset and its author, **Juan Alberto López Cavallotti**.
### Contributions
* Juan Alberto López Cavallotti |
BelleGroup/generated_chat_0.4M | 2023-04-08T03:55:12.000Z | [
"task_categories:text2text-generation",
"size_categories:100K<n<1M",
"language:zh",
"license:gpl-3.0",
"region:us"
] | BelleGroup | null | null | null | 38 | 81 | ---
license: gpl-3.0
task_categories:
- text2text-generation
language:
- zh
size_categories:
- 100K<n<1M
---
# Generated_Chat_0.4M
## 内容
包含约40万条由[BELLE](https://github.com/LianjiaTech/BELLE)项目生成的个性化角色对话数据,包含角色介绍。
注意:此数据集是由ChatGPT产生的,未经过严格校验,题目或解题过程可能包含错误。使用过程中请注意这一点。
## 样例
```
{
"instruction": "于以下角色信息完成一段对话\nAlice是一个年轻的律师,她正在为一个涉及濒危物种保护的案件做准备。Bob是一位当地的居民,他所在的社区经常受到该濒危物种的干扰。\n",
"input": "",
"output": "\nAlice: 你好Bob,我是濒危物种保护案的律师,很高兴见到你。我们前几天发给你的文件你看到了吗?\nBob: 看到了,你们说要禁止我们在该物种的常住地采集蘑菇,这很不公平。\nAlice: 保护濒危物种对整个生态系统都很重要,如果我们的举动损害了这个物种的生存,那么整个生态系统都将遭受损失。\nBob: 我理解您的立场,但是我们一直以来都依靠这个物种来维持我们的经济生活,现在我们要怎么办?\nAlice: 我理解这对当地社区的影响,但是现在我们必须找到一种既能保护这个物种,也同时使您们的经济生活不受影响的解决方案。你有任何想法吗?我们可以一起来想办法。\n"
}
```
### 字段
```
instruction: 指令
input: 输入(本数据集均为空)
output: 输出
```
## 局限性和使用限制
我们要求开发者仅将我们开源的代码、数据、模型及后续衍生物用于研究目的,不得用于商业,以及其他会对社会带来危害的用途。
由于数据是由*ChatGPT*生成的,未经严格验证,在事实性和其他方面还存在一些不足。因此,在使用此数据集时,请务必注意甄别。
本数据集不代表任何一方的立场、利益或想法,无关任何团体的任何类型的主张。因使用本数据集带来的任何损害、纠纷,本项目的开发者不承担任何责任。
***
# Generated_Chat_0.4M
## Contents
Includes approximately 400k personalized character dialogues generated by [BELLE](https://github.com/LianjiaTech/BELLE), each with a character introduction.
Note: this dataset was generated by *ChatGPT* and was not strictly verified. The generated dialogues might contain errors. Please keep this in mind when using this dataset.
## Sample
```
{
"instruction": "于以下角色信息完成一段对话\nAlice是一个年轻的律师,她正在为一个涉及濒危物种保护的案件做准备。Bob是一位当地的居民,他所在的社区经常受到该濒危物种的干扰。\n",
"input": "",
"output": "\nAlice: 你好Bob,我是濒危物种保护案的律师,很高兴见到你。我们前几天发给你的文件你看到了吗?\nBob: 看到了,你们说要禁止我们在该物种的常住地采集蘑菇,这很不公平。\nAlice: 保护濒危物种对整个生态系统都很重要,如果我们的举动损害了这个物种的生存,那么整个生态系统都将遭受损失。\nBob: 我理解您的立场,但是我们一直以来都依靠这个物种来维持我们的经济生活,现在我们要怎么办?\nAlice: 我理解这对当地社区的影响,但是现在我们必须找到一种既能保护这个物种,也同时使您们的经济生活不受影响的解决方案。你有任何想法吗?我们可以一起来想办法。\n"
}
```
### Schema
```
instruction: 指令
input: 输入(本数据集均为空)
output: 输出
```
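A minimal loading sketch (assuming a single `train` split in the default configuration):
```python
from datasets import load_dataset

# Each record has `instruction`, `input` (always empty in this dataset), and `output`.
ds = load_dataset("BelleGroup/generated_chat_0.4M", split="train")
print(ds[0]["instruction"])
print(ds[0]["output"])
```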
## Limitation and Usage Limits
We require developers to use the open-sourced code, data, model, and any other artifacts generated via this project for research purposes only. Commercial use and other potentially harmful use cases are not allowed.
Since this dataset was generated by *ChatGPT* and was not strictly verified, it still has shortcomings regarding factuality and other aspects. When using this dataset, careful inspection is needed.
This dataset does not represent anyone's position, interest, or opinion, and is not related to any kind of claim by any group. The developers of this project do not assume any responsibility for potential harm caused by the use of this dataset and project.
mstz/wine_origin | 2023-04-16T18:06:09.000Z | [
"task_categories:tabular-classification",
"size_categories:n<1K",
"language:en",
"license:cc",
"wine_origin",
"tabular_classification",
"binary_classification",
"multiclass_classification",
"UCI",
"region:us"
] | mstz | null | @misc{misc_wine_origin_database_generator_(version_2)_108,
author = {Breiman,L. & Stone,C.J.},
title = {{Waveform Database Generator (Version 2)}},
year = {1988},
howpublished = {UCI Machine Learning Repository},
note = {{DOI}: \\url{10.24432/C56014}}
} | null | 0 | 81 | ---
language:
- en
tags:
- wine_origin
- tabular_classification
- binary_classification
- multiclass_classification
- UCI
pretty_name: Wine Origin
size_categories:
- n<1K
task_categories:
- tabular-classification
configs:
- wine_origin
- wine_origin_0
- wine_origin_1
- wine_origin_2
license: cc
---
# Wine Origin
The [Wine Origin dataset](https://archive-beta.ics.uci.edu/dataset/109/wine) from the [UCI repository](https://archive-beta.ics.uci.edu/).
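Each configuration listed in the table below can be selected by name when loading the dataset; a minimal loading sketch using the configuration names declared in the card metadata:
```python
from datasets import load_dataset

# Multiclass task over the three origin classes.
wine = load_dataset("mstz/wine_origin", "wine_origin")

# Binary variant: is the instance of class 0?
wine_class0 = load_dataset("mstz/wine_origin", "wine_origin_0")
```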
# Configurations and tasks
| **Configuration** | **Task** | **Description** |
|-----------------------|---------------------------|-------------------------|
| wine_origin | Multiclass classification. | Which of the three origin classes does the instance belong to? |
| wine_origin_0 | Binary classification. | Is the instance of class 0? |
| wine_origin_1 | Binary classification. | Is the instance of class 1? |
| wine_origin_2 | Binary classification. | Is the instance of class 2? | |
CM/codexglue_code2text_javascript | 2023-04-22T01:51:42.000Z | [
"region:us"
] | CM | null | null | null | 1 | 81 | ---
dataset_info:
features:
- name: id
dtype: int32
- name: repo
dtype: string
- name: path
dtype: string
- name: func_name
dtype: string
- name: original_string
dtype: string
- name: language
dtype: string
- name: code
dtype: string
- name: code_tokens
sequence: string
- name: docstring
dtype: string
- name: docstring_tokens
sequence: string
- name: sha
dtype: string
- name: url
dtype: string
splits:
- name: train
num_bytes: 160860431
num_examples: 58025
- name: validation
num_bytes: 10337344
num_examples: 3885
- name: test
num_bytes: 10190713
num_examples: 3291
download_size: 65795549
dataset_size: 181388488
---
# Dataset Card for "codexglue_code2text_javascript"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
edarchimbaud/extended-trading-stocks | 2023-10-07T23:16:04.000Z | [
"task_categories:tabular-regression",
"language:en",
"license:mit",
"region:us"
] | edarchimbaud | null | null | null | 2 | 81 | ---
language:
- en
license: mit
task_categories:
- tabular-regression
dataset_info:
features:
- name: symbol
dtype: string
- name: date
dtype: string
- name: time
dtype: string
- name: price
dtype: float64
- name: share_volume
dtype: string
splits:
- name: train
num_bytes: 84424594
num_examples: 1799127
download_size: 14924159
dataset_size: 84424594
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "extended-trading-sp500"
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** https://edarchimbaud.substack.com
- **Repository:** https://github.com/edarchimbaud
- **Point of Contact:** contact@edarchimbaud.com
### Dataset Summary
The extended-trading-sp500 dataset contains detailed extended-hours trading data for S&P 500 constituent stocks.
### Supported Tasks and Leaderboards
[N/A]
### Languages
[N/A]
## Dataset Structure
### Data Instances
[N/A]
### Data Fields
- symbol (string): A string representing the ticker symbol or abbreviation used to identify the company.
- date (string): A string representing the date of the trade.
- time (string): A string representing the time of the trade.
- price (float64): A floating-point number representing the price of the stock at the given date and time.
- share_volume (string): A string representing the volume of shares traded during this time.
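A minimal loading sketch. Note that `share_volume` is stored as a string, so it may need cleaning before numeric analysis; the exact formatting of the strings is an assumption here.
```python
from datasets import load_dataset

ds = load_dataset("edarchimbaud/extended-trading-stocks", split="train")

def parse_volume(example):
    # Assumption: volumes look like "1,234"; strip separators before converting.
    raw = example["share_volume"].replace(",", "").strip()
    example["share_volume_int"] = int(raw) if raw.isdigit() else None
    return example

ds = ds.map(parse_volume)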
### Data Splits
[N/A]
## Dataset Creation
### Curation Rationale
The extended-trading-sp500 dataset was developed to support research into after-hours trading patterns and behaviors.
### Source Data
#### Initial Data Collection and Normalization
This data was sourced from various trading platforms and aggregated for this dataset.
### Annotations
#### Annotation process
[N/A]
#### Who are the annotators?
[N/A]
### Personal and Sensitive Information
[N/A]
## Considerations for Using the Data
### Social Impact of Dataset
[N/A]
### Discussion of Biases
[N/A]
### Other Known Limitations
[N/A]
## Additional Information
### Dataset Curators
The extended-trading-sp500 dataset was collected by https://edarchimbaud.substack.com.
### Licensing Information
The extended-trading-sp500 dataset is licensed under the MIT License.
### Citation Information
> https://edarchimbaud.substack.com, extended-trading-sp500 dataset, GitHub repository, https://github.com/edarchimbaud
### Contributions
Thanks to [@edarchimbaud](https://github.com/edarchimbaud) for adding this dataset. |
Andyrasika/alpaca-bitcoin-sentiment-dataset | 2023-07-15T10:22:53.000Z | [
"license:apache-2.0",
"region:us"
] | Andyrasika | null | null | null | 3 | 81 | ---
license: apache-2.0
---
|
mertkarabacak/NCDB-Meningioma | 2023-09-18T19:25:32.000Z | [
"region:us"
] | mertkarabacak | null | null | null | 0 | 81 | Entry not found |
distil-whisper/spgispeech-timestamped | 2023-09-25T10:28:51.000Z | [
"task_categories:automatic-speech-recognition",
"language:en",
"license:other",
"region:us"
] | distil-whisper | The SPGISpeech corpus is derived from company earnings calls manually transcribed by S&P Global, Inc. according to a professional style guide detailing conventions for capitalization, punctuation, denormalization of non-standard words and transcription of disfluencies in spontaneous speech. The basic unit of SPGISpeech is a pair consisting of a 5 to 15 second long 16 bit, 16kHz mono wav audio file and its transcription. | @ARTICLE{2021arXiv210402014O,
author = {{O'Neill}, Patrick K. and {Lavrukhin}, Vitaly and {Majumdar}, Somshubra and {Noroozi}, Vahid and {Zhang}, Yuekai and {Kuchaiev}, Oleksii and {Balam}, Jagadeesh and {Dovzhenko}, Yuliya and {Freyberg}, Keenan and {Shulman}, Michael D. and {Ginsburg}, Boris and {Watanabe}, Shinji and {Kucsko}, Georg},
title = "{SPGISpeech: 5,000 hours of transcribed financial audio for fully formatted end-to-end speech recognition}",
journal = {arXiv e-prints},
keywords = {Computer Science - Computation and Language, Electrical Engineering and Systems Science - Audio and Speech Processing},
year = 2021,
month = apr,
eid = {arXiv:2104.02014},
pages = {arXiv:2104.02014},
archivePrefix = {arXiv},
eprint = {2104.02014},
primaryClass = {cs.CL},
adsurl = {https://ui.adsabs.harvard.edu/abs/2021arXiv210402014O},
adsnote = {Provided by the SAO/NASA Astrophysics Data System}
} | null | 0 | 81 | ---
license: other
task_categories:
- automatic-speech-recognition
language:
- en
extra_gated_prompt: |-
Your access to and use of the information in the Kensho Transcript Dataset (the “Content”), which is provided by Kensho Technologies, LLC, a subsidiary of S&P Global, Inc., (“Kensho”), shall be governed by the following terms and conditions of usage (“Terms of Usage”). The Content may be accessed only by persons who have been authorized to use this Content pursuant to their acceptance and acknowledgement of these Terms of Usage (in each case, an “Authorized User”). By providing your electronic signature at the end of these Terms of Usage, you represent that you are an Authorized User and that you accept these Terms of Usage and agree to be bound by them.
If you do not wish to be bound by these Terms of Usage, you must not use this Content. PLEASE READ THESE TERMS OF USAGE CAREFULLY BEFORE USING THIS CONTENT.
Section 1 – THE CONTENT
1.1 The Content is provided for academic research purposes and internal use only and must not be used to: assemble or create a database; construct or facilitate the construction of products which compete with the Content; identify or attempt to identify or contact any individual; or link to another dataset.
The Content, which is comprised of public earnings calls in audio and corresponding text format, and all accompanying derived products is proprietary to Kensho and its third-party content providers. You shall not modify the Content; create derivative works based on the Content, rewrite or reprocess the Content except as expressly provided herein. You must not publish, display, transfer or redistribute the Content or any portions or derivative versions thereof to anyone without prior written consent from Kensho. You agree not to contact Kensho or its affiliates concerning individuals whose information may be included in the Content.
1.2 Disclaimer. Content to which you are provided access, either directly or indirectly, from or on this Content will not have been reviewed or monitored by Kensho, and Kensho cannot and does not guarantee or make any representation or warranty, either express or implied, as to the accuracy, validity, timeliness, completeness or continued availability of any such content.
The Content is provided for your convenience only and is not a republication or reconfirmation of the opinion or information contained therein. The provision of the Content is without any obligation on the part of Kensho or its third-party content providers to review such or any liability or responsibility arising out of your use thereof. Kensho does not guarantee or make any representation or warranty, either express or implied, as to the accuracy, validity, timeliness, completeness or continued availability of any Content and shall not be liable for any errors, delays, or actions taken in reliance on information. In addition, the Content speaks only as of the date issued and is based on conference calls that may contain projections of other forward-looking statements. You should not rely on the Content as expressing Kensho’s opinion or as representing current information. None of Kensho or the third-party content providers has undertaken, and do not undertake any duty to update any Content or otherwise advise you of any changes in the Content.
1.3 Ownership of Third-Party Content. You acknowledge that all proprietary rights in the Content that are owned by Kensho or third party content providers shall remain the property of Kensho or such third party content providers, and you shall have no right or interest in such third party content except the rights to use such third party content in accordance with these Terms of Usage. Any additional rights not granted herein shall require a separate, direct agreement with Kensho. You acknowledge that the Content and third party content as compiled, prepared, selected and arranged by Kensho or its third party content providers constitutes an expenditure of substantial time, effort and money by Kensho and its third party content providers and constitutes valuable commercial property and/or trade secrets of Kensho and such third party content providers. Kensho retains all rights and remedies afforded under the copyright, trademark, service mark, patent and other laws of the United States and the States thereof, including without limitation any laws designed to protect proprietary or confidential information. You agree that you will not remove or modify any copyright notice, disclosures, disclaimers or other notification or trade name or marks of Kensho or the third party content providers that may appear in the Content or third party content and that any permitted reproduction and/or distribution of the Content or third party content shall contain such notices and/or marks as they appear in the Content or third party content. You may not use Kensho’s or the third-party content providers’ name or trademarks without the prior written consent of Kensho or such third-party content providers. Apart from the rights granted hereunder, no conveyance of ownership, right, title or interest is intended herein. Any additional rights require a separate agreement with Kensho.
1.4 Posted Guidelines. In addition to these Terms of Usage, when using this Content, you shall be subject to and agree to follow any posted notice, guidelines or rules, which may be posted and amended from time to time. Nothing on this Content shall be considered a recommendation or solicitation to buy or an offer to sell a security to any person in any jurisdiction.
1.5 Registration Data. In consideration of your use of this Content, you and/or your employer agree to: (a) provide true, accurate, current and complete Registration Data (as defined below in Section 3.1) to Kensho as prompted by the registration form completed prior to accessing the Content and (b) maintain and promptly update the Registration Data and to keep the same true, accurate, current and complete.
1.6 Right to Terminate User Access. Kensho reserves the right to limit, restrict and immediately terminate your access to and use of this Content at any time, in whole or in part, in its sole discretion and without notice.
Section 2 - DISCLAIMER OF WARRANTY AND LIMITATION OF LIABILITY
2.1 THE CONTENT IS PROVIDED “AS IS” AND “AS AVAILABLE” WITHOUT REPRESENTATION OR WARRANTY OF ANY KIND. USE OF THE CONTENT IS AT THE USER’S OWN RISK. IN NO EVENT SHALL KENSHO OR ITS THIRD-PARTY CONTENT PROVIDERS BE LIABLE FOR ANY DECISION MADE OR ACTION OR INACTION TAKEN IN RELIANCE ON ANY CONTENT, INCLUDING THIRD-PARTY CONTENT, INCLUDING YOUR HANDLING AND STORING OF THE CONTENT. KENSHO FURTHER EXPLICITLY DISCLAIMS, ANY WARRANTY OF ANY KIND, WHETHER EXPRESS OR IMPLIED, INCLUDING WARRANTIES OF ORIGINALITY, ACCURACY, COMPLETENESS, TIMELINESS, MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NON-INFRINGEMENT. KENSHO EXPRESSLY DISCLAIMS, AND YOU WAIVE, ANY LIABILITY THAT MAY ARISE FROM YOUR PUBLICATION OR PROVISION OF THE CONTENT TO A THIRD PARTY, OR ANY REPRESENTATION OR WARRANTY MADE BY YOU TO ANY THIRD PARTY, WHETHER OR NOT RELATED TO THE CONTENT. KENSHO, SUPPLIERS OF THIRD-PARTY CONTENT AND ANY OTHER THIRD PARTY WORKING WITH KENSHO SHALL NOT BE RESPONSIBLE OR LIABLE, DIRECTLY OR INDIRECTLY, FOR ANY DAMAGES OR LOSS (INCLUDING DIRECT, INDIRECT, INCIDENTAL, CONSEQUENTIAL AND ANY AND ALL OTHER FORMS OF DAMAGES OR LOSSES REGARDLESS OF THE FORM OF THE ACTION OR THE BASIS OF THE CLAIM) CAUSED OR ALLEGED TO BE CAUSED IN CONNECTION WITH YOUR USE OF THE CONTENT WHETHER OR NOT FORESEEABLE, EVEN IF KENSHO OR ANY OF THE SUPPLIERS OF THIRD-PARTY CONTENT OR OTHER THIRD PARTIES WORKING WITH KENSHO IN CONNECTION WITH THE CONTENT HAS BEEN ADVISED OF THE POSSIBILITY OR LIKELIHOOD OF SUCH DAMAGES.
2.2 THE CONTENT IS NOT INTENDED TO PROVIDE TAX, LEGAL, INSURANCE OR INVESTMENT ADVICE, AND NOTHING IN THE CONTENT SHOULD BE CONSTRUED AS AN OFFER TO SELL, A SOLICITATION OF AN OFFER TO BUY, OR A RECOMMENDATION FOR ANY SECURITY BY KENSHO OR ANY THIRD PARTY.
2.3 For third party demands, claims, actions, proceedings and liability for losses, damages, reasonable legal costs and other reasonable expenses of any nature, you agree to defend, indemnify and hold Kensho and its affiliates harmless, including its respective directors, officers, employees and agents from and against all claims to the extent arising from your access to and/or use of the Content, any failure by you to abide by the Terms of Usage, or breach of applicable law.
Section 3 - PRIVACY
3.1 Access and Collection. In order to access this Content, during the registration process, either you or your employer will be required to provide Kensho with certain information; including your name, employer or academic institution, and e-mail address (“Registration Data”). In addition, when you request or view Content, Kensho may obtain user identifiable information related to your request of, or access to, such Content (“Access Data”). For example, while you are accessing this Content, our Web servers may recognize your: (a) domain name; (b) ISP’s domain name; (c) IP address; (d) browser type; and (e) operating system. If you contact us with a technical question, we may collect certain information about your systems, including: (a) your browser type, version and settings (e.g., Java and cookie settings); (b) connectivity information (e.g., SSL/HTTPS compatibility, bandwidth capacity); and browser plug-in information (e.g., do you have Adobe, what is your media player, can you open Flash files, etc.).
3.2 Use of Your Information. Registration Data and Access Data may be used by Kensho for research and development purposes and to communicate with users and to troubleshoot any technical issues pertaining to the Content. You acknowledge that in the event that a separate agreement is required, Kensho may share Registration Data with its Affiliates (as defined below).
3.3 Disclosure of Your Information. Except as otherwise noted herein, Kensho will not disclose, rent or sell personal information collected from or about you without your permission. For the purposes specified in the preceding paragraph, we may transfer or disclose Registration Data and Access Data to S&P Global Inc. and its affiliates (“Kensho Affiliates”) and third parties who are contracted to perform services on behalf of Kensho, such as those who assist Kensho in bringing you this Content and providing you with certain features and functionality included within or accessible via this Content. We may also disclose Registration Data and Access Data to Kensho Affiliates and third parties in connection with their providing you access to this Content. Disclosures to these third parties will be subject to confidentiality agreements and, where required, governed by contract. Kensho may also be required to disclose information to governmental, regulatory or self-regulatory entities or agencies in response to regulatory inquiries or to comply with applicable laws, rules, regulations, orders, subpoenas or other legal processes.
3.4 Consent. By (a) agreeing to these Terms of Usage, or (b) by using this Content, and, in either case, providing any information that may be required, requested or otherwise collected by us as set forth above, you freely consent to Kensho processing your information in the United States and in other countries and territories for the purposes set out in these Terms of Usage, and you also consent to the transfer of your information for such purposes to any third party content provider wherever such entity may from time to time be located and to any third parties as described above and in accordance with applicable law and regulations. If you do not permit Kensho to collect any of your information or do not agree with any of the terms and conditions of these Terms of Usage, you should not use this Content and should exit this page and/or Content, as the case may be. If after registering with Kensho, you desire to withdraw the consent granted in this Section 3.4 for all future use of your information by Kensho, you must notify Kensho in writing at the address listed below in Section 3.8 and immediately cease use of this Content.
3.5 Inquiries. If you have any questions regarding these Terms of Usage or your information that is held by us, please contact Kensho in writing using the contact information provided below. If we receive a request regarding your personal information held by us, we will use reasonable means to provide you with such information that we can reasonably compile. You will be given the opportunity to rectify any inaccuracies in such information.
3.6 Encryption. Kensho may use encryption technology to protect certain transmissions of data to/from this Content, but e-mail and other communications, unless otherwise noted on this Content, are not encrypted to/from this Content. Therefore, you should not send any personal or identifying information, such as account numbers, credit card numbers, Social Security numbers, passwords, etc., to Kensho via e-mail. By utilizing e-mail or other electronic communication means you acknowledge that you have no expectation of privacy with respect to the information delivered thereby and that Kensho will not be responsible for any loss or damage that could result from interception by third parties of any information so sent.
3.7 Contact Information. In the event you have any questions regarding these Terms of Use, this Privacy Statement or to make any requests or queries regarding your information that is held by us you may contact us in writing at privacy@kensho.com or Kensho Technologies LLC, Attn: General Counsel, 55 Water Street, New York, NY 10041.
Section 4 - MISCELLANEOUS
4.1 Entire Agreement. These Terms of Usage constitute the entire agreement of the parties hereto with respect to the subject matter hereof and supersede all prior agreements and undertakings, both written and oral, between the parties with respect to the subject matter hereof.
4.2 Severability. If any term or other provision of these Terms of Usage is invalid, illegal or incapable of being enforced by any law or public policy, all other terms and provisions of these Terms of Usage shall nevertheless remain in full force and effect so long as the economic or legal substance of the transactions contemplated hereby is not affected in any manner materially adverse to any party.
4.3 Governing Law; Forum. These Terms of Usage shall be governed in all respects by the laws of the State of New York, and any litigation arising out of or connected in any way with these Terms of Usage shall take place in a State or Federal court of competent jurisdiction in New York County, State of New York.
4.4 Waiver of Jury Trial. YOU WAIVE TO THE FULLEST EXTENT PERMITTED BY APPLICABLE LAW ANY RIGHT YOU MAY HAVE TO A TRIAL BY JURY WITH RESPECT TO ANY ACTIONS OR PROCEEDINGS DIRECTLY OR INDIRECTLY ARISING OUT OF, UNDER OR IN CONNECTION WITH THESE TERMS OF USAGE.
4.5 Conflict. In the event of a conflict between these Terms of Use and any other agreement with Kensho that relates to Third-Party Content, the more restrictive terms shall prevail.
extra_gated_fields:
Full name: text
Email: text
Institution: text
I accept the Terms of Usage: checkbox
---
# Distil Whisper: SPGISpeech With Timestamps
This is a variant of the [SPGISpeech](https://huggingface.co/datasets/kensho/spgispeech) dataset, augmented to return the pseudo-labelled Whisper
transcriptions alongside the original dataset elements. The pseudo-labelled transcriptions were generated by
labelling the input audio data with the Whisper [large-v2](https://huggingface.co/openai/whisper-large-v2)
model with *greedy* sampling and timestamp prediction. For information on how the original dataset was curated, refer to the original
[dataset card](https://huggingface.co/datasets/kensho/spgispeech).
## Standalone Usage
First, install the latest version of the 🤗 Datasets package:
```bash
pip install --upgrade pip
pip install --upgrade datasets[audio]
```
The dataset can be downloaded and pre-processed on disk using the [`load_dataset`](https://huggingface.co/docs/datasets/v2.14.5/en/package_reference/loading_methods#datasets.load_dataset)
function:
```python
from datasets import load_dataset
dataset = load_dataset("distil-whisper/spgispeech", "L")
# take the first sample of the validation set
sample = dataset["validation"][0]
```
It can also be streamed directly from the Hub using Datasets' [streaming mode](https://huggingface.co/blog/audio-datasets#streaming-mode-the-silver-bullet).
Loading a dataset in streaming mode loads individual samples of the dataset at a time, rather than downloading the entire
dataset to disk:
```python
from datasets import load_dataset
dataset = load_dataset("distil-whisper/spgispeech", "L", streaming=True)
# take the first sample of the validation set
sample = next(iter(dataset["validation"]))
```
## Distil Whisper Usage
To use this dataset to reproduce a Distil Whisper training run, refer to the instructions on the
[Distil Whisper repository](https://github.com/huggingface/distil-whisper#training).
## License
This dataset is licensed under custom terms. To view the custom license for this dataset, refer to the original [dataset card](https://huggingface.co/datasets/kensho/spgispeech).
|
dmrau/trec_dl20-qrels | 2023-10-09T08:28:57.000Z | [
"region:us"
] | dmrau | null | null | null | 0 | 81 | ---
dataset_info:
features:
- name: query-id
dtype: string
- name: corpus-id
dtype: string
- name: score
dtype: string
splits:
- name: test
num_bytes: 298319
num_examples: 11386
download_size: 0
dataset_size: 298319
configs:
- config_name: default
data_files:
- split: test
path: data/test-*
---
# Dataset Card for "trec_dl20-qrels"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
ARTeLab/ilpost | 2022-11-17T02:50:32.000Z | [
"task_categories:summarization",
"multilinguality:monolingual",
"size_categories:10K<n<100k",
"language:it",
"region:us"
] | ARTeLab | null | null | null | 1 | 80 | ---
language:
- it
multilinguality:
- monolingual
size_categories:
- 10K<n<100k
task_categories:
- summarization
---
# Dataset Card for ilpost
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
## Dataset Description
- **Homepage:** [Needs More Information]
- **Repository:** [Needs More Information]
- **Paper:** [Needs More Information]
- **Leaderboard:** [Needs More Information]
- **Point of Contact:** [Needs More Information]
### Dataset Summary
IlPost dataset, containing news articles taken from IlPost.
There are two features:
- source: Input news article.
- target: Summary of the article.
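A minimal loading sketch; the available split names are an assumption, so print the `DatasetDict` to check them.
```python
from datasets import load_dataset

# Load the dataset and inspect the `source`/`target` fields.
ilpost = load_dataset("ARTeLab/ilpost")
print(ilpost)

first_split = list(ilpost.keys())[0]
sample = ilpost[first_split][0]
print(sample["source"][:200], "->", sample["target"][:200])
```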
### Supported Tasks and Leaderboards
- `abstractive-summarization`, `summarization`
### Languages
The text in the dataset is in Italian
## Dataset Structure
### Data Instances
[Needs More Information]
### Data Fields
[Needs More Information]
### Data Splits
[Needs More Information]
## Dataset Creation
### Curation Rationale
[Needs More Information]
### Source Data
#### Initial Data Collection and Normalization
[Needs More Information]
#### Who are the source language producers?
[Needs More Information]
### Annotations
#### Annotation process
[Needs More Information]
#### Who are the annotators?
[Needs More Information]
### Personal and Sensitive Information
[Needs More Information]
## Considerations for Using the Data
### Social Impact of Dataset
[Needs More Information]
### Discussion of Biases
[Needs More Information]
### Other Known Limitations
[Needs More Information]
## Additional Information
### Dataset Curators
[Needs More Information]
### Licensing Information
[Needs More Information]
### Citation Information
More details and results in [published work](https://www.mdpi.com/2078-2489/13/5/228)
```
@Article{info13050228,
AUTHOR = {Landro, Nicola and Gallo, Ignazio and La Grassa, Riccardo and Federici, Edoardo},
TITLE = {Two New Datasets for Italian-Language Abstractive Text Summarization},
JOURNAL = {Information},
VOLUME = {13},
YEAR = {2022},
NUMBER = {5},
ARTICLE-NUMBER = {228},
URL = {https://www.mdpi.com/2078-2489/13/5/228},
ISSN = {2078-2489},
ABSTRACT = {Text summarization aims to produce a short summary containing relevant parts from a given text. Due to the lack of data for abstractive summarization on low-resource languages such as Italian, we propose two new original datasets collected from two Italian news websites with multi-sentence summaries and corresponding articles, and from a dataset obtained by machine translation of a Spanish summarization dataset. These two datasets are currently the only two available in Italian for this task. To evaluate the quality of these two datasets, we used them to train a T5-base model and an mBART model, obtaining good results with both. To better evaluate the results obtained, we also compared the same models trained on automatically translated datasets, and the resulting summaries in the same training language, with the automatically translated summaries, which demonstrated the superiority of the models obtained from the proposed datasets.},
DOI = {10.3390/info13050228}
}
``` |
ITESM/embedded_faqs_medicare | 2022-06-14T22:06:28.000Z | [
"region:us"
] | ITESM | null | null | null | 0 | 80 | Entry not found |
gonzalobenegas/clinvar | 2023-02-09T23:32:45.000Z | [
"region:us"
] | gonzalobenegas | null | null | null | 0 | 80 | Entry not found |
vietgpt/news_summarization_vi | 2023-07-04T05:30:39.000Z | [
"task_categories:summarization",
"size_categories:10K<n<100K",
"language:vi",
"LM",
"region:us"
] | vietgpt | null | null | null | 0 | 80 | ---
dataset_info:
features:
- name: content
dtype: string
- name: summary
dtype: string
splits:
- name: train
num_bytes: 213621060.91992286
num_examples: 65361
- name: test
num_bytes: 3268326.0800771536
num_examples: 1000
download_size: 117151304
dataset_size: 216889387
language:
- vi
task_categories:
- summarization
size_categories:
- 10K<n<100K
tags:
- LM
---
# Summarization
- Source: https://github.com/binhvq/news-corpus
- Language: Vietnamese
- Labeling: text-davinci-003
- Num examples:
- 65,361 (train)
- 10,000 (test)
```python
from datasets import load_dataset
load_dataset("tdtunlp/news_summarization_vi")
```
- Format for Summarization task
```python
def preprocess(
    sample,
    sep_key="<|endofprompt|>",
    end_key="<|endoftext|>",
):
    article = sample['content']
    completion = sample['summary']
    return {'text': """{article}
TL;DR:
{sep_key}
{completion}
{end_key}""".format(
        article=article,
        sep_key=sep_key,
        completion=completion,
        end_key=end_key,
    )}
"""
Suốt bao năm, để dòng tranh này không bị rơi vào quên lãng, mỗi ngày người ta đều thấy ông Đạt cặm cụi làm nên những bức tranh từ mũi dao, cán đục. Ông bảo, tranh sơn khắc ở nước ta ra đời sớm nhất và còn đẹp hơn cả tranh sơn khắc của Nhật. Quý giá như vậy nên ông chẳng thể để nghề mai một trong sự chông chênh của thời cuộc.
Một trong những sản phẩm sơn khắc của ông Đạt được trả 25 triệu.
Theo ông Đạt, thời điểm năm 1945 đến 1995 là lúc tranh sơn khắc ở nước ta phát triển mạnh nhất. Thời điểm đó, các sản phẩm của Hạ Thái chiếm tới 70% hàng xuất khẩu, giải quyết được công ăn việc làm cho người dân trong làng và cả các địa phương khác, đem lại cuộc sống khấm khá cho nhiều hộ gia đình.
Say mê hội họa từ nhỏ, nên chuyện ông Đạt đến với tranh sơn khắc như một mối duyên tiền định. Khi mới tiếp xúc với những bức tranh này, ông Đạt như bị lôi cuốn chẳng thể nào dứt ra được. Học hết cấp 3, tôi thi vào Đại học sư phạm nhưng sức khỏe không đảm bảo nên xin vào làm thợ vẽ trong xưởng của hợp tác xã.
Năm 1979, tôi được hợp tác xã cử đi học thêm ở trường Mỹ Nghệ. Khi về lại xưởng, nhờ năng khiếu hội họa nên tôi được chuyển sang khâu đoạn khảm trai rồi sang tranh khắc. Tôi làm tranh khắc từ đó đến giờ ông Đạt chia sẻ.
Theo lời ông Đạt, học sơn khắc khó bởi cách vẽ của dòng tranh này khác hẳn với sơn mài. Nếu như sơn mài người ta có thể vẽ bằng chổi hay bút lông, cũng có khi là chất liệu mềm rồi mới quét sơn lên vóc thì sơn khắc khâu đoạn lại làm khác hẳn. Sơn khắc là nghệ thuật của đồ họa, sự hoàn thiện của bức tranh phụ thuộc vào những nét chạm khắc và những mảng hình tinh tế, giàu cảm xúc. Cuối cùng mới là việc tô màu nhằm tạo sự khắc họa mạnh.
Như một lẽ xoay vần tự nhiên, sự phát triển của làng nghề Hạ Thái dần chùng xuống. Làng nghề bước vào thời kỳ suy thoái, đặc biệt là trong giai đoạn khủng hoảng kinh tế Đông Âu từ 1984 đến 1990 đã làm hợp tác xã tan rã. Ông Đạt khi đó cũng như bao người thợ khác đều phải quay về làm ruộng.
Ông Đạt giải thích, tranh sơn khắc xuất phát từ gốc tranh sơn mài. Nếu như ở tranh sơn mài thông thường, để có một tấm vóc vẽ người ta phủ sơn ta, vải lên tấm gỗ và mài phẳng thì tranh sơn khắc độc đáo ở chỗ, phải sử dụng kỹ thuật thủ công để khắc lên tấm vóc sơn mài.
Tranh sơn khắc từ phôi thai, phác thảo đến lúc hoàn thành có khi kéo dài cả năm trời. Chẳng hạn, riêng công khắc ở bức tranh khổ nhỏ thường tôi làm cả ngày lẫn đêm thì mất 2 ngày, phối màu mất 3 ngày. Để người trẻ học được nghề cũng sẽ mất khoảng 6 tháng đến 1 năm - ông Trần Thành Đạt chia sẻ.
Tranh sơn khắc đòi hỏi rất kỹ về phác thảo, bố cục, cũng như mảng màu sáng tối mà màu đen của vóc là chủ đạo. Dù trên diện tích bức tranh khổ lớn bao nhiêu nó vẫn rất cần kỹ càng và chính xác đến từng xen-ti-met. Nếu sai, bức tranh sẽ gần như bị hỏng, các đường nét phải khắc họa lại từ đầu.
Kỳ công là vậy nên giá thành mỗi sản phẩm sơn khắc thường khá cao, trung bình từ 4 đến 25 triệu đồng/bức tranh. Giá thành cao lại yêu cầu khắt khe về mặt kỹ thuật, mỹ thuật nên theo Nghệ nhân Trần Thành Đạt, nhiều người trong làng đã từ bỏ, không làm dòng tranh này nữa.
Tranh sơn khắc làm mất nhiều thời gian và công sức nhưng khó bán. Họ đều tập trung làm tranh sơn mài, với chất liệu ngoại nhập cho rẻ và ít tốn công sức. Hầu như cả làng đã quay lưng, bỏ rơi dòng tranh sơn khắc vào lãng quên ông Đạt buồn bã kể.
Được biết, hiện xưởng sản xuất tranh của ông Đạt chủ yếu là các thành viên trong gia đình. Ông khoe, hai con trai và con gái đều tốt nghiệp Trường Đại học Mĩ thuật, con rể và các con dâu cũng là họa sĩ của trường. Tất cả các thành viên trong gia đình ông đều chung niềm say mê với sơn khắc.
Đinh Luyện.
TL;DR:
<|endofprompt|>
Ông Đạt Trần Thành là một trong những nghệ nhân sơn khắc của làng nghề Hạ Thái, Hà Nội. Từ năm 1945 đến 1995, ông Đạt đã nỗ lực bảo vệ dòng tranh sơn khắc của nước ta không bị rơi vào quên lãng. Ông Đạt cũng là người đã giới thiệu tranh sơn khắc của nước ta đến với nhiều quốc gia khác. Tuy nhiên, trong giai đoạn khủng hoảng kinh tế Đông Âu từ 1984 đến 1990, làng nghề Hạ Thái đã bước vào thời kỳ suy thoái. Ông Đạt và nhiều người thợ khác đã phải quay về làm ruộng. Tuy nhiên, ông Đạt vẫn nỗ lực bảo vệ dòng tranh sơn khắc của nước ta. Hiện nay, ông Đạt đã truyền cảm hứng và kỹ năng sơn khắc cho các thành viên trong gia đình.
<|endoftext|>
"""
``` |
venetis/symptom_text_to_disease_mk3 | 2023-03-09T13:24:21.000Z | [
"region:us"
] | venetis | null | null | null | 0 | 80 | ---
dataset_info:
features:
- name: text
dtype: string
- name: labels
dtype:
class_label:
names:
'0': emotional pain
'1': hair falling out
'2': heart hurts
'3': infected wound
'4': foot ache
'5': shoulder pain
'6': injury from sports
'7': skin issue
'8': stomach ache
'9': knee pain
'10': joint pain
'11': hard to breath
'12': head ache
'13': body feels weak
'14': feeling dizzy
'15': back pain
'16': open wound
'17': internal pain
'18': blurry vision
'19': acne
'20': muscle pain
'21': neck pain
'22': cough
'23': ear ache
'24': feeling cold
splits:
- name: train
num_bytes: 330494.3762197868
num_examples: 5328
- name: test
num_bytes: 41373.82675273983
num_examples: 667
- name: valid
num_bytes: 41311.79702747335
num_examples: 666
download_size: 146385
dataset_size: 413180.0
---
# Dataset Card for "symptom_text_to_disease_mk3"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
distil-whisper/tedlium-long-form | 2023-05-22T14:04:04.000Z | [
"region:us"
] | distil-whisper | null | null | null | 0 | 80 | ---
dataset_info:
features:
- name: audio
dtype: audio
- name: text
dtype: string
- name: speaker_id
dtype: string
splits:
- name: validation
num_bytes: 180166870.0
num_examples: 8
- name: test
num_bytes: 285107770.0
num_examples: 11
download_size: 284926490
dataset_size: 465274640.0
---
# Dataset Card for "tedlium-long-form"
To create the dataset:
```python
import os
import numpy as np
from datasets import load_dataset, DatasetDict, Dataset, Audio
import soundfile as sf
from tqdm import tqdm
tedlium = load_dataset("LIUM/tedlium", "release3")
merged_dataset = DatasetDict()
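# The speaker IDs below select which talks are merged: all utterances from each
# listed speaker are concatenated into a single long-form audio file and transcript.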
validation_speaker_ids = [
"Al_Gore",
"Barry_Schwartz",
"Blaise_Agueray_Arcas",
"Brian_Cox",
"Craig_Venter",
"David_Merrill",
"Elizabeth_Gilbert",
"Wade_Davis",
]
validation_dataset_merged = {speaker_id: {"audio": [], "text": ""} for speaker_id in validation_speaker_ids}
test_speaker_ids = [
"AimeeMullins",
"BillGates",
"DanBarber",
"DanBarber_2010_S103",
"DanielKahneman",
"EricMead_2009P_EricMead",
"GaryFlake",
"JamesCameron",
"JaneMcGonigal",
"MichaelSpecter",
"RobertGupta",
]
test_dataset_merged = {speaker_id: {"audio": [], "text": ""} for speaker_id in test_speaker_ids}
for split, dataset in zip(["validation", "test"], [validation_dataset_merged, test_dataset_merged]):
    sampling_rate = tedlium[split].features["audio"].sampling_rate
    for sample in tqdm(tedlium[split]):
        if sample["speaker_id"] in dataset:
            dataset[sample["speaker_id"]]["audio"].extend(sample["audio"]["array"])
            dataset[sample["speaker_id"]]["text"] += " " + sample["text"]
    audio_paths = []
    os.makedirs(split, exist_ok=True)
    for speaker in dataset:
        path = os.path.join(split, f"{speaker}-merged.wav")
        audio_paths.append(path)
        sf.write(path, np.asarray(dataset[speaker]["audio"]), samplerate=sampling_rate)
    merged_dataset[split] = Dataset.from_dict({"audio": audio_paths}).cast_column("audio", Audio())
    # remove spaced apostrophes (e.g. it 's -> it's)
    merged_dataset[split] = merged_dataset[split].add_column("text", [dataset[speaker]["text"].replace(" '", "'") for speaker in dataset])
    merged_dataset[split] = merged_dataset[split].add_column("speaker_id", list(dataset.keys()))
``` |
clarin-knext/hotpotqa-pl | 2023-06-07T08:13:33.000Z | [
"language:pl",
"arxiv:2305.19840",
"region:us"
] | clarin-knext | null | null | null | 0 | 80 | ---
language:
- pl
---
Part of **BEIR-PL: Zero Shot Information Retrieval Benchmark for the Polish Language**.
Link to arxiv: https://arxiv.org/pdf/2305.19840.pdf
Contact: konrad.wojtasik@pwr.edu.pl |
tuetschek/atis | 2023-06-11T18:24:58.000Z | [
"region:us"
] | tuetschek | null | null | null | 0 | 80 | Entry not found |
flozi00/conversations | 2023-10-05T08:48:09.000Z | [
"task_categories:conversational",
"task_categories:text-generation",
"language:de",
"region:us"
] | flozi00 | null | null | null | 9 | 80 | ---
language:
- de
task_categories:
- conversational
- text-generation
dataset_info:
features:
- name: conversations
dtype: string
- name: from
dtype: string
- name: labels
dtype: string
splits:
- name: train
num_bytes: 267894207
num_examples: 154491
download_size: 137401054
dataset_size: 267894207
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
https://github.com/flozi00/chat-data-experiments/blob/main/chat_combiner.py |
qgyd2021/chinese_ner_sft | 2023-10-07T11:36:27.000Z | [
"task_categories:token-classification",
"task_categories:question-answering",
"task_categories:text-generation",
"task_categories:text2text-generation",
"size_categories:100M<n<1B",
"language:zh",
"license:apache-2.0",
"ner",
"region:us"
] | qgyd2021 | null | @dataset{chinese_ner_sft,
author = {Xing Tian},
title = {chinese_ner_sft},
month = sep,
year = 2023,
publisher = {Xing Tian},
version = {1.0},
} | null | 8 | 80 | ---
task_categories:
- token-classification
- question-answering
- text-generation
- text2text-generation
language:
- zh
tags:
- ner
size_categories:
- 100M<n<1B
license: apache-2.0
---
## Chinese NER Instruction (SFT) Dataset
Open-source named entity recognition (NER) datasets are collected and converted into SFT data for LLM fine-tuning.
The goal of this dataset is to support research on LLMs for general-purpose entity recognition.
The dataset is organized into three groups of subsets:
`{dataset_name}`, `{dataset_name}_template`, `{dataset_name}_prompt`.
* `{dataset_name}`: the corresponding entity recognition dataset.
* `{dataset_name}_template`: prompt templates written for each dataset; since the datasets cover different topics, writing templates per dataset gives more accurate prompts.
* `{dataset_name}_prompt`: a prompt dataset synthesized from `{dataset_name}` and `{dataset_name}_template`. Because it is generated dynamically, the Hugging Face viewer may not be able to display it; some examples are shown below.
<details>
<summary>Expand to view data examples</summary>
<pre><code>
{
"prompt": "在做手机智能助手上, 你需要识别用户话语中的关键实体, 实体类型包括:\n联系人姓名,场景,主旋律,乐器名称,曲风,手机号码,语言,时代,目的地,流行榜单,情绪,出发地点,曲名\n\n用户的话语为:\n来一首周华健的花心\n\n请按以下格式输出实体:\nText: 实体的原始文本, 从原句子中截取.\nLabel: 实体类型, 以上给出的实体类型之一.\n\nTips:\n1. 可同时输出多个实体.\n2. 没有实体时可回答: 找不到任何相关的实体.",
"response": "Text: 花心\nLabel: 曲名"
}
{
"prompt": "在手机或电脑助手的场景中, 我们需要识别用户指令中的实体.\n请给出以下句子中所有的主旋律,情感,时期,地址,曲子实体.\n------------\n播放我们都一样\n------------\n\n注意:\n1. 输出时如果有多个实体, 每行一个实体.\n\noutput:",
"response": "曲子: 我们都一样"
}
{
"prompt": "在做手机智能助手上, 你需要识别用户话语中的关键实体, 实体类型包括:\n流行榜单,曲名,乐器,年代,情感,电话号码,情景,语言,地址,音乐风格,歌手,音乐主题,联系人,出发地点,地址\n\n用户的话语为:\n播放不一样来一首我们不一样\n\n请按以下格式输出实体:\nText: 实体的原始文本, 从原句子中截取.\nLabel: 实体类型, 以上给出的实体类型之一.\n\nTips:\n1. 可同时输出多个实体.\n2. 没有实体时可回答: 找不到任何相关的实体.",
"response": "Text: 不一样\nLabel: 曲子\nText: 我们不一样\nLabel: 曲名"
}
{
"prompt": "我们在做一个手机智能助手, 它需要识别用户的指令中的实体, 包括:\n目的地,主题,时代,电话号码,乐器,歌手,音乐风格,始发地点,语言,联系人,曲子,情绪\n\n现在请你给出以下句子中所包含的实体.\n到信阳市汽车配件城\n\n请按以下格式输出:\nEntityText: 实体的原始文本.\nEntityLabel: 实体标签, 必须是以上给定的其中之一.\n\n可同时输出多个实体.",
"response": "EntityText: 信阳市汽车配件城\nEntityLabel: 目的地"
}
{
"prompt": "我们在做一个手机智能助手, 它需要识别用户的指令中的实体, 包括:\n地址,出发地点,时代,演唱者,情绪,音乐主题,热门榜单,电话号码,联系人姓名,语种\n\n现在请你给出以下句子中所包含的实体.\n米静\n\n请按以下格式输出:\nEntityText: 实体的原始文本.\nEntityLabel: 实体标签, 必须是以上给定的其中之一.\n\n可同时输出多个实体.",
"response": "没有从`米静`中找到任何关于`地址,出发地点,时代,演唱者,情绪,音乐主题,热门榜单,电话号码,联系人姓名,语种`的实体. "
}
{
"prompt": "在手机或电脑助手的场景中, 我们需要识别用户指令中的实体.\n请给出以下句子中所有的情绪,场景,联系人,歌曲,年代,地址实体.\n------------\n信阳汽配城\n------------\n\n注意:\n1. 输出时如果有多个实体, 每行一个实体.\n\noutput:",
"response": "地址: 信阳汽配城"
}
</code></pre>
</details>
Notes:
* Possibly because there is too much data and too many subsets, the Dataset Viewer often fails to load.
You can check the keys of `_entity_urls`, `_template_urls`, and `_prompt_urls` in the file `chinese_ner_sft.py` to determine which subsets are available (see the loading sketch below).
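A minimal loading sketch. The subset name `CMeEE_prompt` and the `train` split name are assumptions based on the naming scheme described above (`{dataset_name}_prompt`); check the keys in `chinese_ner_sft.py` for the subsets that are actually available.
```python
from datasets import load_dataset

# Stream one example from an assumed prompt-style subset.
ds = load_dataset("qgyd2021/chinese_ner_sft", name="CMeEE_prompt", split="train", streaming=True)
print(next(iter(ds)))
```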
**Feel free to share your prompt paradigms in the Community tab, and I will add them.**
The source datasets were collected from the web and are organized as follows:
| Dataset | Original data / project link | # Samples | Entity types | Description of original data | Alternative download |
| :--- | :---: | :---: | :---: | :---: | :---: |
| CMeEE | [CBLUE](http://www.cips-chip.org.cn/2021/CBLUE); [Tianchi download](https://tianchi.aliyun.com/dataset/95414) | 20000 | 9 broad categories of medical entities, including pediatric diseases, body parts, clinical findings, and medical procedures | Medical named entity recognition task | [nlhappy/CMeEE](https://huggingface.co/datasets/nlhappy/CMeEE) [Rosenberg/CMeEE-V2](https://huggingface.co/datasets/Rosenberg/CMeEE-V2) |
| CCKS2019_task1 | [Yidu-S4K](http://openkg.cn/dataset/yidu-s4k) | 1379 | anatomical site, operation, disease and diagnosis, drug, laboratory test, imaging examination | CCKS 2019 named entity recognition dataset for Chinese electronic medical records | |
| CLUENER2020 | [CLUE](https://www.cluebenchmarks.com/introduce.html); [CLUENER](https://storage.googleapis.com/cluebenchmark/tasks/cluener_public.zip) | 12091 | game, organization, government, movie, person name, book, company, scene, position, address | CLUENER2020 dataset | |
| MSRA | [MSRA](https://www.msra.cn/) | 48442 | address, organization, person name | Open-source named entity recognition dataset from Microsoft Research Asia (MSRA) | [doushabao4766/msra_ner_k_V3_wc_bioes](https://huggingface.co/datasets/doushabao4766/msra_ner_k_V3_wc_bioes) |
| NLPCC2018_task4 | [NLPCC2018](http://tcci.ccf.org.cn/conference/2018/taskdata.php); [NLPCC2018_task4](http://tcci.ccf.org.cn/conference/2018/dldoc/trainingdata04.zip) | 21352 | singer, song, theme, emotion, style, destination, phone number, instrument, contact, age, top list, custom destination, language, scene, origin | Task-oriented dialogue system dataset | |
| CCFBDCI | [CCFBDCI (download after filling out the application form)](https://www.datafountain.cn/competitions/510/datasets) | 15723 | LOC, GPE, ORG, PER | Robustness evaluation dataset for Chinese named entity recognition algorithms | |
| MMC | [MMC](https://tianchi.aliyun.com/competition/entrance/231687/information) [MMC dataset](https://aistudio.baidu.com/datasetdetail/146995) | 3498 | entity types | Ruijin Hospital MMC competition dataset for AI-assisted knowledge graph construction | |
| WeiBo | [WeiBo](https://github.com/hltcoe/golden-horse/tree/master) | 1890 | LOC.NAM, LOC.NOM, PER.NAM, ORG.NOM, ORG.NAM, GPE.NAM, PER.NOM | Chinese named entity recognition dataset for social media | |
| ECommerce | [ECommerce](https://github.com/allanj/ner_incomplete_annotation/tree/master) | 7998 | MISC, XH, HPPX, HCCX | Named entity recognition dataset for e-commerce | |
| YouKu | [YouKu](https://github.com/allanj/ner_incomplete_annotation/tree/master) | | MISC, XH, HPPX, HCCX | Named entity recognition dataset for e-commerce | |
| FinanceSina | [FinanceSina](https://github.com/jiesutd/LatticeLSTM/tree/master) | 1579 | LOC, GPE, ORG, PER | Chinese named entity recognition dataset crawled from Sina Finance | |
| Resume | [Resume](https://github.com/jiesutd/LatticeLSTM/tree/master/ResumeNER) | 4761 | NAME, EDU, LOC, ORG, PRO, TITLE, CONT, RACE | Resumes of executives of companies listed on the Chinese stock market | |
| Bank | [Bank](https://www.heywhale.com/mw/dataset/617969ec768f3b0017862990/file) | 10000 | BANK, COMMENTS_ADJ, COMMENTS_N, PRODUCT | Bank lending dataset | |
| DLNER | [DLNER](https://github.com/lancopku/Chinese-Literature-NER-RE-Dataset/tree/master) | 28897 | Location, Thing, Abstract, Organization, Metric, Time, Physical, Person, Term | Discourse-level named entity recognition dataset | |
Reference documentation:
[Prompt Engineering Guide](https://www.promptingguide.ai/zh)
<details>
<summary>Expand to view referenced data sources</summary>
<pre><code>
[ttxy/cn_ner](https://huggingface.co/datasets/ttxy/cn_ner)
[xusenlin/clue-ner](https://huggingface.co/datasets/xusenlin/clue-ner)
[xusenlin/people-daily-ner](https://huggingface.co/datasets/xusenlin/people-daily-ner)
[peoples_daily_ner](https://huggingface.co/datasets/peoples_daily_ner)
[weibo_ner](https://huggingface.co/datasets/weibo_ner)
[Rosenberg/weibo_ner](https://huggingface.co/datasets/Rosenberg/weibo_ner)
[OneFly/NER](https://huggingface.co/datasets/OneFly/NER)
[djagatiya/ner-ontonotes-v5-eng-v4](https://huggingface.co/datasets/djagatiya/ner-ontonotes-v5-eng-v4)
[Adapting/chinese_biomedical_NER_dataset](https://huggingface.co/datasets/Adapting/chinese_biomedical_NER_dataset)
[nlhappy/CLUE-NER](https://huggingface.co/datasets/nlhappy/CLUE-NER)
[ttxy/resume_ner](https://huggingface.co/datasets/ttxy/resume_ner)
[doushabao4766/ccks_2019_ner_k_V3_wc](https://huggingface.co/datasets/doushabao4766/ccks_2019_ner_k_V3_wc)
</code></pre>
</details>
|
Cubpaw/voxelgym3D_5c_3x42x42x42_10 | 2023-09-13T15:02:08.000Z | [
"region:us"
] | Cubpaw | null | null | null | 0 | 80 | ---
dataset_info:
features:
- name: image
sequence:
sequence:
sequence:
sequence: uint8
- name: path_mask
sequence:
sequence:
sequence: float32
- name: path_rgb
sequence:
sequence:
sequence:
sequence: uint8
splits:
- name: train
num_bytes: 6331872
num_examples: 8
- name: validation
num_bytes: 1582968
num_examples: 2
download_size: 62687
dataset_size: 7914840
---
# Dataset Card for "voxelgym3D_5c_3x42x42x42_10"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
Badal/TinyStories-GPT4 | 2023-09-13T19:11:46.000Z | [
"region:us"
] | Badal | null | null | null | 0 | 80 | Entry not found |
llama2d/llama2d-top-or-bottom | 2023-10-02T00:03:01.000Z | [
"region:us"
] | llama2d | null | null | null | 0 | 80 | ---
dataset_info:
features:
- name: input_ids
sequence: float32
- name: coords
sequence:
sequence: float32
- name: labels
sequence: float32
- name: attention_mask
sequence: float32
splits:
- name: train
num_bytes: 1208000
num_examples: 500
download_size: 8541
dataset_size: 1208000
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "llama2d-top-or-bottom"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
kye/metamath-mistal-tokenized-16384 | 2023-10-05T18:28:20.000Z | [
"region:us"
] | kye | null | null | null | 1 | 80 | ---
dataset_info:
features:
- name: input_ids
sequence: int32
- name: attention_mask
sequence: int8
splits:
- name: train
num_bytes: 485833040
num_examples: 5930
download_size: 131269443
dataset_size: 485833040
---
# Dataset Card for "metamath-mistal-tokenized-16384"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
stevhliu/demo | 2022-10-24T18:02:42.000Z | [
"task_categories:summarization",
"task_categories:text2text-generation",
"annotations_creators:no-annotation",
"language_creators:found",
"multilinguality:monolingual",
"size_categories:100K<n<1M",
"source_datasets:original",
"language:en",
"license:apache-2.0",
"conditional-text-generation",
"region:us"
] | stevhliu | null | null | null | 0 | 79 | ---
annotations_creators:
- no-annotation
language_creators:
- found
language:
- en
license:
- apache-2.0
multilinguality:
- monolingual
size_categories:
- 100K<n<1M
source_datasets:
- original
task_categories:
- summarization
- text2text-generation
task_ids: []
tags:
- conditional-text-generation
---
# Dataset Card for Demo
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:**
- **Repository:**
- **Paper:**
- **Leaderboard:**
- **Point of Contact:**
### Dataset Summary
This is a demo dataset with two files `train.csv` and `test.csv`.
Load it by:
```python
from datasets import load_dataset
data_files = {"train": "train.csv", "test": "test.csv"}
demo = load_dataset("stevhliu/demo", data_files=data_files)
```
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
Thanks to [@github-username](https://github.com/<github-username>) for adding this dataset. |
huggan/CelebA-HQ | 2022-04-12T14:10:49.000Z | [
"arxiv:1710.10196",
"region:us"
] | huggan | null | null | null | 7 | 79 | # Citation
```
@article{DBLP:journals/corr/abs-1710-10196,
author = {Tero Karras and
Timo Aila and
Samuli Laine and
Jaakko Lehtinen},
title = {Progressive Growing of GANs for Improved Quality, Stability, and Variation},
journal = {CoRR},
volume = {abs/1710.10196},
year = {2017},
url = {http://arxiv.org/abs/1710.10196},
eprinttype = {arXiv},
eprint = {1710.10196},
timestamp = {Mon, 13 Aug 2018 16:46:42 +0200},
biburl = {https://dblp.org/rec/journals/corr/abs-1710-10196.bib},
bibsource = {dblp computer science bibliography, https://dblp.org}
}
``` |
juliensimon/amazon-shoe-reviews | 2023-10-09T13:22:34.000Z | [
"language:en",
"region:us"
] | juliensimon | null | null | null | 0 | 79 | ---
language: en
dataset_info:
features:
- name: labels
dtype: int64
- name: text
dtype: string
splits:
- name: train
num_bytes: 16847665.2
num_examples: 90000
- name: test
num_bytes: 1871962.8
num_examples: 10000
download_size: 0
dataset_size: 18719628.0
---
# Dataset Card for "amazon-shoe-reviews"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
merionum/ru_paraphraser | 2022-07-28T15:01:08.000Z | [
"task_categories:text-classification",
"task_categories:text-generation",
"task_categories:text2text-generation",
"task_categories:sentence-similarity",
"task_ids:semantic-similarity-scoring",
"annotations_creators:crowdsourced",
"annotations_creators:expert-generated",
"annotations_creators:machine-generated",
"language_creators:crowdsourced",
"multilinguality:monolingual",
"size_categories:1M<n<10M",
"source_datasets:original",
"language:ru",
"license:mit",
"region:us"
] | merionum | null | null | null | 5 | 79 | ---
annotations_creators:
- crowdsourced
- expert-generated
- machine-generated
language_creators:
- crowdsourced
language:
- ru
license:
- mit
multilinguality:
- monolingual
paperswithcode_id: null
pretty_name: ParaPhraser
size_categories:
- 1M<n<10M
source_datasets:
- original
task_categories:
- text-classification
- text-generation
- text2text-generation
- sentence-similarity
task_ids:
- semantic-similarity-scoring
---
# Dataset Card for ParaPhraser
### Dataset Summary
ParaPhraser is a news headlines corpus annotated according to the following schema:
```
1: precise paraphrases
0: near paraphrases
-1: non-paraphrases
```
The _Plus_ part is also available.
It contains clusters of news headline paraphrases labeled automatically by a fine-tuned paraphrase detection BERT model.
In order to load it:
```python
from datasets import load_dataset
corpus = load_dataset('merionum/ru_paraphraser', data_files='plus.jsonl')
```
## Dataset Structure
```
train: 7,227 pairs
test: 1,924 pairs
plus: 1,725,393 clusters (total: ~7m texts)
```
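A minimal loading sketch for the annotated part; it assumes the default configuration exposes the `train` and `test` splits listed above:
```python
from datasets import load_dataset

# Load the manually annotated headline pairs (as opposed to the
# automatically labeled "plus" part loaded above).
paraphraser = load_dataset('merionum/ru_paraphraser')
print(paraphraser['train'][0])
```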
### Citation Information
```
@inproceedings{pivovarova2017paraphraser,
title={ParaPhraser: Russian paraphrase corpus and shared task},
author={Pivovarova, Lidia and Pronoza, Ekaterina and Yagunova, Elena and Pronoza, Anton},
booktitle={Conference on artificial intelligence and natural language},
pages={211--225},
year={2017},
organization={Springer}
}
```
```
@inproceedings{gudkov-etal-2020-automatically,
title = "Automatically Ranked {R}ussian Paraphrase Corpus for Text Generation",
author = "Gudkov, Vadim and
Mitrofanova, Olga and
Filippskikh, Elizaveta",
booktitle = "Proceedings of the Fourth Workshop on Neural Generation and Translation",
month = jul,
year = "2020",
address = "Online",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2020.ngt-1.6",
doi = "10.18653/v1/2020.ngt-1.6",
pages = "54--59",
abstract = "The article is focused on automatic development and ranking of a large corpus for Russian paraphrase generation which proves to be the first corpus of such type in Russian computational linguistics. Existing manually annotated paraphrase datasets for Russian are limited to small-sized ParaPhraser corpus and ParaPlag which are suitable for a set of NLP tasks, such as paraphrase and plagiarism detection, sentence similarity and relatedness estimation, etc. Due to size restrictions, these datasets can hardly be applied in end-to-end text generation solutions. Meanwhile, paraphrase generation requires a large amount of training data. In our study we propose a solution to the problem: we collect, rank and evaluate a new publicly available headline paraphrase corpus (ParaPhraser Plus), and then perform text generation experiments with manual evaluation on automatically ranked corpora using the Universal Transformer architecture.",
}
```
### Contributions
Dataset maintainer:
Vadim Gudkov: [@merionum](https://github.com/merionum)
|
taln-ls2n/kpbiomed | 2022-12-01T10:52:09.000Z | [
"task_categories:text-generation",
"annotations_creators:unknown",
"language_creators:unknown",
"multilinguality:monolingual",
"size_categories:100K<n<1M",
"language:en",
"license:cc-by-nc-4.0",
"arxiv:2211.12124",
"region:us"
] | taln-ls2n | KPBiomed benchmark dataset for keyphrase extraction an generation. | \ | null | 3 | 79 | ---
annotations_creators:
- unknown
language_creators:
- unknown
language:
- en
license:
- cc-by-nc-4.0
multilinguality:
- monolingual
task_categories:
- text-mining
- text-generation
task_ids:
- keyphrase-generation
- keyphrase-extraction
size_categories:
- 100K<n<1M
pretty_name: KP-Biomed
---
# KPBiomed, A Large-Scale Dataset for keyphrase generation
## About
This dataset consists of 5.6 million abstracts with author-assigned keyphrases.
Details about the dataset can be found in the original paper:
Maël Houbre, Florian Boudin and Béatrice Daille. 2022. [A Large-Scale Dataset for Biomedical Keyphrase Generation](https://arxiv.org/abs/2211.12124). In Proceedings of the 13th International Workshop on Health Text Mining and Information Analysis (LOUHI 2022).
Reference (author-assigned) keyphrases are also categorized under the PRMU (<u>P</u>resent-<u>R</u>eordered-<u>M</u>ixed-<u>U</u>nseen) scheme as proposed in the following paper:
- Florian Boudin and Ygor Gallina. 2021.
[Redefining Absent Keyphrases and their Effect on Retrieval Effectiveness](https://aclanthology.org/2021.naacl-main.330/).
In Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 4185–4193, Online. Association for Computational Linguistics.
Text pre-processing (tokenization) is carried out using spacy (en_core_web_sm model) with a special rule to avoid splitting words with hyphens (e.g. graph-based is kept as one token). Stemming (Porter's stemmer implementation provided in nltk) is applied before reference keyphrases are matched against the source text.
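A rough sketch of this preprocessing pipeline is given below; the infix-filtering trick is an assumption about how the hyphen rule could be implemented, not the authors' exact code:
```python
import spacy
from spacy.lang.char_classes import HYPHENS
from spacy.util import compile_infix_regex
from nltk.stem import PorterStemmer

nlp = spacy.load("en_core_web_sm")
# Drop the default infix rule that splits tokens on hyphens,
# so that e.g. "graph-based" is kept as a single token.
infixes = [pattern for pattern in nlp.Defaults.infixes if HYPHENS not in pattern]
nlp.tokenizer.infix_finditer = compile_infix_regex(infixes).finditer

stemmer = PorterStemmer()

def stem_tokens(text):
    # Tokenize with the hyphen-preserving tokenizer, then Porter-stem each token.
    return [stemmer.stem(token.text) for token in nlp(text)]

print(stem_tokens("A graph-based keyphrase generation model."))
```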
## Content
The details of the dataset are in the table below:
| Split | # documents | # keyphrases by document (average) | % Present | % Reordered | % Mixed | % Unseen |
| :----------- | ----------: | ---------------------------------: | --------: | ----------: | ------: | -------: |
| Train small | 500k | 5.24 | 66.31 | 7.16 | 12.60 | 13.93 |
| Train medium | 2M | 5.24 | 66.30 | 7.18 | 12.57 | 13.95 |
| Train large | 5.6M | 5.23 | 66.32 | 7.18 | 12.55 | 13.95 |
| Validation | 20k | 5.25 | 66.44 | 7.07 | 12.45 | 14.05 |
| Test | 20k | 5.22 | 66.59 | 7.22 | 12.44 | 13.75 |
The following data fields are available:
- **id**: unique identifier of the document.
- **title**: title of the document.
- **abstract**: abstract of the document.
- **keyphrases**: list of reference keyphrases.
- **mesh terms**: list of indexer assigned MeSH terms if available (around 68% of the articles)
- **prmu**: list of <u>P</u>resent-<u>R</u>eordered-<u>M</u>ixed-<u>U</u>nseen categories for reference keyphrases.
- **authors**: list of the article's authors
- **year**: publication year
**NB**: The present keyphrases (represented by the "P" label in the PRMU column) are sorted by their order of appearance in the text (title + abstract).
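A minimal loading sketch; the configuration names (`small`, `medium`, `large`) are an assumption based on the training subsets listed above:
```python
from datasets import load_dataset

# Load the smallest training configuration; the validation and test splits
# are assumed to be exposed alongside it.
kpbiomed = load_dataset("taln-ls2n/kpbiomed", "small")
example = kpbiomed["train"][0]
print(example["title"])
print(example["keyphrases"], example["prmu"])
```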
|
Den4ikAI/russian_dialogues | 2023-03-12T07:58:54.000Z | [
"task_categories:conversational",
"size_categories:1M<n<10M",
"language:ru",
"license:mit",
"region:us"
] | Den4ikAI | null | null | null | 8 | 79 | ---
license: mit
task_categories:
- conversational
language:
- ru
size_categories:
- 1M<n<10M
---
A dataset of Russian dialogues collected from Telegram chats.
The dialogues are annotated for relevance.
Negative examples were also generated by shuffling similar answers.
Number of dialogues: 2 million.
Dataset format:
```
{
'question': 'Привет',
'answer': 'Привет, как дела?',
'relevance': 1
}
```
Parsing script: https://github.com/Den4ikAI/telegram_chat_parser
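A minimal loading sketch (the `train` split name is an assumption; the `relevance` field follows the format above):
```python
from datasets import load_dataset

dialogues = load_dataset("Den4ikAI/russian_dialogues", split="train")
# Keep only the pairs marked as relevant.
relevant_pairs = dialogues.filter(lambda example: example["relevance"] == 1)
print(relevant_pairs[0])
```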
### Citation:
```
@MISC{russian_instructions,
author = {Denis Petrov},
title = {Russian dialogues dataset for conversational agents},
url = {https://huggingface.co/datasets/Den4ikAI/russian_dialogues},
year = 2023
}
``` |
ipipan/maupqa | 2023-09-18T07:28:41.000Z | [
"task_categories:question-answering",
"task_categories:text-retrieval",
"task_ids:open-domain-qa",
"task_ids:document-retrieval",
"annotations_creators:found",
"annotations_creators:machine-generated",
"size_categories:1M<n<10M",
"language:pl",
"license:cc-by-sa-4.0",
"arxiv:2305.05486",
"arxiv:2309.08469",
"region:us"
] | ipipan | MAUPQA is a collection of datasets for Polish Open-domain Question Answering. | @inproceedings{rybak-2023-maupqa,
title = "{MAUPQA}: Massive Automatically-created {P}olish Question Answering Dataset",
author = "Rybak, Piotr",
booktitle = "Proceedings of the 9th Workshop on Slavic Natural Language Processing 2023 (SlavicNLP 2023)",
month = may,
year = "2023",
address = "Dubrovnik, Croatia",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.bsnlp-1.2",
pages = "11--16",
abstract = "Recently, open-domain question answering systems have begun to rely heavily on annotated datasets to train neural passage retrievers. However, manually annotating such datasets is both difficult and time-consuming, which limits their availability for less popular languages. In this work, we experiment with several methods for automatically collecting weakly labeled datasets and show how they affect the performance of the neural passage retrieval models. As a result of our work, we publish the MAUPQA dataset, consisting of nearly 400,000 question-passage pairs for Polish, as well as the HerBERT-QA neural retriever.",
} | null | 2 | 79 | ---
task_categories:
- question-answering
- text-retrieval
task_ids:
- open-domain-qa
- document-retrieval
language:
- pl
pretty_name: MAUPQA
size_categories:
- 1M<n<10M
annotations_creators:
- found
- machine-generated
license: cc-by-sa-4.0
---
# Dataset Card for MAUPQA Dataset
## Dataset Description
- **Paper:** [MAUPQA: Massive Automatically-created Polish Question Answering Dataset](https://arxiv.org/abs/2305.05486), [SilverRetriever: Advancing Neural Passage Retrieval for Polish Question Answering](https://arxiv.org/abs/2309.08469)
- **Point of Contact:** [Piotr Rybak](mailto:piotr.cezary.rybak@gmail.com)
### Dataset Summary
MAUPQA is a collection of 14 datasets for Polish document retrieval. Most of the datasets are either machine-generated or machine-translated from English. Across all datasets, it consists of over 1M questions, 1M positive, and 7M hard-negative question-passage pairs.
### Supported Tasks and Leaderboards
- `document-retrieval`: The dataset can be used to train a model for document retrieval. Success on this task is typically measured by [top-k retrieval accuracy](https://scikit-learn.org/stable/modules/generated/sklearn.metrics.top_k_accuracy_score.html) or [NDCG](https://scikit-learn.org/stable/modules/generated/sklearn.metrics.ndcg_score.html).
### Languages
The text is in Polish, as spoken by the [Internet users](https://github.com/facebookresearch/cc_net), [Polish Wikipedia](https://pl.wikipedia.org/) editors, or is an output of generative or translation models. The BCP-47 code for Polish is pl-PL.
## Dataset Structure
### Data Instances
The dataset consists of over 8 million question-passage pairs. For each instance, there is a `question`, a passage (`passage_title`, `passage_text`), and a boolean indicator if the passage is `relevant` for the given question (i.e. does it contain the answers).
For a small subset of questions, there is also a list of possible `answers` formulated in natural language, the way a Polish speaker would answer them.
```
{
'question_id': 1,
'question': 'Na którym kontynencie leży państwo Gujana, panie Krzysztofie?',
'answers': "['W Ameryce Południowej']",
'passage_title': 'Gujana (ujednoznacznienie)',
'passage_text': 'Gujana (region) – region Ameryki Południowej Gujana – państwo w Ameryce Południowej Gujana Brytyjska – dawna kolonia brytyjska; obecnie państwo Gujana Gujana Francuska – departament zamorski Francji; dawniej kolonia francuska Gujana Holenderska – dawna kolonia holenderska; obecnie państwo Surinam',
'relevant': True,
'passage_source': 'crawling',
'subset': '1z10'
}
```
### Data Fields
Question-passage pairs:
- `question_id`: an integer id of the question
- `question`: a string containing the question
- `passage_title`: a string containing the title of the Wikipedia article
- `passage_text`: a string containing the passage text as extracted by the human annotator
- `relevant`: a boolean flag representing whether a passage is relevant to the question (i.e. does it contain the answers)
- `annotated_by`: a string containing the name of the annotator who verified the relevance of the pair
- `answers`: a string containing a list of possible short answers to the question
- `passage_source`: a string containing the method of obtaining the passage. One of the following:
- `manual-annotation`: the question-passage pair was manually annotated
- `crawling`: the question-passage pairs were created by taking advantage of the specific structure of crawled website
- `dataset-translation`: the dataset was created by machine-translating the English dataset
- `generative-model`: the question was created by the generative model based on the given passage
- `bm25-negatives`: the passage was found by the BM25 retriever and scored using a multilingual cross-encoder to ensure it is not relevant
- `bm25-positives`: the passage was found by the BM25 retriever and scored using a multilingual cross-encoder to ensure it is relevant
- `subset`: a string containing the name of the dataset
### Data Splits
MAUPQA is a collection of 14 datasets and most of them are weakly labeled. Therefore, the intended use of MAUPQA is for training only. As such, all examples belong to a single `train` split. We recommend using the [PolQA](https://huggingface.co/datasets/ipipan/polqa) dataset for evaluation.
Basic statistics of all 14 datasets:
| dataset | # questions | # answers | # positive passages | # negative passages |
|-------------------|------------:|----------:|--------------------:|--------------------:|
| 1z10 | 22,835 | 21,415 | 22,014 | 139,471 |
| czy-wiesz-v2 | 29,078 | - | 29,078 | 143,306 |
| gpt3-cc | 10,146 | 10,146 | 10,177 | 89,203 |
| gpt3.5-cc | 29,591 | 29,583 | 29,720 | 251,959 |
| gpt3.5-wiki | 29,674 | 29,636 | 29,748 | 115,564 |
| mkqa | 4,036 | 4,036 | 3,968 | 19,814 |
| mqa | 172,768 | - | 178,131 | 1,249,659 |
| msmarco | 389,987 | - | 416,763 | 3,006,996 |
| multilingual-NLI | 100,752 | 64,900 | 68,096 | 743,857 |
| nq | 135,781 | - | 139,976 | 797,436 |
| poleval2021-pairs | 1,977 | - | 2,088 | 17,608 |
| poquad | 56,588 | 46,157 | 46,187 | 299,865 |
| templates | 15,993 | 14,504 | 15,993 | 45,228 |
| wiki-def | 18,093 | 18,092 | 18,093 | 84,956 |
| Total | 1,017,299 | 238,469 | 1,010,032 | 7,004,922 |
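As a quick-start sketch (assuming the default configuration and the fields described above), the positive pairs of a single subset can be pulled out of the train split like this:
```python
from datasets import load_dataset

# MAUPQA only ships a train split; filter it down to the pairs you need,
# e.g. the relevant (positive) question-passage pairs of the PoQuAD subset.
maupqa = load_dataset("ipipan/maupqa", split="train")
poquad_positives = maupqa.filter(
    lambda example: example["relevant"] and example["subset"] == "poquad"
)
print(len(poquad_positives), poquad_positives[0]["question"])
```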
## Dataset Creation
### Curation Rationale
Open-domain question answering systems rely heavily on annotated datasets to train neural document retrievers. However, manually annotating such datasets is both difficult and time-consuming. To overcome these difficulties, we experimented with several methods for automatically collecting weakly labeled datasets. As a result, MAUPQA enables the development of robust document retrieval systems for Polish.
### Source Data
#### Initial Data Collection and Normalization
Below, we briefly describe each dataset. For a detailed description please refer to the [paper](https://arxiv.org/abs/2305.05486).
* `1z10`: We transcribe 333 recordings of the [Jeden z Dziesięciu](https://pl.wikipedia.org/wiki/Jeden_z_dziesi%C4%99ciu) TV show using the Whisper model and extract the question-answer pairs using GPT-3.5 model. We use the BM25 retriever and the GPT-3.5-based cross-encoder to match questions with Wikipedia passages.
* `czy-wiesz-v2`: We first crawl all questions from the [Did you know?](https://pl.wikipedia.org/wiki/Wikiprojekt:Czy_wiesz/archiwum) section on Polish Wikipedia together with a link to the relevant Wikipedia article. Then, we use the [multilingual cross-encoder](https://huggingface.co/unicamp-dl/mMiniLM-L6-v2-mmarco-v2) to choose the most relevant passage.
* `gpt3-cc`: We sample random passages from [CCNet](https://github.com/facebookresearch/cc_net) corpus and prompt GPT-3 to generate a relevant question.
* `gpt3.5-cc`: We sample random passages from [CCNet](https://github.com/facebookresearch/cc_net) corpus and prompt GPT-3.5 to generate a relevant question.
* `gpt3.5-wiki`: We sample random passages from Polish Wikipedia and prompt GPT-3.5 to generate a relevant question.
* `mkqa`: We clean the Polish subset of the [MKQA](https://huggingface.co/datasets/mkqa) dataset by removing questions without answers, questions requiring long answers (*Why?* and *How?* questions), and ambiguous ones ("Who is the *current* president?"). We use the BM25 retriever and the [multilingual cross-encoder](https://huggingface.co/unicamp-dl/mMiniLM-L6-v2-mmarco-v2) to choose the most relevant passage.
* `mqa`: We clean the Polish subset of the [MQA](https://huggingface.co/datasets/clips/mqa) dataset by removing artificially created questions like "What is the best hotel in *{city}*?" for hundreds of different *cities*. To clean the dataset, we cluster lexically similar questions/passages and remove clusters with over 5 questions.
* `msmarco`: We translate the [MS MARCO](https://huggingface.co/datasets/ms_marco) dataset into Polish using the machine translation model.
* `multilingual-NLI`: We extract question-answer pairs from the Polish subset of the [multilingual-NLI](https://huggingface.co/datasets/MoritzLaurer/multilingual-NLI-26lang-2mil7) dataset. We create questions using the following template: "Czy *{premise}*?" (Eng. "Does *{premise}*?") and use hypotheses as passages. We consider `entailment` and `contradiction` labels as relevant and `neutral` as negative.
* `nq`: We translate the [NQ](https://huggingface.co/datasets/natural_questions) dataset into Polish using the machine translation model.
* `poleval2021-pairs`: We take [allegro/polish-question-passage-pairs](https://huggingface.co/datasets/allegro/polish-question-passage-pairs) without any changes.
* `poquad`: We extract question-passages pairs from the training split of the [PoQuAD](https://huggingface.co/datasets/clarin-pl/poquad) dataset.
* `templates`: We take advantage of the Wikipedia structure to generate questions using predefined templates. For example, list pages group together similar entities (e.g. "Writers born in Poland") which allow generating questions like "Where was *{writer name}* born?". In total, we use 33 templates to generate questions. We use the [multilingual cross-encoder](https://huggingface.co/unicamp-dl/mMiniLM-L6-v2-mmarco-v2) to choose the most relevant passage from the linked article.
* `wiki-def`: We use [Wiktionary](https://www.wiktionary.org/) to generate questions based on word definitions. We use definitions that have links to Wikipedia articles to create the question-passage pairs. For example, the definition of "Monday" is "the first day of the week". Based on it, we generate the question "What is the name of *the first day of the week*?".
Additionally, we extend each dataset by sampling hard negative passages using a BM25 retriever and scoring them with a [multilingual cross-encoder](https://huggingface.co/unicamp-dl/mMiniLM-L6-v2-mmarco-v2) to ensure that the passages are not relevant.
#### Who are the source language producers?
The text is in Polish, as spoken by the [Internet users](https://github.com/facebookresearch/cc_net), [Polish Wikipedia](https://pl.wikipedia.org/) editors, or is an output of generative or translation models.
### Annotations
#### Annotation process
The MAUPQA dataset doesn't provide additional annotation except for the annotation present in the source datasets.
#### Who are the annotators?
Please refer to the description of the source datasets.
### Personal and Sensitive Information
The dataset should not contain any personal or sensitive information. However, we use the [CCNet](https://github.com/facebookresearch/cc_net) dataset as a source of passages that we didn't manually inspect for personal and sensitive information.
## Considerations for Using the Data
### Social Impact of Dataset
This dataset was created to promote the research in the open-domain question answering for Polish and allow developing question answering systems.
### Discussion of Biases
The machine-translated datasets might not represent the natural language as used by native Polish speakers. Similarly, the questions generated by the generative models might not be representative or correct.
Most of the question-passage pairs are created automatically using the BM25 retriever, and as such the dataset is biased toward lexically similar pairs.
### Other Known Limitations
The MAUPQA dataset is mostly automatically generated and can therefore contain a high proportion of noise and incorrectly labeled question-passage pairs.
## Additional Information
### Dataset Curators
The MAUPQA dataset was collected by Piotr Rybak and Maciej Ogrodniczuk from the [Institute of Computer Science, Polish Academy of Sciences](http://zil.ipipan.waw.pl/) but the source datasets were created by many more researchers. Please refer to the original dataset descriptions for the full authorship.
This work was supported by the European Regional Development Fund as a part of 2014–2020 Smart Growth Operational Programme, CLARIN — Common Language Resources and Technology Infrastructure, project no. POIR.04.02.00-00C002/19.
### Licensing Information
CC BY-SA 4.0
### Citation Information
```
@inproceedings{rybak-2023-maupqa,
title = "{MAUPQA}: Massive Automatically-created {P}olish Question Answering Dataset",
author = "Rybak, Piotr",
booktitle = "Proceedings of the 9th Workshop on Slavic Natural Language Processing 2023 (SlavicNLP 2023)",
month = may,
year = "2023",
address = "Dubrovnik, Croatia",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.bsnlp-1.2",
pages = "11--16",
abstract = "Recently, open-domain question answering systems have begun to rely heavily on annotated datasets to train neural passage retrievers. However, manually annotating such datasets is both difficult and time-consuming, which limits their availability for less popular languages. In this work, we experiment with several methods for automatically collecting weakly labeled datasets and show how they affect the performance of the neural passage retrieval models. As a result of our work, we publish the MAUPQA dataset, consisting of nearly 400,000 question-passage pairs for Polish, as well as the HerBERT-QA neural retriever.",
}
```
```
@misc{rybak2023silverretriever,
title={SilverRetriever: Advancing Neural Passage Retrieval for Polish Question Answering},
author={Piotr Rybak and Maciej Ogrodniczuk},
year={2023},
eprint={2309.08469},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
``` |
mstz/lrs | 2023-04-21T23:10:35.000Z | [
"task_categories:tabular-classification",
"size_categories:n<1k",
"language:en",
"license:cc",
"lrs",
"tabular_classification",
"binary_classification",
"multiclass_classification",
"UCI",
"region:us"
] | mstz | null | @misc{misc_lrs_53,
author = {Fisher,R. A. & Fisher,R.A.},
title = {{Lrs}},
year = {1988},
howpublished = {UCI Machine Learning Repository},
note = {{DOI}: \\url{10.24432/C56C76}}
} | null | 0 | 79 | ---
language:
- en
tags:
- lrs
- tabular_classification
- binary_classification
- multiclass_classification
- UCI
pretty_name: Lrs
size_categories:
- n<1k
task_categories:
- tabular-classification
configs:
- lrs
- lrs_0
- lrs_1
- lrs_2
- lrs_3
- lrs_4
- lrs_5
- lrs_6
- lrs_7
- lrs_8
license: cc
---
# Lrs
The [Lrs dataset](https://archive-beta.ics.uci.edu/dataset/93/low+resolution+spectrometer) from the [UCI repository](https://archive-beta.ics.uci.edu).
# Configurations and tasks
| **Configuration** | **Task** | **Description** |
|-------------------|---------------------------|------------------------------|
| lrs | Multiclass classification | Classify lrs type. |
| lrs_0 | Binary classification | Is this instance of class 0? |
| lrs_1 | Binary classification | Is this instance of class 1? |
| lrs_2 | Binary classification | Is this instance of class 2? |
| lrs_3 | Binary classification | Is this instance of class 3? |
| lrs_4 | Binary classification | Is this instance of class 4? |
| lrs_5 | Binary classification | Is this instance of class 5? |
| lrs_6 | Binary classification | Is this instance of class 6? |
| lrs_7 | Binary classification | Is this instance of class 7? |
| lrs_8 | Binary classification | Is this instance of class 8? |
# Usage
```python
from datasets import load_dataset
dataset = load_dataset("mstz/lrs", "lrs")["train"]
``` |
LinkSoul/instruction_merge_set | 2023-06-01T03:19:51.000Z | [
"region:us"
] | LinkSoul | null | null | null | 105 | 79 | ---
dataset_info:
features:
- name: id
dtype: string
- name: conversations
list:
- name: from
dtype: string
- name: value
dtype: string
- name: instruction
dtype: string
splits:
- name: train
num_bytes: 13444870155
num_examples: 10077297
download_size: 3542585235
dataset_size: 13444870155
---
# Dataset Card for "instruction_merge_set"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
grantprice/CriticalRoleTranscripts | 2023-06-14T18:56:45.000Z | [
"region:us"
] | grantprice | null | null | null | 0 | 79 | Entry not found |
baber/logiqa2 | 2023-08-01T00:52:03.000Z | [
"task_categories:multiple-choice",
"language:en",
"language:zh",
"license:cc-by-sa-4.0",
"arxiv:2304.03439",
"region:us"
] | baber | The dataset is an amendment and re-annotation of LogiQA in 2020, a large-scale logical reasoning reading comprehension dataset adapted from the Chinese Civil Service Examination. We increase the data size, refine the texts with manual translation by professionals, and improve the quality by removing items with distinctive cultural features like Chinese idioms. Furthermore, we conduct a fine-grained annotation on the dataset and turn it into a two-way natural language inference (NLI) task, resulting in 35k premise-hypothesis pairs with gold labels, making it the first large-scale NLI dataset for complex logical reasoning | @ARTICLE{10174688,
author={Liu, Hanmeng and Liu, Jian and Cui, Leyang and Teng, Zhiyang and Duan, Nan and Zhou, Ming and Zhang, Yue},
journal={IEEE/ACM Transactions on Audio, Speech, and Language Processing},
title={LogiQA 2.0 — An Improved Dataset for Logical Reasoning in Natural Language Understanding},
year={2023},
volume={},
number={},
pages={1-16},
doi={10.1109/TASLP.2023.3293046}} | null | 3 | 79 | ---
license: cc-by-sa-4.0
task_categories:
- multiple-choice
language:
- en
- zh
pretty_name: LogiQA2.0
data_splits:
- train
- validation
- test
---
# Dataset Card for Dataset Name
## Dataset Description
- **Homepage:** https://github.com/csitfun/LogiQA2.0, https://github.com/csitfun/LogiEval
- **Repository:** https://github.com/csitfun/LogiQA2.0, https://github.com/csitfun/LogiEval
- **Paper:** https://ieeexplore.ieee.org/abstract/document/10174688
### Dataset Summary
LogiQA 2.0: a dataset for logical reasoning in machine reading comprehension (MRC) and natural language inference (NLI) tasks.
LogiEval: a benchmark suite for testing the logical reasoning abilities of instruction-prompted large language models.
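A minimal loading sketch, assuming the repository exposes a default configuration with the train/validation/test splits listed in the metadata above:
```python
from datasets import load_dataset

# Load all splits of the (assumed) default configuration.
logiqa2 = load_dataset("baber/logiqa2")
print(logiqa2)
print(logiqa2["train"][0])
```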
### Licensing Information
Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License.
### Citation Information
```
@ARTICLE{10174688,
  author={Liu, Hanmeng and Liu, Jian and Cui, Leyang and Teng, Zhiyang and Duan, Nan and Zhou, Ming and Zhang, Yue},
  journal={IEEE/ACM Transactions on Audio, Speech, and Language Processing},
  title={LogiQA 2.0 — An Improved Dataset for Logical Reasoning in Natural Language Understanding},
  year={2023},
  volume={},
  number={},
  pages={1-16},
  doi={10.1109/TASLP.2023.3293046}}
```
```
@misc{liu2023evaluating,
  title={Evaluating the Logical Reasoning Ability of ChatGPT and GPT-4},
  author={Hanmeng Liu and Ruoxi Ning and Zhiyang Teng and Jian Liu and Qiji Zhou and Yue Zhang},
  year={2023},
  eprint={2304.03439},
  archivePrefix={arXiv},
  primaryClass={cs.CL}
}
``` |
kaxap/llama2-sql-instruct | 2023-08-04T23:06:50.000Z | [
"license:cc-by-nc-4.0",
"region:us"
] | kaxap | null | null | null | 0 | 79 | ---
license: cc-by-nc-4.0
---
|
openaccess-ai-collective/oo-gpt4-filtered | 2023-08-05T04:00:44.000Z | [
"region:us"
] | openaccess-ai-collective | null | null | null | 2 | 79 | Entry not found |
mnazari/nena_speech_1_0_test | 2023-09-23T01:00:28.000Z | [
"task_categories:automatic-speech-recognition",
"task_categories:text-to-speech",
"task_categories:translation",
"annotations_creators:crowdsourced",
"annotations_creators:Geoffrey Khan",
"language_creators:crowdsourced",
"multilinguality:multilingual",
"size_categories:10K<n<100K",
"size_categories:1K<n<10K",
"size_categories:n<1K",
"language:aii",
"language:cld",
"language:huy",
"language:lsd",
"language:trg",
"language:aij",
"language:bhn",
"language:hrt",
"language:kqd",
"language:syn",
"license:cc0-1.0",
"region:us"
] | mnazari | null | null | null | 0 | 79 | ---
pretty_name: NENA Speech Dataset 1.0 (test)
annotations_creators:
- crowdsourced
- Geoffrey Khan
language_creators:
- crowdsourced
language:
- aii
- cld
- huy
- lsd
- trg
- aij
- bhn
- hrt
- kqd
- syn
license:
- cc0-1.0
multilinguality:
- multilingual
task_categories:
- automatic-speech-recognition
- text-to-speech
- translation
size_categories:
- 10K<n<100K
- 1K<n<10K
- n<1K
---
# Dataset Card for NENA Speech Dataset 1.0 (test)
## Table of Contents
- [Dataset Summary](#dataset-summary)
- [Dataset Description](#dataset-description)
- [Languages](#languages)
- [How to Use](#how-to-use)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
<!-- - [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations) -->
- [Building the Dataset](#building-the-dataset)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
<!-- - [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations) -->
- [Additional Information](#additional-information)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
## ⚠️ This is a temporary repository that will be replaced by the end of 2023
## Dataset Summary
NENA Speech is a multimodal dataset to help teach machines how real people speak the Northeastern Neo-Aramaic (NENA) dialects.
The NENA dialects form a very diverse group of Aramaic dialects spoken by Christian and Jewish communities indigenous to northwestern Iran, northern Iraq, and southeastern Türkiye.
NENA Speech consists of multimodal examples of speech in the NENA dialects. While all documented NENA dialects are included, not all have data yet, and some never will, due to the recent loss of their final speakers.
## Dataset Description
- **Homepage**: https://crowdsource.nenadb.dev/
- **Point of Contact:** [Matthew Nazari](mailto:matthewnazari@college.harvard.edu)
## Languages
The NENA dialects form a very diverse group of Aramaic dialects spoken by Christian and Jewish communities indigenous to northwestern Iran, northern Iraq, and southeastern Türkiye.
Speakers of the Christian dialects call their language Assyrian and Chaldean in English. In their language these speakers use multiple different terms (e.g. suráy, sureth, ḥadiṯan, senaya). Speakers of the Jewish dialects call their language lišana deni, lišanət noshan, lišana nosha, lišana didan, all meaning "our language". Some names reflect the consciousness of it being a specifically Jewish language (e.g. lišan hozaye, hulaula).
NENA Speech has a subset for all of the over 150 NENA dialects. Not all dialects have examples available yet. Some dialects will never have examples available due to the loss of their final speakers in recent years.
## How to Use
The `datasets` library allows you to load and pre-process your dataset in pure Python, at scale. The dataset can be downloaded and prepared in one call to your local drive by using the `load_dataset` function.
For example, simply specify the corresponding language config name (e.g., "urmi (christian)" for the dialect of the Assyrian Christians of Urmi):
```python
from datasets import load_dataset
nena_speech = load_dataset("mnazari/nena_speech_1_0_test", "urmi (christian)", split="train")
```
To find out more about loading and preparing audio datasets, head over to [hf.co/blog/audio-datasets](https://huggingface.co/blog/audio-datasets).
## Dataset Structure
### Data Instances
The NENA Speech dataset is a multimodal dataset that consists of three different kinds of examples:
1. **Unlabeled speech examples:** these contain audio of speech (`audio`) but no accompanying transcription (`transcription`) or translation (`translation`). This is useful for representation learning.
2. **Transcribed speech examples:** these contain both audio and transcription of speech. These are useful for machine learning tasks like automatic speech recognition and speech synthesis.
3. **Transcribed and translated speech examples:** these kinds of examples contain audio, transcription, and translation of speech. These are useful for tasks like multimodal translation.
Make sure to filter for the kinds of examples you need for your task before using it.
```json
{
  "transcription": "gu-mdìta.ˈ",
  "translation": "in the town.",
  "audio": {
    "path": "et/train/nena_speech_0uk14ofpom196aj.mp3",
    "array": array([-0.00048828, -0.00018311, -0.00137329, ..., 0.00079346, 0.00091553, 0.00085449], dtype=float32),
    "sampling_rate": 48000
  },
  "locale": "IRN",
  "proficiency": "proficient as mom",
  "age": "70's",
  "crowdsourced": true,
  "unlabeled": true,
  "interrupted": true,
  "client_id": "gwurt1g1ln",
  "path": "et/train/nena_speech_0uk14ofpom196aj.mp3"
}
```
### Data Fields
- `transcription (string)`: The transcription of what was spoken (e.g. `"beta"`)
- `translation (string)`: The translation of what was spoken in English (e.g. `"house"`)
- `audio (dict)`: A dictionary containing the path to the downloaded audio file, the decoded audio array, and the sampling rate. Note that when accessing the audio column: `dataset[0]["audio"]` the audio file is automatically decoded and resampled to `dataset.features["audio"].sampling_rate`. Decoding and resampling of a large number of audio files might take a significant amount of time. Thus it is important to first query the sample index before the "audio" column, i.e. `dataset[0]["audio"]` should always be preferred over `dataset["audio"][0]`.
- `locale (string)`: The locale of the speaker
- `proficiency (string)`: The proficiency of the speaker
- `age (string)`: The age of the speaker (e.g. `"20's"`, `"50's"`, `"100+"`)
- `crowdsourced (bool)`: Indicates whether the example was crowdsourced as opposed to collected from existing language documentation resources
- `interrupted (bool)`: Indicates whether the example was interrupted with the speaker making sound effects or switching into another language
- `client_id (string)`: An id for which client (voice) made the recording
- `path (string)`: The path to the audio file
### Data Splits
The examples have been subdivided into three portions:
1. **dev:** the validation split (10%)
2. **test:** the test split (10%)
3. **train:** the train split (80%)
All three splits contain data that has been reviewed and deemed of high quality.
## Dataset Creation
<!-- ### Curation Rationale
[Needs More Information]
### Source Data
#### Language Documentation Resources
[Needs More Information]
#### Webscraping Facebook
[Needs More Information]
#### Crowdsourcing
[Needs More Information]
### Annotations
[Needs More Information] -->
### Building the Dataset
The NENA Speech dataset itself is built using `build.py`.
First, install the necessary requirements.
```
pip install -r requirements.txt
```
Next, build the dataset.
```
python build.py --build
```
Finally, push to the HuggingFace dataset repository.
## Personal and Sensitive Information
The dataset consists of recordings from people who have donated their voice online. You agree not to attempt to determine the identity of speakers in the NENA Speech dataset.
## Data Preprocessing
The dataset consists of three different kinds of examples (see [Data Instances](#data-instances)).
Make sure to filter for the kinds of examples you need for your task before using it. For example, for automatic speech recognition you will want to filter for examples with transcriptions.
In most tasks, you will want to filter out examples that are interrupted (e.g. by the speaker making sound effects or switching into another language).
```python
from datasets import load_dataset
ds = load_dataset("mnazari/nena_speech_1_0_test", "urmi (christian)", split="train")
def filter_for_asr(example):
    return example['transcription'] and not example['interrupted']

ds = ds.filter(filter_for_asr, desc="filter dataset")
```
Transcriptions include markers of linguistic and acoustic features which may be removed in certain tasks (e.g. word stress, nuclear stress, intonation group markers, vowel length).
```python
from datasets import load_dataset
ds = load_dataset("mnazari/nena_speech_1_0_test", "urmi (christian)", split="train")
def prepare_dataset(batch):
    chars_to_remove = ['ˈ', '̀', '́', '̄', '̆', '.', ',', '?', '!']
    for char in chars_to_remove:
        batch["transcription"] = batch["transcription"].replace(char, "")
    return batch
ds = ds.map(prepare_dataset, desc="preprocess dataset")
```
<!-- ## Considerations for Using the Data
### Social Impact of Dataset
[Needs More Information]
### Discussion of Biases
[Needs More Information]
### Other Known Limitations
[Needs More Information] -->
## Additional Information
### Licensing Information
Public Domain, [CC-0](https://creativecommons.org/share-your-work/public-domain/cc0/).
### Citation Information
This work has not yet been published.
|
europa_ecdc_tm | 2022-11-03T16:31:26.000Z | [
"task_categories:translation",
"annotations_creators:expert-generated",
"language_creators:expert-generated",
"multilinguality:translation",
"size_categories:1K<n<10K",
"source_datasets:original",
"language:bg",
"language:cs",
"language:da",
"language:de",
"language:el",
"language:en",
"language:es",
"language:et",
"language:fi",
"language:fr",
"language:ga",
"language:hu",
"language:is",
"language:it",
"language:lt",
"language:lv",
"language:mt",
"language:nl",
"language:no",
"language:pl",
"language:pt",
"language:ro",
"language:sk",
"language:sl",
"language:sv",
"license:cc-by-sa-4.0",
"region:us"
] | null | In October 2012, the European Union (EU) agency 'European Centre for Disease Prevention and Control' (ECDC) released a translation memory (TM), i.e. a collection of sentences and their professionally produced translations, in twenty-five languages. This resource bears the name ECDC Translation Memory, or ECDC-TM for short.
ECDC-TM covers 25 languages: the 23 official languages of the EU plus Norwegian (Norsk) and Icelandic. ECDC-TM was created by translating from English into the following 24 languages: Bulgarian, Czech, Danish, Dutch, English, Estonian, Gaeilge (Irish), German, Greek, Finnish, French, Hungarian, Icelandic, Italian, Latvian, Lithuanian, Maltese, Norwegian (Norsk), Polish, Portuguese, Romanian, Slovak, Slovenian, Spanish and Swedish.
All documents and sentences were thus originally written in English. They were then translated into the other languages by professional translators from the Translation Centre CdT in Luxembourg. | @Article{Steinberger2014,
author={Steinberger, Ralf
and Ebrahim, Mohamed
and Poulis, Alexandros
and Carrasco-Benitez, Manuel
and Schl{\"u}ter, Patrick
and Przybyszewski, Marek
and Gilbro, Signe},
title={An overview of the European Union's highly multilingual parallel corpora},
journal={Language Resources and Evaluation},
year={2014},
month={Dec},
day={01},
volume={48},
number={4},
pages={679-707},
issn={1574-0218},
doi={10.1007/s10579-014-9277-0},
url={https://doi.org/10.1007/s10579-014-9277-0}
} | null | 1 | 78 | ---
annotations_creators:
- expert-generated
language_creators:
- expert-generated
language:
- bg
- cs
- da
- de
- el
- en
- es
- et
- fi
- fr
- ga
- hu
- is
- it
- lt
- lv
- mt
- nl
- 'no'
- pl
- pt
- ro
- sk
- sl
- sv
license:
- cc-by-sa-4.0
multilinguality:
- translation
size_categories:
- 1K<n<10K
source_datasets:
- original
task_categories:
- translation
task_ids: []
paperswithcode_id: null
pretty_name: EuropaEcdcTm
dataset_info:
- config_name: en2bg
features:
- name: translation
dtype:
translation:
languages:
- en
- bg
splits:
- name: train
num_bytes: 798444
num_examples: 2567
download_size: 4286636
dataset_size: 798444
- config_name: en2cs
features:
- name: translation
dtype:
translation:
languages:
- en
- cs
splits:
- name: train
num_bytes: 585423
num_examples: 2562
download_size: 4286636
dataset_size: 585423
- config_name: en2da
features:
- name: translation
dtype:
translation:
languages:
- en
- da
splits:
- name: train
num_bytes: 545106
num_examples: 2577
download_size: 4286636
dataset_size: 545106
- config_name: en2de
features:
- name: translation
dtype:
translation:
languages:
- en
- de
splits:
- name: train
num_bytes: 588974
num_examples: 2560
download_size: 4286636
dataset_size: 588974
- config_name: en2el
features:
- name: translation
dtype:
translation:
languages:
- en
- el
splits:
- name: train
num_bytes: 849151
num_examples: 2530
download_size: 4286636
dataset_size: 849151
- config_name: en2es
features:
- name: translation
dtype:
translation:
languages:
- en
- es
splits:
- name: train
num_bytes: 582798
num_examples: 2564
download_size: 4286636
dataset_size: 582798
- config_name: en2et
features:
- name: translation
dtype:
translation:
languages:
- en
- et
splits:
- name: train
num_bytes: 543554
num_examples: 2581
download_size: 4286636
dataset_size: 543554
- config_name: en2fi
features:
- name: translation
dtype:
translation:
languages:
- en
- fi
splits:
- name: train
num_bytes: 573069
num_examples: 2617
download_size: 4286636
dataset_size: 573069
- config_name: en2fr
features:
- name: translation
dtype:
translation:
languages:
- en
- fr
splits:
- name: train
num_bytes: 595489
num_examples: 2561
download_size: 4286636
dataset_size: 595489
- config_name: en2ga
features:
- name: translation
dtype:
translation:
languages:
- en
- ga
splits:
- name: train
num_bytes: 286362
num_examples: 1356
download_size: 4286636
dataset_size: 286362
- config_name: en2hu
features:
- name: translation
dtype:
translation:
languages:
- en
- hu
splits:
- name: train
num_bytes: 600536
num_examples: 2571
download_size: 4286636
dataset_size: 600536
- config_name: en2is
features:
- name: translation
dtype:
translation:
languages:
- en
- is
splits:
- name: train
num_bytes: 557055
num_examples: 2511
download_size: 4286636
dataset_size: 557055
- config_name: en2it
features:
- name: translation
dtype:
translation:
languages:
- en
- it
splits:
- name: train
num_bytes: 576797
num_examples: 2534
download_size: 4286636
dataset_size: 576797
- config_name: en2lt
features:
- name: translation
dtype:
translation:
languages:
- en
- lt
splits:
- name: train
num_bytes: 645429
num_examples: 2545
download_size: 4286636
dataset_size: 645429
- config_name: en2lv
features:
- name: translation
dtype:
translation:
languages:
- en
- lv
splits:
- name: train
num_bytes: 576217
num_examples: 2542
download_size: 4286636
dataset_size: 576217
- config_name: en2mt
features:
- name: translation
dtype:
translation:
languages:
- en
- mt
splits:
- name: train
num_bytes: 608263
num_examples: 2539
download_size: 4286636
dataset_size: 608263
- config_name: en2nl
features:
- name: translation
dtype:
translation:
languages:
- en
- nl
splits:
- name: train
num_bytes: 569643
num_examples: 2510
download_size: 4286636
dataset_size: 569643
- config_name: en2no
features:
- name: translation
dtype:
translation:
languages:
- en
- 'no'
splits:
- name: train
num_bytes: 536725
num_examples: 2537
download_size: 4286636
dataset_size: 536725
- config_name: en2pl
features:
- name: translation
dtype:
translation:
languages:
- en
- pl
splits:
- name: train
num_bytes: 644402
num_examples: 2546
download_size: 4286636
dataset_size: 644402
- config_name: en2pt
features:
- name: translation
dtype:
translation:
languages:
- en
- pt
splits:
- name: train
num_bytes: 583638
num_examples: 2531
download_size: 4286636
dataset_size: 583638
- config_name: en2ro
features:
- name: translation
dtype:
translation:
languages:
- en
- ro
splits:
- name: train
num_bytes: 585159
num_examples: 2555
download_size: 4286636
dataset_size: 585159
- config_name: en2sk
features:
- name: translation
dtype:
translation:
languages:
- en
- sk
splits:
- name: train
num_bytes: 627797
num_examples: 2525
download_size: 4286636
dataset_size: 627797
- config_name: en2sl
features:
- name: translation
dtype:
translation:
languages:
- en
- sl
splits:
- name: train
num_bytes: 594027
num_examples: 2545
download_size: 4286636
dataset_size: 594027
- config_name: en2sv
features:
- name: translation
dtype:
translation:
languages:
- en
- sv
splits:
- name: train
num_bytes: 546349
num_examples: 2527
download_size: 4286636
dataset_size: 546349
---
# Dataset Card for [Dataset Name]
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [https://ec.europa.eu/jrc/en/language-technologies/ecdc-translation-memory](https://ec.europa.eu/jrc/en/language-technologies/ecdc-translation-memory)
- **Paper:** [https://link.springer.com/article/10.1007/s10579-014-9277-0](https://link.springer.com/article/10.1007/s10579-014-9277-0)
- **Point of Contact:** [Ralf Steinberger](mailto:Ralf.Steinberger@jrc.ec.europa.eu)
### Dataset Summary
In October 2012, the European Union (EU) agency 'European Centre for Disease Prevention and Control' (ECDC) released a translation memory (TM), i.e. a collection of sentences and their professionally produced translations, in twenty-five languages.
ECDC-TM covers 25 languages: the 23 official languages of the EU plus Norwegian (Norsk) and Icelandic. ECDC-TM was created by translating from English into the following 24 languages: Bulgarian, Czech, Danish, Dutch, English, Estonian, Gaeilge (Irish), German, Greek, Finnish, French, Hungarian, Icelandic, Italian, Latvian, Lithuanian, Maltese, Norwegian (Norsk), Polish, Portuguese, Romanian, Slovak, Slovenian, Spanish and Swedish.
All documents and sentences were originally written in English. They were then translated into the other languages by professional translators from the Translation Centre CdT in Luxembourg.
To load a language pair that is not part of the predefined configurations, just specify the language codes as a language pair. For example, if you want to translate Czech to Greek:
`dataset = load_dataset("europa_ecdc_tm", language_pair=("cs", "el"))`
### Supported Tasks and Leaderboards
- `text2text-generation`: the dataset can be used to train a model for `machine-translation`. Machine translation models are usually evaluated using metrics such as [BLEU](https://huggingface.co/metrics/bleu), [ROUGE](https://huggingface.co/metrics/rouge) or [SacreBLEU](https://huggingface.co/metrics/sacrebleu). You can use the [mBART](https://huggingface.co/facebook/mbart-large-cc25) model for this task. This task has active leaderboards which can be found at [https://paperswithcode.com/task/machine-translation](https://paperswithcode.com/task/machine-translation), which usually rank models based on [BLEU score](https://huggingface.co/metrics/bleu).
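For example, a minimal sketch of scoring candidate translations with SacreBLEU via the `evaluate` library (the example sentences below are made up):
```python
import evaluate

sacrebleu = evaluate.load("sacrebleu")
predictions = ["Le rapport a été publié par le centre."]   # model output
references = [["Le centre a publié le rapport annuel."]]   # gold translation(s)
result = sacrebleu.compute(predictions=predictions, references=references)
print(result["score"])  # corpus-level BLEU score
```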
### Languages
All documents and sentences were originally written in English (`en`). They were then translated into the other languages by professional translators from the Translation Centre CdT in Luxembourg.
Translations are available in these languages: `en`, `bg`, `cs`, `da`, `de`, `el`, `en`, `es`, `et`, `fi`, `fr`, `ga`, `hu`, `is`, `it`, `lt`, `lv`, `mt`, `nl`, `no`, `pl`, `pt`, `ro`, `sk`, `sl`, `sv`.
## Dataset Structure
### Data Instances
```
{
"translation": {
"<source_language>":"Sentence to translate",
"<target_language>": "Translated sentence",
},
}
```
### Data Fields
- `translation`: a multilingual `string` variable, with possible languages including `en`, `bg`, `cs`, `da`, `de`, `el`, `en`, `es`, `et`, `fi`, `fr`, `ga`, `hu`, `is`, `it`, `lt`, `lv`, `mt`, `nl`, `no`, `pl`, `pt`, `ro`, `sk`, `sl`, `sv`.
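For instance, a minimal sketch of reading a sentence pair from one of the predefined configurations (here `en2fr`):
```python
from datasets import load_dataset

# Each example holds a single "translation" dict keyed by language code.
ecdc_tm = load_dataset("europa_ecdc_tm", "en2fr", split="train")
pair = ecdc_tm[0]["translation"]
print(pair["en"])
print(pair["fr"])
```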
### Data Splits
The data is not split (only the `train` split is available).
## Dataset Creation
### Curation Rationale
The ECDC-TM is relatively small compared to the JRC-Acquis and to DGT-TM, but it has the advantage that it focuses on a very different domain, namely that of public health. Also, it includes translation units for the languages Irish (Gaeilge, GA), Norwegian (Norsk, NO) and Icelandic (IS).
### Source Data
#### Initial Data Collection and Normalization
ECDC-TM was built on the basis of the website of the European Centre for Disease Prevention and Control (ECDC). The major part of the documents talks about health-related topics (anthrax, botulism, cholera, dengue fever, hepatitis, etc.), but some of the web pages also describe the organisation ECDC (e.g. its organisation, job opportunities) and its activities (e.g. epidemic intelligence, surveillance).
#### Who are the source language producers?
All documents and sentences were originally written in English, by the ECDC website content producers.
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
All documents and sentences were thus originally written in English. They were then translated into the other languages by professional translators from the Translation Centre CdT in Luxembourg.
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
Contains translations of sentences in the public healthcare domain, including technical terms (disease and treatment names for example).
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
Copyright © EU / ECDC, 2020
#### Copyright
The Work (as defined below) is provided under the terms of this Licence (or later versions of
this Licence published by the European Commission). The work is protected by copyright
and/or other applicable law. Any use of the work other than as authorised under this
Licence or copyright law is prohibited.
The terms provided herein conform to the reuse policy established by the Commission's
Reuse Decision (2011/833/EU).
By exercising any rights to the work provided here, you accept and agree to be bound by the
terms of this Licence. The Owner (as defined below) grants You the rights conferred by this
Licence in consideration of your acceptance of such terms and conditions.
#### Definitions
The ‘Owner’ shall mean jointly the European Union represented by the European
Commission and the European Centre for Disease Prevention and Control, which are the
original licensors and/or control the copyright and any other intellectual and industrial
property rights related to the Work.
The ‘Work’ is the information and/or data offered to You under this Licence, according to
the ‘Copyright Notice’:
Copyright (c) EU/ECDC, <YEAR>
‘You’ means the natural or legal person, or body of persons corporate or incorporate,
acquiring rights under this Licence.
‘Use’ means any act which is restricted by copyright or database rights, whether in the
original medium or in any other medium, and includes, without limitation, distributing,
copying, adapting, or modifying as may be technically necessary to use the Work in a
different mode or format. It includes ‘re‐Use’, meaning the use, communication to the
public and/or distribution of the Works for purposes other than the initial purpose for which
the Work was produced.
#### Rights
You are herewith granted a worldwide, royalty‐free, perpetual, non‐exclusive Licence to Use
and re‐Use the Works and any modifications thereof for any commercial and non‐
commercial purpose allowed by the law, provided that the following conditions are met:
a) Unmodified distributions must retain the above Copyright Notice;
b) Unmodified distributions must retain the following ‘No Warranty’ disclaimer;
c) You will not use the name of the Owner to endorse or promote products and
services derived from Use of the Work without specific prior written permission.
#### No warranty
Each Work is provided ‘as is’ without, to the full extent permitted by law, representations,
warranties, obligations and liabilities of any kind, either express or implied, including, but
not limited to, any implied warranty of merchantability, integration, satisfactory quality and
fitness for a particular purpose.
Except in the cases of wilful misconduct or damages directly caused to natural persons, the
Owner will not be liable for any incidental, consequential, direct or indirect damages,
including, but not limited to, the loss of data, lost profits or any other financial loss arising
from the use of, or inability to use, the Work even if the Owner has been notified of the
possibility of such loss, damages, claims or costs, or for any claim by any third party. The
Owner may be liable under national statutory product liability laws as far as such laws apply
to the Work.
### Citation Information
```
@Article{Steinberger2014,
author={Steinberger, Ralf
and Ebrahim, Mohamed
and Poulis, Alexandros
and Carrasco-Benitez, Manuel
and Schl{\"u}ter, Patrick
and Przybyszewski, Marek
and Gilbro, Signe},
title={An overview of the European Union's highly multilingual parallel corpora},
journal={Language Resources and Evaluation},
year={2014},
month={Dec},
day={01},
volume={48},
number={4},
pages={679-707},
issn={1574-0218},
doi={10.1007/s10579-014-9277-0},
url={https://doi.org/10.1007/s10579-014-9277-0}
}
```
### Contributions
Thanks to [@SBrandeis](https://github.com/SBrandeis) for adding this dataset. |
time_dial | 2022-11-03T16:07:53.000Z | [
"task_categories:text-classification",
"task_ids:multi-label-classification",
"annotations_creators:expert-generated",
"language_creators:expert-generated",
"multilinguality:monolingual",
"size_categories:10K<n<100K",
"source_datasets:original",
"language:en",
"license:cc-by-nc-sa-4.0",
"dialog-act-classification",
"arxiv:2106.04571",
"region:us"
] | null | TimeDial presents a crowdsourced English challenge set, for temporal commonsense reasoning, formulated
as a multiple choice cloze task with around 1.5k carefully curated dialogs. The dataset is derived from
the DailyDialog (Li et al., 2017), which is a multi-turn dialog corpus.
In order to establish strong baselines and provide information on future model development, we
conducted extensive experiments with state-of-the-art LMs. While humans can easily answer these
questions (97.8%), the best T5 model variant struggles on this challenge set (73%). Moreover, our
qualitative error analyses show that the models often rely on shallow, spurious features (particularly text
matching), instead of truly doing reasoning over the context. | @inproceedings{qin-etal-2021-timedial,
title = "{TimeDial: Temporal Commonsense Reasoning in Dialog}",
author = "Qin, Lianhui and Gupta, Aditya and Upadhyay, Shyam and He, Luheng and Choi, Yejin and Faruqui, Manaal",
booktitle = "Proc. of ACL",
year = "2021"
} | null | 2 | 78 | ---
annotations_creators:
- expert-generated
language_creators:
- expert-generated
language:
- en
license:
- cc-by-nc-sa-4.0
multilinguality:
- monolingual
pretty_name: 'TimeDial: Temporal Commonsense Reasoning in Dialog'
size_categories:
- 10K<n<100K
source_datasets:
- original
task_categories:
- text-classification
task_ids:
- multi-label-classification
paperswithcode_id: timedial
tags:
- dialog-act-classification
dataset_info:
features:
- name: id
dtype: int32
- name: conversation
sequence: string
- name: correct1
dtype: string
- name: correct2
dtype: string
- name: incorrect1
dtype: string
- name: incorrect1_rule
dtype: string
- name: incorrect2
dtype: string
- name: incorrect2_rule
dtype: string
splits:
- name: test
num_bytes: 1449879
num_examples: 1446
download_size: 1613806
dataset_size: 1449879
---
# Dataset Card for TimeDial: Temporal Commonsense Reasoning in Dialog
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [TimeDial](https://github.com/google-research-datasets/timedial)
- **Paper:** [TimeDial: Temporal Commonsense Reasoning in Dialog](https://arxiv.org/abs/2106.04571)
- **Point of Contact:** [Please create an issue in the official repository](https://github.com/google-research-datasets/timedial)
### Dataset Summary
TimeDial presents a crowdsourced English challenge set for temporal commonsense reasoning, formulated as a multiple-choice cloze task with around 1.5k carefully curated dialogs. The dataset is derived from DailyDialog ([Li et al., 2017](https://www.aclweb.org/anthology/I17-1099/)), a multi-turn dialog corpus.
In order to establish strong baselines and provide information on future model development, the authors conducted extensive experiments with state-of-the-art LMs. While humans can easily answer these questions (97.8%), the best T5 model variant struggles on this challenge set (73%). Moreover, their qualitative error analyses show that the models often rely on shallow, spurious features (particularly text matching), instead of truly reasoning over the context.
Detailed experiments and analyses can be found in their [paper](https://arxiv.org/pdf/2106.04571.pdf).
### Supported Tasks and Leaderboards
To be updated soon.
### Languages
The dataset is in English only.
## Dataset Structure
### Data Instances
```
{
"id": 1,
"conversation": [
"A: We need to take the accounts system offline to carry out the upgrade . But don't worry , it won't cause too much inconvenience . We're going to do it over the weekend .",
"B: How long will the system be down for ?",
"A: We'll be taking everything offline in about two hours ' time . It'll be down for a minimum of twelve hours . If everything goes according to plan , it should be up again by 6 pm on Saturday .",
"B: That's fine . We've allowed <MASK> to be on the safe side ."
],
"correct1": "forty-eight hours",
"correct2": "50 hours ",
"incorrect1": "two hours ",
"incorrect1_rule": "Rule 1",
"incorrect2": "12 days ",
"incorrect2_rule": "Rule 2"
}
```
### Data Fields
- "id": Unique identifier, as a integer
- "conversation": Dialog context with <MASK> span, as a string
- "correct1": Original <MASK> span, as a string
- "correct2": Additional correct option provided by annotators, as a string
- "incorrect1": Incorrect option #1 provided by annotators, as a string
- "incorrect1_rule": One of phrase matching ("Rule 1"), numeral matching ("Rule 2"), or open ended ("Rule 3"), as a string
- "incorrect2": Incorrect option #2 provided by annotators, as a string
- "incorrect2_rule": One of phrase matching ("Rule 1"), numeral matching ("Rule 2"), or open ended ("Rule 3"), as a string
### Data Splits
The TimeDial dataset consists only of a test set of 1,104 dialog instances, each with 2 correct and 2 incorrect options, with the following statistics:
| | Avg. |
|-----|-----|
|Turns per Dialog | 11.7 |
|Words per Turn | 16.5 |
|Time Spans per Dialog | 3 |
## Dataset Creation
### Curation Rationale
Although previous works have studied temporal reasoning in natural language, they have either focused on specific time-related concepts in isolation, such as temporal ordering and relation extraction, or dealt with limited context, such as single-sentence-based question answering and natural language inference.
In this work, they make the first systematic study of temporal commonsense reasoning in a multi-turn dialog setting. The task involves complex reasoning that requires operations like comparison and arithmetic reasoning over temporal expressions and the need for commonsense and world knowledge.
### Source Data
#### Initial Data Collection and Normalization
The TIMEDIAL dataset is derived from DailyDialog data (Li et al., 2017), which is a multi-turn dialog corpus containing over 13K English dialogs. Dialogs in this dataset consist of turn-taking between two people on topics over 10 broad categories, ranging from daily lives to financial topics.
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
The data collection process involves two steps: (1) identifying dialogs that are rich in temporal expressions, and (2) asking human annotators to provide correct and incorrect options for cloze instances derived from these dialogs. More details about the two steps:
1) Temporal expression identification: Here, they select dialogs that are rich with temporal information, in order to focus on complex temporal reasoning that arises in natural dialogs. Temporal expressions are automatically identified with SUTime, an off-the-shelf temporal expression detector. They keep only the dialogs with more than 3 temporal expressions and at least one expression that contains numerals like “two weeks” (as opposed to non-numeric spans, like “summer”, “right now”, and “later”). In their initial experiment, they observe that language models can often correctly predict these non-numerical temporal phrases.
2) Human annotated options: Next, they mask spans in the dialogs. For each dialog, they mask out every temporal expression that contains numerals, each mask resulting in a cloze question that is then sent for human annotation.
This resulted in 1,526 instances for annotation. For each masked span in each dialog, they obtain human annotation to derive a fixed set of correct and incorrect options given the context. Concretely, given a masked dialog and a seed correct answer (i.e., the original text) for the masked span, the annotators were asked to (1) come up with an alternative correct answer that makes sense in the dialog adhering to commonsense, and (2) formulate two incorrect answers that have no possibility of making sense in the dialog context. They highlight all time expressions in the context to make it easier for annotators to select reasonable time expressions.
#### Who are the annotators?
They are English linguists.
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
Dataset provided for research purposes only. Please check dataset license for additional information.
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
TimeDial dataset is licensed under CC BY-NC-SA 4.0.
### Citation Information
```
@inproceedings{qin-etal-2021-timedial,
title = "{TimeDial: Temporal Commonsense Reasoning in Dialog}",
author = "Qin, Lianhui and Gupta, Aditya and Upadhyay, Shyam and He, Luheng and Choi, Yejin and Faruqui, Manaal",
booktitle = "Proc. of ACL",
year = "2021"
}
```
### Contributions
Thanks to [@bhavitvyamalik](https://github.com/bhavitvyamalik) for adding this dataset. |
Kira-Asimov/gender_clinical_trial | 2022-02-10T10:16:03.000Z | [
"region:us"
] | Kira-Asimov | null | null | null | 2 | 78 | # Gender classification from Clinical Trial Public Data
|
llangnickel/long-covid-classification-data | 2022-11-24T10:29:58.000Z | [
"task_categories:text-classification",
"annotations_creators:expert-generated",
"language_creators:expert-generated",
"multilinguality:monolingual",
"size_categories:unknown",
"source_datasets:original",
"language:en",
"license:cc-by-4.0",
"region:us"
] | llangnickel | null | null | null | 0 | 78 | ---
annotations_creators:
- expert-generated
language_creators:
- expert-generated
language:
- en
license:
- cc-by-4.0
multilinguality:
- monolingual
pretty_name: 'Dataset containing abstracts from PubMed, either related to long COVID
or not. '
size_categories:
- unknown
source_datasets:
- original
task_categories:
- text-classification
---
## Data Description
Long-COVID related articles have been manually collected by information specialists.
Please find further information [here](https://doi.org/10.1093/database/baac048).
## Size
| |Training|Development|Test|Total|
|--|--|--|--|--|
|Positive Examples|215|76|70|345|
|Negative Examples|199|62|68|345|
|Total|414|138|138|690|
## Citation
@article{10.1093/database/baac048,
author = {Langnickel, Lisa and Darms, Johannes and Heldt, Katharina and Ducks, Denise and Fluck, Juliane},
title = "{Continuous development of the semantic search engine preVIEW: from COVID-19 to long COVID}",
journal = {Database},
volume = {2022},
year = {2022},
month = {07},
issn = {1758-0463},
doi = {10.1093/database/baac048},
url = {https://doi.org/10.1093/database/baac048},
note = {baac048},
eprint = {https://academic.oup.com/database/article-pdf/doi/10.1093/database/baac048/44371817/baac048.pdf},
} |
BeIR/scidocs-generated-queries | 2022-10-23T06:12:52.000Z | [
"task_categories:text-retrieval",
"task_ids:entity-linking-retrieval",
"task_ids:fact-checking-retrieval",
"multilinguality:monolingual",
"language:en",
"license:cc-by-sa-4.0",
"region:us"
] | BeIR | null | null | null | 2 | 78 | ---
annotations_creators: []
language_creators: []
language:
- en
license:
- cc-by-sa-4.0
multilinguality:
- monolingual
paperswithcode_id: beir
pretty_name: BEIR Benchmark
size_categories:
msmarco:
- 1M<n<10M
trec-covid:
- 100k<n<1M
nfcorpus:
- 1K<n<10K
nq:
- 1M<n<10M
hotpotqa:
- 1M<n<10M
fiqa:
- 10K<n<100K
arguana:
- 1K<n<10K
touche-2020:
- 100K<n<1M
cqadupstack:
- 100K<n<1M
quora:
- 100K<n<1M
dbpedia:
- 1M<n<10M
scidocs:
- 10K<n<100K
fever:
- 1M<n<10M
climate-fever:
- 1M<n<10M
scifact:
- 1K<n<10K
source_datasets: []
task_categories:
- text-retrieval
- zero-shot-retrieval
- information-retrieval
- zero-shot-information-retrieval
task_ids:
- passage-retrieval
- entity-linking-retrieval
- fact-checking-retrieval
- tweet-retrieval
- citation-prediction-retrieval
- duplication-question-retrieval
- argument-retrieval
- news-retrieval
- biomedical-information-retrieval
- question-answering-retrieval
---
# Dataset Card for BEIR Benchmark
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** https://github.com/UKPLab/beir
- **Repository:** https://github.com/UKPLab/beir
- **Paper:** https://openreview.net/forum?id=wCu6T5xFjeJ
- **Leaderboard:** https://docs.google.com/spreadsheets/d/1L8aACyPaXrL8iEelJLGqlMqXKPX2oSP_R10pZoy77Ns
- **Point of Contact:** nandan.thakur@uwaterloo.ca
### Dataset Summary
BEIR is a heterogeneous benchmark that has been built from 18 diverse datasets representing 9 information retrieval tasks:
- Fact-checking: [FEVER](http://fever.ai), [Climate-FEVER](http://climatefever.ai), [SciFact](https://github.com/allenai/scifact)
- Question-Answering: [NQ](https://ai.google.com/research/NaturalQuestions), [HotpotQA](https://hotpotqa.github.io), [FiQA-2018](https://sites.google.com/view/fiqa/)
- Bio-Medical IR: [TREC-COVID](https://ir.nist.gov/covidSubmit/index.html), [BioASQ](http://bioasq.org), [NFCorpus](https://www.cl.uni-heidelberg.de/statnlpgroup/nfcorpus/)
- News Retrieval: [TREC-NEWS](https://trec.nist.gov/data/news2019.html), [Robust04](https://trec.nist.gov/data/robust/04.guidelines.html)
- Argument Retrieval: [Touche-2020](https://webis.de/events/touche-20/shared-task-1.html), [ArguAna](http://argumentation.bplaced.net/arguana/data)
- Duplicate Question Retrieval: [Quora](https://www.quora.com/q/quoradata/First-Quora-Dataset-Release-Question-Pairs), [CqaDupstack](http://nlp.cis.unimelb.edu.au/resources/cqadupstack/)
- Citation-Prediction: [SCIDOCS](https://allenai.org/data/scidocs)
- Tweet Retrieval: [Signal-1M](https://research.signal-ai.com/datasets/signal1m-tweetir.html)
- Entity Retrieval: [DBPedia](https://github.com/iai-group/DBpedia-Entity/)
All these datasets have been preprocessed and can be used for your experiments.
### Supported Tasks and Leaderboards
The benchmark supports a leaderboard that evaluates retrieval models across the BEIR tasks, primarily using nDCG@10.
The current best performing models can be found [here](https://docs.google.com/spreadsheets/d/1L8aACyPaXrL8iEelJLGqlMqXKPX2oSP_R10pZoy77Ns).
### Languages
All tasks are in English (`en`).
## Dataset Structure
All BEIR datasets must contain a corpus, queries and qrels (relevance judgments file). They must be in the following format:
- `corpus` file: a `.jsonl` file (jsonlines) that contains a list of dictionaries, each with three fields `_id` with unique document identifier, `title` with document title (optional) and `text` with document paragraph or passage. For example: `{"_id": "doc1", "title": "Albert Einstein", "text": "Albert Einstein was a German-born...."}`
- `queries` file: a `.jsonl` file (jsonlines) that contains a list of dictionaries, each with two fields `_id` with unique query identifier and `text` with query text. For example: `{"_id": "q1", "text": "Who developed the mass-energy equivalence formula?"}`
- `qrels` file: a `.tsv` file (tab-separated) that contains three columns, i.e. the `query-id`, `corpus-id` and `score`, in this order. Keep the first row as a header. For example: `q1 doc1 1`
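For illustration only (this is not part of the official BEIR tooling), files in the format described above can be read with a few lines of plain Python:
```python
import csv
import json

def load_beir_files(corpus_path, queries_path, qrels_path):
    """Read a BEIR-style corpus.jsonl, queries.jsonl, and qrels.tsv."""
    corpus, queries, qrels = {}, {}, {}

    with open(corpus_path, encoding="utf-8") as f:
        for line in f:  # one JSON document per line
            doc = json.loads(line)
            corpus[doc["_id"]] = {"title": doc.get("title", ""), "text": doc["text"]}

    with open(queries_path, encoding="utf-8") as f:
        for line in f:  # one JSON query per line
            query = json.loads(line)
            queries[query["_id"]] = query["text"]

    with open(qrels_path, encoding="utf-8") as f:
        reader = csv.reader(f, delimiter="\t")
        next(reader)  # the first row is a header
        for query_id, corpus_id, score in reader:
            qrels.setdefault(query_id, {})[corpus_id] = int(score)

    return corpus, queries, qrels
```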
### Data Instances
A high level example of any beir dataset:
```python
corpus = {
"doc1" : {
"title": "Albert Einstein",
"text": "Albert Einstein was a German-born theoretical physicist. who developed the theory of relativity, \
one of the two pillars of modern physics (alongside quantum mechanics). His work is also known for \
its influence on the philosophy of science. He is best known to the general public for his mass–energy \
equivalence formula E = mc2, which has been dubbed 'the world's most famous equation'. He received the 1921 \
Nobel Prize in Physics 'for his services to theoretical physics, and especially for his discovery of the law \
of the photoelectric effect', a pivotal step in the development of quantum theory."
},
"doc2" : {
"title": "", # Keep title an empty string if not present
"text": "Wheat beer is a top-fermented beer which is brewed with a large proportion of wheat relative to the amount of \
malted barley. The two main varieties are German Weißbier and Belgian witbier; other types include Lambic (made\
with wild yeast), Berliner Weisse (a cloudy, sour beer), and Gose (a sour, salty beer)."
},
}
queries = {
"q1" : "Who developed the mass-energy equivalence formula?",
"q2" : "Which beer is brewed with a large proportion of wheat?"
}
qrels = {
"q1" : {"doc1": 1},
"q2" : {"doc2": 1},
}
```
### Data Fields
Examples from all configurations have the following features:
### Corpus
- `corpus`: a `dict` feature representing the document title and passage text, made up of:
- `_id`: a `string` feature representing the unique document id
- `title`: a `string` feature, denoting the title of the document.
- `text`: a `string` feature, denoting the text of the document.
### Queries
- `queries`: a `dict` feature representing the query, made up of:
- `_id`: a `string` feature representing the unique query id
- `text`: a `string` feature, denoting the text of the query.
### Qrels
- `qrels`: a `dict` feature representing the query document relevance judgements, made up of:
- `query-id`: a `string` feature representing the query id
- `corpus-id`: a `string` feature, denoting the document id.
- `score`: a `int32` feature, denoting the relevance judgement between query and document.
### Data Splits
| Dataset | Website| BEIR-Name | Type | Queries | Corpus | Rel D/Q | Down-load | md5 |
| -------- | -----| ---------| --------- | ----------- | ---------| ---------| :----------: | :------:|
| MSMARCO | [Homepage](https://microsoft.github.io/msmarco/)| ``msmarco`` | ``train``<br>``dev``<br>``test``| 6,980 | 8.84M | 1.1 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/msmarco.zip) | ``444067daf65d982533ea17ebd59501e4`` |
| TREC-COVID | [Homepage](https://ir.nist.gov/covidSubmit/index.html)| ``trec-covid``| ``test``| 50| 171K| 493.5 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/trec-covid.zip) | ``ce62140cb23feb9becf6270d0d1fe6d1`` |
| NFCorpus | [Homepage](https://www.cl.uni-heidelberg.de/statnlpgroup/nfcorpus/) | ``nfcorpus`` | ``train``<br>``dev``<br>``test``| 323 | 3.6K | 38.2 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/nfcorpus.zip) | ``a89dba18a62ef92f7d323ec890a0d38d`` |
| BioASQ | [Homepage](http://bioasq.org) | ``bioasq``| ``train``<br>``test`` | 500 | 14.91M | 8.05 | No | [How to Reproduce?](https://github.com/UKPLab/beir/blob/main/examples/dataset#2-bioasq) |
| NQ | [Homepage](https://ai.google.com/research/NaturalQuestions) | ``nq``| ``train``<br>``test``| 3,452 | 2.68M | 1.2 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/nq.zip) | ``d4d3d2e48787a744b6f6e691ff534307`` |
| HotpotQA | [Homepage](https://hotpotqa.github.io) | ``hotpotqa``| ``train``<br>``dev``<br>``test``| 7,405 | 5.23M | 2.0 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/hotpotqa.zip) | ``f412724f78b0d91183a0e86805e16114`` |
| FiQA-2018 | [Homepage](https://sites.google.com/view/fiqa/) | ``fiqa`` | ``train``<br>``dev``<br>``test``| 648 | 57K | 2.6 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/fiqa.zip) | ``17918ed23cd04fb15047f73e6c3bd9d9`` |
| Signal-1M(RT) | [Homepage](https://research.signal-ai.com/datasets/signal1m-tweetir.html)| ``signal1m`` | ``test``| 97 | 2.86M | 19.6 | No | [How to Reproduce?](https://github.com/UKPLab/beir/blob/main/examples/dataset#4-signal-1m) |
| TREC-NEWS | [Homepage](https://trec.nist.gov/data/news2019.html) | ``trec-news`` | ``test``| 57 | 595K | 19.6 | No | [How to Reproduce?](https://github.com/UKPLab/beir/blob/main/examples/dataset#1-trec-news) |
| ArguAna | [Homepage](http://argumentation.bplaced.net/arguana/data) | ``arguana``| ``test`` | 1,406 | 8.67K | 1.0 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/arguana.zip) | ``8ad3e3c2a5867cdced806d6503f29b99`` |
| Touche-2020| [Homepage](https://webis.de/events/touche-20/shared-task-1.html) | ``webis-touche2020``| ``test``| 49 | 382K | 19.0 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/webis-touche2020.zip) | ``46f650ba5a527fc69e0a6521c5a23563`` |
| CQADupstack| [Homepage](http://nlp.cis.unimelb.edu.au/resources/cqadupstack/) | ``cqadupstack``| ``test``| 13,145 | 457K | 1.4 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/cqadupstack.zip) | ``4e41456d7df8ee7760a7f866133bda78`` |
| Quora| [Homepage](https://www.quora.com/q/quoradata/First-Quora-Dataset-Release-Question-Pairs) | ``quora``| ``dev``<br>``test``| 10,000 | 523K | 1.6 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/quora.zip) | ``18fb154900ba42a600f84b839c173167`` |
| DBPedia | [Homepage](https://github.com/iai-group/DBpedia-Entity/) | ``dbpedia-entity``| ``dev``<br>``test``| 400 | 4.63M | 38.2 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/dbpedia-entity.zip) | ``c2a39eb420a3164af735795df012ac2c`` |
| SCIDOCS| [Homepage](https://allenai.org/data/scidocs) | ``scidocs``| ``test``| 1,000 | 25K | 4.9 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/scidocs.zip) | ``38121350fc3a4d2f48850f6aff52e4a9`` |
| FEVER | [Homepage](http://fever.ai) | ``fever``| ``train``<br>``dev``<br>``test``| 6,666 | 5.42M | 1.2| [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/fever.zip) | ``5a818580227bfb4b35bb6fa46d9b6c03`` |
| Climate-FEVER| [Homepage](http://climatefever.ai) | ``climate-fever``|``test``| 1,535 | 5.42M | 3.0 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/climate-fever.zip) | ``8b66f0a9126c521bae2bde127b4dc99d`` |
| SciFact| [Homepage](https://github.com/allenai/scifact) | ``scifact``| ``train``<br>``test``| 300 | 5K | 1.1 | [Link](https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/scifact.zip) | ``5f7d1de60b170fc8027bb7898e2efca1`` |
| Robust04 | [Homepage](https://trec.nist.gov/data/robust/04.guidelines.html) | ``robust04``| ``test``| 249 | 528K | 69.9 | No | [How to Reproduce?](https://github.com/UKPLab/beir/blob/main/examples/dataset#3-robust04) |
## Dataset Creation
### Curation Rationale
[Needs More Information]
### Source Data
#### Initial Data Collection and Normalization
[Needs More Information]
#### Who are the source language producers?
[Needs More Information]
### Annotations
#### Annotation process
[Needs More Information]
#### Who are the annotators?
[Needs More Information]
### Personal and Sensitive Information
[Needs More Information]
## Considerations for Using the Data
### Social Impact of Dataset
[Needs More Information]
### Discussion of Biases
[Needs More Information]
### Other Known Limitations
[Needs More Information]
## Additional Information
### Dataset Curators
[Needs More Information]
### Licensing Information
[Needs More Information]
### Citation Information
Cite as:
```
@inproceedings{
thakur2021beir,
title={{BEIR}: A Heterogeneous Benchmark for Zero-shot Evaluation of Information Retrieval Models},
author={Nandan Thakur and Nils Reimers and Andreas R{\"u}ckl{\'e} and Abhishek Srivastava and Iryna Gurevych},
booktitle={Thirty-fifth Conference on Neural Information Processing Systems Datasets and Benchmarks Track (Round 2)},
year={2021},
url={https://openreview.net/forum?id=wCu6T5xFjeJ}
}
```
### Contributions
Thanks to [@Nthakur20](https://github.com/Nthakur20) for adding this dataset. |
allenai/csabstruct | 2022-11-02T17:54:38.000Z | [
"license:apache-2.0",
"arxiv:1909.04054",
"region:us"
] | allenai | As a step toward better document-level understanding, we explore classification of a sequence of sentences into their corresponding categories, a task that requires understanding sentences in context of the document. Recent successful models for this task have used hierarchical models to contextualize sentence representations, and Conditional Random Fields (CRFs) to incorporate dependencies between subsequent labels. In this work, we show that pretrained language models, BERT (Devlin et al., 2018) in particular, can be used for this task to capture contextual dependencies without the need for hierarchical encoding nor a CRF. Specifically, we construct a joint sentence representation that allows BERT Transformer layers to directly utilize contextual information from all words in all sentences. Our approach achieves state-of-the-art results on four datasets, including a new dataset of structured scientific abstracts. | @inproceedings{Cohan2019EMNLP,
title={Pretrained Language Models for Sequential Sentence Classification},
author={Arman Cohan, Iz Beltagy, Daniel King, Bhavana Dalvi, Dan Weld},
year={2019},
booktitle={EMNLP},
} | null | 2 | 78 | ---
license: apache-2.0
---
# CSAbstruct
CSAbstruct was created as part of *"Pretrained Language Models for Sequential Sentence Classification"* ([ACL Anthology][2], [arXiv][1], [GitHub][6]).
It contains 2,189 manually annotated computer science abstracts with sentences annotated according to their rhetorical roles in the abstract, similar to the [PUBMED-RCT][3] categories.
## Dataset Construction Details
CSAbstruct is a new dataset of annotated computer science abstracts with sentence labels according to their rhetorical roles.
The key difference between this dataset and [PUBMED-RCT][3] is that PubMed abstracts are written according to a predefined structure, whereas computer science papers are free-form.
Therefore, there is more variety in writing styles in CSAbstruct.
CSAbstruct is collected from the Semantic Scholar corpus [(Ammar et al., 2018)][4].
Each sentence is annotated by 5 workers on the [Figure-eight platform][5], with one of 5 categories `{BACKGROUND, OBJECTIVE, METHOD, RESULT, OTHER}`.
We use 8 abstracts (with 51 sentences) as test questions to train crowdworkers.
Annotators whose accuracy is less than 75% are disqualified from doing the actual annotation job.
The annotations are aggregated using the agreement on a single sentence weighted by the accuracy of the annotator on the initial test questions.
A confidence score is associated with each instance based on the annotators' initial accuracy and the agreement of all annotators on that instance.
We then split the dataset 75%/15%/10% into train/dev/test partitions, such that the test set has the highest confidence scores.
Agreement rate on a random subset of 200 sentences is 75%, which is quite high given the difficulty of the task.
Compared with [PUBMED-RCT][3], our dataset exhibits a wider variety of writing styles, since its abstracts are not written with an explicit structural template.
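As a rough illustration of that aggregation idea (a hypothetical sketch, not the authors' actual pipeline), an accuracy-weighted vote over a sentence's annotations could be computed as follows:
```python
from collections import defaultdict

def aggregate_labels(annotations, annotator_accuracy):
    """Aggregate one sentence's labels by votes weighted by annotator accuracy.

    annotations: list of (annotator_id, label) pairs for a single sentence.
    annotator_accuracy: dict mapping annotator_id -> accuracy on the test questions.
    Returns the winning label and a simple confidence score.
    """
    weights = defaultdict(float)
    for annotator_id, label in annotations:
        weights[label] += annotator_accuracy[annotator_id]

    label = max(weights, key=weights.get)
    confidence = weights[label] / sum(weights.values())
    return label, confidence

# Example: three annotators with different reliabilities voting on one sentence.
votes = [("a1", "METHOD"), ("a2", "METHOD"), ("a3", "RESULT")]
accuracy = {"a1": 0.90, "a2": 0.80, "a3": 0.75}
print(aggregate_labels(votes, accuracy))  # -> ('METHOD', ~0.69)
```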
## Dataset Statistics
| Statistic | Avg ± std |
|--------------------------|-------------|
| Doc length in sentences | 6.7 ± 1.99 |
| Sentence length in words | 21.8 ± 10.0 |
| Label | % in Dataset |
|---------------|--------------|
| `BACKGROUND` | 33% |
| `METHOD` | 32% |
| `RESULT` | 21% |
| `OBJECTIVE` | 12% |
| `OTHER` | 03% |
## Citation
If you use this dataset, please cite the following paper:
```
@inproceedings{Cohan2019EMNLP,
title={Pretrained Language Models for Sequential Sentence Classification},
author={Arman Cohan, Iz Beltagy, Daniel King, Bhavana Dalvi, Dan Weld},
year={2019},
booktitle={EMNLP},
}
```
[1]: https://arxiv.org/abs/1909.04054
[2]: https://aclanthology.org/D19-1383
[3]: https://github.com/Franck-Dernoncourt/pubmed-rct
[4]: https://aclanthology.org/N18-3011/
[5]: https://www.figure-eight.com/
[6]: https://github.com/allenai/sequential_sentence_classification
|
tomas-gajarsky/cifar10-lt | 2023-06-24T20:22:30.000Z | [
"task_categories:image-classification",
"annotations_creators:crowdsourced",
"language_creators:found",
"multilinguality:monolingual",
"size_categories:10K<n<100K",
"source_datasets:cifar10",
"language:en",
"license:apache-2.0",
"region:us"
] | tomas-gajarsky | The CIFAR-10-LT imbalanced dataset is comprised of under 60,000 color images, each measuring 32x32 pixels,
distributed across 10 distinct classes.
The dataset includes 10,000 test images, with 1000 images per class,
and fewer than 50,000 training images.
The number of samples within each class of the train set decreases exponentially with factors of 10, 50 or 100. | @TECHREPORT{Krizhevsky09learningmultiple,
author = {Alex Krizhevsky},
title = {Learning multiple layers of features from tiny images},
institution = {},
year = {2009}
} | null | 1 | 78 | ---
annotations_creators:
- crowdsourced
language_creators:
- found
language:
- en
license: apache-2.0
multilinguality:
- monolingual
size_categories:
- 10K<n<100K
source_datasets:
- cifar10
task_categories:
- image-classification
task_ids: []
paperswithcode_id: cifar-10
pretty_name: Cifar10-LT
dataset_info:
features:
- name: img
dtype: image
- name: label
dtype:
class_label:
names:
'0': airplane
'1': automobile
'2': bird
'3': cat
'4': deer
'5': dog
'6': frog
'7': horse
'8': ship
'9': truck
config_name: cifar10
splits:
- name: train
- name: test
num_bytes: 22772838
num_examples: 10000
download_size: 170498071
---
# Dataset Card for CIFAR-10-LT (Long Tail)
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Additional Information](#additional-information)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [CIFAR Datasets](https://www.cs.toronto.edu/~kriz/cifar.html)
- **Paper:** [Paper imbalanced example](https://openaccess.thecvf.com/content_CVPR_2019/papers/Cui_Class-Balanced_Loss_Based_on_Effective_Number_of_Samples_CVPR_2019_paper.pdf)
- **Leaderboard:** [r-10](https://paperswithcode.com/sota/long-tail-learning-on-cifar-10-lt-r-10) [r-100](https://paperswithcode.com/sota/long-tail-learning-on-cifar-10-lt-r-100)
### Dataset Summary
The CIFAR-10-LT imbalanced dataset is comprised of under 60,000 color images, each measuring 32x32 pixels,
distributed across 10 distinct classes.
The number of samples within each class decreases exponentially with factors of 10 and 100.
The dataset includes 10,000 test images, with 1000 images per class,
and fewer than 50,000 training images.
Each image is assigned one label.
### Supported Tasks and Leaderboards
- `image-classification`: The goal of this task is to classify a given image into one of 10 classes. The leaderboard is available [here](https://paperswithcode.com/sota/long-tail-learning-on-cifar-10-lt-r-100).
### Languages
English
## Dataset Structure
### Data Instances
A sample from the training set is provided below:
```
{
'img': <PIL.PngImagePlugin.PngImageFile image mode=RGB size=32x32 at 0x2767F58E080>, 'label': 0
}
```
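A sample like the one above can be loaded with the `datasets` library; this is a minimal sketch that assumes the `cifar10` configuration declared in this card's metadata:
```python
from datasets import load_dataset

# The train split follows the long-tailed class distribution,
# while the test split keeps 1,000 images per class.
dataset = load_dataset("tomas-gajarsky/cifar10-lt", "cifar10")

sample = dataset["train"][0]
image = sample["img"]    # PIL.Image.Image, 32x32 RGB
label = sample["label"]  # integer class id, 0-9
print(image.size, label)
```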
### Data Fields
- img: A `PIL.Image.Image` object containing the 32x32 image. Note that when accessing the image column: `dataset[0]["img"]` the image file is automatically decoded. Decoding of a large number of image files might take a significant amount of time. Thus it is important to first query the sample index before the `"img"` column, *i.e.* `dataset[0]["img"]` should **always** be preferred over `dataset["img"][0]`
- label: 0-9 with the following correspondence:
  - 0: airplane
  - 1: automobile
  - 2: bird
  - 3: cat
  - 4: deer
  - 5: dog
  - 6: frog
  - 7: horse
  - 8: ship
  - 9: truck
### Data Splits
| name |train|test|
|----------|----:|---------:|
|cifar10|<50000| 10000|
### Licensing Information
Apache License 2.0
### Citation Information
```
@TECHREPORT{Krizhevsky09learningmultiple,
author = {Alex Krizhevsky},
title = {Learning multiple layers of features from tiny images},
institution = {},
year = {2009}
}
```
### Contributions
Thanks to [@gchhablani](https://github.com/gchhablani) and all contributors for adding the original balanced cifar10 dataset. |
Patt/copa_th | 2023-06-05T12:36:44.000Z | [
"language:th",
"language:en",
"arxiv:1907.04307",
"region:us"
] | Patt | null | null | null | 0 | 78 | ---
language:
- th
- en
---
# Dataset Card for copa_th
### Dataset Description
This dataset is a Thai-translated version of [copa](https://huggingface.co/datasets/super_glue/viewer/copa), produced with Google Translate, using the [Multilingual Universal Sentence Encoder](https://arxiv.org/abs/1907.04307) to score the Thai translations.
### Languages
- EN
- TH |
bloyal/deeploc | 2023-08-15T13:46:01.000Z | [
"license:cc-by-4.0",
"region:us"
] | bloyal | null | null | null | 0 | 78 | ---
license: cc-by-4.0
---
# DeepLoc-2.0 Training Data
Dataset from https://services.healthtech.dtu.dk/services/DeepLoc-2.0/ used to train the DeepLoc-2.0 model.
## Data preparation
Data downloaded and processed using the following Python script:
```python
import pandas as pd

# Download the Swiss-Prot train/validation table and drop bookkeeping columns.
df = pd.read_csv('https://services.healthtech.dtu.dk/services/DeepLoc-2.0/data/Swissprot_Train_Validation_dataset.csv').drop(['Unnamed: 0', 'Partition'], axis=1)

# Collect the ten per-location columns into a single multi-hot 'labels' vector.
df['labels'] = df[['Cell membrane', 'Cytoplasm','Endoplasmic reticulum', 'Extracellular', 'Golgi apparatus', 'Lysosome/Vacuole', 'Mitochondrion', 'Nucleus', 'Peroxisome', 'Plastid']].astype('float32').values.tolist()
df['Membrane'] = df['Membrane'].astype('float32')
df = df[['Kingdom', 'ACC', 'Sequence','Membrane','labels']]

# Random 80/10/10 split into train/validation/test.
train = df.sample(frac=0.8)
df = df.drop(train.index)
val = df.sample(frac=0.5)
test = df.drop(val.index)

train = train.reset_index(drop=True)
val = val.reset_index(drop=True)
test = test.reset_index(drop=True)

train.to_parquet('deeploc-train.parquet', index=False)
val.to_parquet('deeploc-val.parquet', index=False)
test.to_parquet('deeploc-test.parquet', index=False)
```
## Labels
```python
{'Cell membrane': 0,
 'Cytoplasm': 1,
 'Endoplasmic reticulum': 2,
 'Extracellular': 3,
 'Golgi apparatus': 4,
 'Lysosome/Vacuole': 5,
 'Mitochondrion': 6,
 'Nucleus': 7,
 'Peroxisome': 8,
 'Plastid': 9}
```
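Below is a small illustrative sketch (not part of the original resource) showing how the resulting parquet splits can be read back and how the multi-hot `labels` vector maps onto these location names; file and column names follow the preparation script above:
```python
import pandas as pd

LOCATIONS = [
    "Cell membrane", "Cytoplasm", "Endoplasmic reticulum", "Extracellular",
    "Golgi apparatus", "Lysosome/Vacuole", "Mitochondrion", "Nucleus",
    "Peroxisome", "Plastid",
]

train = pd.read_parquet("deeploc-train.parquet")

row = train.iloc[0]
# 'labels' is a 10-dimensional multi-hot vector ordered as in LOCATIONS.
active = [name for name, flag in zip(LOCATIONS, row["labels"]) if flag == 1.0]
print(row["ACC"], row["Kingdom"], active)
```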
## Citation
**DeepLoc-2.0:**
```
Vineet Thumuluri and others, DeepLoc 2.0: multi-label subcellular localization prediction using protein language models, Nucleic Acids Research, Volume 50, Issue W1, 5 July 2022, Pages W228–W234, https://doi.org/10.1093/nar/gkac278
```
The DeepLoc data is a derivative of the UniProt dataset:
**UniProt**
```
The UniProt Consortium
UniProt: the Universal Protein Knowledgebase in 2023
Nucleic Acids Res. 51:D523–D531 (2023)
```
|
yzhuang/autotree_pmlb_100000_phoneme_sgosdt_l256_dim10_d3_sd0 | 2023-09-07T22:51:22.000Z | [
"region:us"
] | yzhuang | null | null | null | 0 | 78 | ---
dataset_info:
features:
- name: id
dtype: int64
- name: input_x
sequence:
sequence: float32
- name: input_y
sequence:
sequence: float32
- name: input_y_clean
sequence:
sequence: float32
- name: rtg
sequence: float64
- name: status
sequence:
sequence: float32
- name: split_threshold
sequence:
sequence: float32
- name: split_dimension
sequence: int64
splits:
- name: train
num_bytes: 1852400000
num_examples: 100000
- name: validation
num_bytes: 185240000
num_examples: 10000
download_size: 378644599
dataset_size: 2037640000
---
# Dataset Card for "autotree_pmlb_100000_phoneme_sgosdt_l256_dim10_d3_sd0"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
yzhuang/autotree_automl_100000_Higgs_sgosdt_l256_dim10_d3_sd0 | 2023-09-08T16:23:06.000Z | [
"region:us"
] | yzhuang | null | null | null | 0 | 78 | ---
dataset_info:
features:
- name: id
dtype: int64
- name: input_x
sequence:
sequence: float32
- name: input_y
sequence:
sequence: float32
- name: input_y_clean
sequence:
sequence: float32
- name: rtg
sequence: float64
- name: status
sequence:
sequence: float32
- name: split_threshold
sequence:
sequence: float32
- name: split_dimension
sequence: int64
splits:
- name: train
num_bytes: 2364400000
num_examples: 100000
- name: validation
num_bytes: 236440000
num_examples: 10000
download_size: 1584979274
dataset_size: 2600840000
---
# Dataset Card for "autotree_automl_100000_Higgs_sgosdt_l256_dim10_d3_sd0"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
yzhuang/autotree_automl_100000_MiniBooNE_sgosdt_l256_dim10_d3_sd0 | 2023-09-08T18:21:37.000Z | [
"region:us"
] | yzhuang | null | null | null | 0 | 78 | ---
dataset_info:
features:
- name: id
dtype: int64
- name: input_x
sequence:
sequence: float32
- name: input_y
sequence:
sequence: float32
- name: input_y_clean
sequence:
sequence: float32
- name: rtg
sequence: float64
- name: status
sequence:
sequence: float32
- name: split_threshold
sequence:
sequence: float32
- name: split_dimension
sequence: int64
splits:
- name: train
num_bytes: 2364400000
num_examples: 100000
- name: validation
num_bytes: 236440000
num_examples: 10000
download_size: 1613809341
dataset_size: 2600840000
---
# Dataset Card for "autotree_automl_100000_MiniBooNE_sgosdt_l256_dim10_d3_sd0"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
yzhuang/autotree_automl_100000_jannis_sgosdt_l256_dim10_d3_sd0 | 2023-09-08T18:46:08.000Z | [
"region:us"
] | yzhuang | null | null | null | 0 | 78 | ---
dataset_info:
features:
- name: id
dtype: int64
- name: input_x
sequence:
sequence: float32
- name: input_y
sequence:
sequence: float32
- name: input_y_clean
sequence:
sequence: float32
- name: rtg
sequence: float64
- name: status
sequence:
sequence: float32
- name: split_threshold
sequence:
sequence: float32
- name: split_dimension
sequence: int64
splits:
- name: train
num_bytes: 2364400000
num_examples: 100000
- name: validation
num_bytes: 236440000
num_examples: 10000
download_size: 1611785428
dataset_size: 2600840000
---
# Dataset Card for "autotree_automl_100000_jannis_sgosdt_l256_dim10_d3_sd0"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
yzhuang/autotree_automl_100000_heloc_sgosdt_l256_dim10_d3_sd0 | 2023-09-09T04:05:06.000Z | [
"region:us"
] | yzhuang | null | null | null | 0 | 78 | ---
dataset_info:
features:
- name: id
dtype: int64
- name: input_x
sequence:
sequence: float32
- name: input_y
sequence:
sequence: float32
- name: input_y_clean
sequence:
sequence: float32
- name: rtg
sequence: float64
- name: status
sequence:
sequence: float32
- name: split_threshold
sequence:
sequence: float32
- name: split_dimension
sequence: int64
splits:
- name: train
num_bytes: 2364400000
num_examples: 100000
- name: validation
num_bytes: 236440000
num_examples: 10000
download_size: 447651417
dataset_size: 2600840000
---
# Dataset Card for "autotree_automl_100000_heloc_sgosdt_l256_dim10_d3_sd0"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
LDJnr/Pure-Dove | 2023-09-26T04:29:58.000Z | [
"task_categories:conversational",
"task_categories:question-answering",
"task_categories:text-generation",
"size_categories:1K<n<10K",
"language:en",
"license:apache-2.0",
"Physics",
"Biology",
"Math",
"Chemistry",
"Culture",
"Logic",
"Roleplay",
"region:us"
] | LDJnr | null | null | null | 7 | 78 | ---
license: apache-2.0
task_categories:
- conversational
- question-answering
- text-generation
language:
- en
tags:
- Physics
- Biology
- Math
- Chemistry
- Culture
- Logic
- Roleplay
pretty_name: Pure-Dove
size_categories:
- 1K<n<10K
---
## This is the Official Pure-Dove dataset. Over 3K multi-turn examples, and many more coming soon!
This dataset aims to be the largest, highest-quality collection of real human back-and-forth conversations with GPT-4.
Steps have been taken to ensure that only the best GPT-4 conversations from comparisons are kept: there are many instances where two GPT-4 responses are rated as equal to each other or as both bad. We exclude all such responses from Pure Dove and only include ChatBot Arena responses that are voted as better even against another instance of GPT-4.
- Comprised of over 3000 highly filtered multi-turn conversations between GPT-4 and real humans.
- Average context length per conversation is over 800 tokens.
## Purpose?
- This dataset is not particularly intended to be trained on by itself; however, its size and quality make it work wonderfully as a supplementary addition to virtually any multi-turn compatible dataset. I encourage this use; all I ask is that proper credit is given for it!
## Quality filtering and cleaning.
- The conversations were sourced from openly available datasets such as ShareGPT and ChatBot Arena by LMSYS; however, a large portion of these chats were riddled with hallucinations and abnormal distributions of different languages.
- Extensive cleaning was done to filter out instances of overt AI moralizing or related behaviour, such as "As an AI language model" and "September 2021", not just in English, but in other languages too!
## Credits
During the curation process, there can be some relatively arduous steps when it comes to actually executing on the best experiments or concepts for how to filter examples out.
Luckily, folks over at NousResearch helped expedite this process with little to no sacrifice in quality; big credit to J-Supha within NousResearch specifically for making these significant contributions.
## Future Plans & How you can help!
This is a relatively early build among the grander plans for what I intend to work on!
In the near future we plan on leveraging the help of domain-specific expert volunteers to eliminate any mathematically or verifiably incorrect answers from training curations of different types of datasets.
If you have at least a bachelor's degree in mathematics, physics, biology, or chemistry and would like to volunteer even just 30 minutes of your expertise time, please contact LDJ on Discord! |
anzorq/ru-kbd | 2023-09-28T19:54:16.000Z | [
"region:us"
] | anzorq | null | null | null | 0 | 78 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
dataset_info:
features:
- name: translation
struct:
- name: ru
dtype: string
- name: kbd
dtype: string
splits:
- name: train
num_bytes: 5143079.101673745
num_examples: 63407
- name: test
num_bytes: 51992.89832625531
num_examples: 641
download_size: 3174015
dataset_size: 5195072.0
---
# Dataset Card for "ru-kbd"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
TaylorAI/rlcd | 2023-09-29T18:53:15.000Z | [
"region:us"
] | TaylorAI | null | null | null | 0 | 78 | ---
dataset_info:
features:
- name: prompt
dtype: string
- name: chosen
dtype: string
- name: rejected
dtype: string
splits:
- name: train
num_bytes: 145740702
num_examples: 167999
download_size: 86967331
dataset_size: 145740702
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "rlcd"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
jondurbin/airoboros-3.0 | 2023-10-05T15:16:30.000Z | [
"license:apache-2.0",
"region:us"
] | jondurbin | null | null | null | 8 | 78 | ---
license: apache-2.0
---
## Overview
This dataset is a continuation of the airoboros datasets, with two main new contributions:
* MathJSON - math questions, prefixed with __"Create a MathJSON solution to the following:"__, which then outputs a JSON between __`<mathjson>`__ and __`</mathjson>`__ tags, which can be parsed and passed to a deterministic library to perform calculations.
* Anon-contributed RP dataset to enhance multi-turn coherency.
Some of the MathJSON data was adapted from https://huggingface.co/datasets/meta-math/MetaMathQA
## Format
The data is now in ShareGPT format, to better accommodate the open-source ecosystem's fine-tuning tooling.
## MathJSON
As we all know, math is really difficult for LLMs, especially floating point operations, trig functions, factorials, and several-digit numbers.
Instead of sending millions and millions of examples to the LLM for fine-tuning, I've created a new category `mathjson` which simply presents the solution to the problem as a JSON string that can be evaluated by traditional computation.
The dataset has roughly 4k samples, as a proof of concept. This can be expanded fairly easily, now that the fine-tuned models have a basic understanding of how to represent the problems.
For example:
```
Create a MathJSON solution to the following: Calculate the area of a circle with radius of 17.2456cm. Include your reasoning. [/INST]
```
```
The area (A) of a circle is given by the formula A = πr², where r is the radius of the circle. Since the radius is given in centimeters, we need to convert it to meters first. We know that 1 meter equals 100 centimeters. Therefore, we can represent the radius as: r = 17.2456 cm * 100/1.
Once we have the radius in meters, we can calculate the area of the circle using the formula A = πr².
Solution as MathJSON:
<mathjson>
[
"Multiply",
"Pi",
[
"Power",
17.2456,
2
]
]
</mathjson>
```
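The JSON between the `<mathjson>` tags can then be parsed and evaluated deterministically. For illustration only, a toy evaluator covering just the operators used in this example (`Multiply`, `Power`, `Pi`) could look like the sketch below; it is not the project's own parser or the Compute Engine:
```python
import math

def evaluate_mathjson(expr):
    """Evaluate a tiny subset of MathJSON: numbers, Pi, Multiply, Power."""
    if isinstance(expr, (int, float)):
        return float(expr)
    if expr == "Pi":
        return math.pi
    op, *args = expr
    values = [evaluate_mathjson(arg) for arg in args]
    if op == "Multiply":
        return math.prod(values)
    if op == "Power":
        return values[0] ** values[1]
    raise ValueError(f"Unsupported MathJSON operator: {op}")

# The solution extracted from between the <mathjson> tags above:
solution = ["Multiply", "Pi", ["Power", 17.2456, 2]]
print(evaluate_mathjson(solution))  # ~934.3
```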
Simply extract the JSON string between the mathjson tags, and evaluate it with https://cortexjs.io/compute-engine/ or my ultra-hacky limited-functionality python version of it https://github.com/jondurbin/airoboros/blob/mathjson/airoboros/mathjson.py |
roy650/puext550refExp_aug | 2023-10-04T13:46:19.000Z | [
"region:us"
] | roy650 | null | null | null | 0 | 78 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
dataset_info:
features:
- name: image
dtype: image
- name: image_id
dtype: string
- name: prompt
dtype: string
- name: target_bounding_box
struct:
- name: xmax
dtype: string
- name: xmin
dtype: string
- name: ymax
dtype: string
- name: ymin
dtype: string
- name: file_name
dtype: string
splits:
- name: train
num_bytes: 4225046902.25
num_examples: 25229
- name: test
num_bytes: 699064021.664
num_examples: 2804
download_size: 2603656355
dataset_size: 4924110923.914
---
# Dataset Card for "puext550refExp_aug"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
recipe_nlg | 2023-01-25T14:43:04.000Z | [
"task_categories:text2text-generation",
"task_categories:text-generation",
"task_categories:fill-mask",
"task_categories:text-retrieval",
"task_categories:summarization",
"task_ids:document-retrieval",
"task_ids:entity-linking-retrieval",
"task_ids:explanation-generation",
"task_ids:language-modeling",
"task_ids:masked-language-modeling",
"annotations_creators:found",
"language_creators:found",
"multilinguality:monolingual",
"size_categories:1M<n<10M",
"source_datasets:original",
"language:en",
"license:unknown",
"region:us"
] | null | The dataset contains 2231142 cooking recipes (>2 millions). It's processed in more careful way and provides more samples than any other dataset in the area. | @inproceedings{bien-etal-2020-recipenlg,
title = "{R}ecipe{NLG}: A Cooking Recipes Dataset for Semi-Structured Text Generation",
author = "Bie{'n}, Micha{l} and
Gilski, Micha{l} and
Maciejewska, Martyna and
Taisner, Wojciech and
Wisniewski, Dawid and
Lawrynowicz, Agnieszka",
booktitle = "Proceedings of the 13th International Conference on Natural Language Generation",
month = dec,
year = "2020",
address = "Dublin, Ireland",
publisher = "Association for Computational Linguistics",
url = "https://www.aclweb.org/anthology/2020.inlg-1.4",
pages = "22--28"
} | null | 21 | 77 | ---
annotations_creators:
- found
language_creators:
- found
language:
- en
license:
- unknown
multilinguality:
- monolingual
size_categories:
- 1M<n<10M
source_datasets:
- original
task_categories:
- text2text-generation
- text-generation
- fill-mask
- text-retrieval
- summarization
task_ids:
- document-retrieval
- entity-linking-retrieval
- explanation-generation
- language-modeling
- masked-language-modeling
paperswithcode_id: recipenlg
pretty_name: RecipeNLG
dataset_info:
features:
- name: id
dtype: int32
- name: title
dtype: string
- name: ingredients
sequence: string
- name: directions
sequence: string
- name: link
dtype: string
- name: source
dtype:
class_label:
names:
'0': Gathered
'1': Recipes1M
- name: ner
sequence: string
splits:
- name: train
num_bytes: 2194783815
num_examples: 2231142
download_size: 0
dataset_size: 2194783815
---
# Dataset Card for RecipeNLG
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** https://recipenlg.cs.put.poznan.pl/
- **Repository:** https://github.com/Glorf/recipenlg
- **Paper:** https://www.aclweb.org/anthology/volumes/2020.inlg-1/
- **Leaderboard:** [More Information Needed]
- **Point of Contact:** [More Information Needed]
### Dataset Summary
RecipeNLG: A Cooking Recipes Dataset for Semi-Structured Text Generation.
While the RecipeNLG dataset is based on the Recipe1M+ dataset, it greatly expands the number of recipes available.
The new dataset provides over 1 million new, preprocessed and deduplicated recipes on top of the Recipe1M+ dataset.
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
The dataset is in English.
## Dataset Structure
### Data Instances
```
{'id': 0,
'title': 'No-Bake Nut Cookies',
'ingredients': ['1 c. firmly packed brown sugar',
'1/2 c. evaporated milk',
'1/2 tsp. vanilla',
'1/2 c. broken nuts (pecans)',
'2 Tbsp. butter or margarine',
'3 1/2 c. bite size shredded rice biscuits'],
'directions': ['In a heavy 2-quart saucepan, mix brown sugar, nuts, evaporated milk and butter or margarine.',
'Stir over medium heat until mixture bubbles all over top.',
'Boil and stir 5 minutes more. Take off heat.',
'Stir in vanilla and cereal; mix well.',
'Using 2 teaspoons, drop and shape into 30 clusters on wax paper.',
'Let stand until firm, about 30 minutes.'],
'link': 'www.cookbooks.com/Recipe-Details.aspx?id=44874',
'source': 0,
'ner': ['brown sugar',
'milk',
'vanilla',
'nuts',
'butter',
'bite size shredded rice biscuits']}
```
### Data Fields
- `id` (`int`): ID.
- `title` (`str`): Title of the recipe.
- `ingredients` (`list` of `str`): Ingredients.
- `directions` (`list` of `str`): Instruction steps.
- `link` (`str`): URL link.
- `source` (`ClassLabel`): Origin of each recipe record, with possible value {"Gathered", "Recipes1M"}:
- "Gathered" (0): Additional recipes gathered from multiple cooking web pages, using automated scripts in a web scraping process.
- "Recipes1M" (1): Recipes from "Recipe1M+" dataset.
- `ner` (`list` of `str`): NER food entities.
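A loading sketch with the `datasets` library is shown below; it assumes the dataset archive has been obtained manually (as required by the licensing terms further down) and unpacked into a local directory passed via `data_dir`:
```python
from datasets import load_dataset

# recipe_nlg is not downloaded automatically; point data_dir at the folder
# containing the manually downloaded dataset files.
dataset = load_dataset("recipe_nlg", data_dir="path/to/recipe_nlg")

recipe = dataset["train"][0]
print(recipe["title"])
print(recipe["ingredients"])
print(recipe["ner"])
```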
### Data Splits
The dataset contains a single `train` split.
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
[More Information Needed]
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
[More Information Needed]
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
I (the "Researcher") have requested permission to use the RecipeNLG dataset (the "Dataset") at Poznań University of Technology (PUT). In exchange for such permission, Researcher hereby agrees to the following terms and conditions:
1. Researcher shall use the Dataset only for non-commercial research and educational purposes.
2. PUT makes no representations or warranties regarding the Dataset, including but not limited to warranties of non-infringement or fitness for a particular purpose.
3. Researcher accepts full responsibility for his or her use of the Dataset and shall defend and indemnify PUT, including its employees, Trustees, officers and agents, against any and all claims arising from Researcher's use of the Dataset including but not limited to Researcher's use of any copies of copyrighted images or text that he or she may create from the Dataset.
4. Researcher may provide research associates and colleagues with access to the Dataset provided that they first agree to be bound by these terms and conditions.
5. If Researcher is employed by a for-profit, commercial entity, Researcher's employer shall also be bound by these terms and conditions, and Researcher hereby represents that he or she is fully authorized to enter into this agreement on behalf of such employer.
### Citation Information
```bibtex
@inproceedings{bien-etal-2020-recipenlg,
title = "{R}ecipe{NLG}: A Cooking Recipes Dataset for Semi-Structured Text Generation",
author = "Bie{\'n}, Micha{\l} and
Gilski, Micha{\l} and
Maciejewska, Martyna and
Taisner, Wojciech and
Wisniewski, Dawid and
Lawrynowicz, Agnieszka",
booktitle = "Proceedings of the 13th International Conference on Natural Language Generation",
month = dec,
year = "2020",
address = "Dublin, Ireland",
publisher = "Association for Computational Linguistics",
url = "https://www.aclweb.org/anthology/2020.inlg-1.4",
pages = "22--28",
}
```
### Contributions
Thanks to [@abhishekkrthakur](https://github.com/abhishekkrthakur) for adding this dataset. |
GEM/wiki_cat_sum | 2022-10-24T15:31:11.000Z | [
"task_categories:summarization",
"annotations_creators:automatically-created",
"language_creators:unknown",
"multilinguality:unknown",
"size_categories:unknown",
"source_datasets:original",
"language:en",
"license:cc-by-sa-3.0",
"arxiv:1906.04687",
"arxiv:1801.10198",
"arxiv:2009.07032",
"region:us"
] | GEM | Summarise the most important facts of a given entity in the Film, Company, and Animal domains from a cluster of related documents. | @inproceedings{perez2019generating,
title={Generating Summaries with Topic Templates and Structured Convolutional Decoders},
author={Perez-Beltrachini, Laura and Liu, Yang and Lapata, Mirella},
booktitle={Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics},
pages={5107--5116},
year={2019}
} | null | 2 | 77 | ---
annotations_creators:
- automatically-created
language_creators:
- unknown
language:
- en
license:
- cc-by-sa-3.0
multilinguality:
- unknown
size_categories:
- unknown
source_datasets:
- original
task_categories:
- summarization
task_ids: []
pretty_name: wiki_cat_sum
---
# Dataset Card for GEM/wiki_cat_sum
## Dataset Description
- **Homepage:** https://github.com/lauhaide/WikiCatSum
- **Repository:** https://datashare.ed.ac.uk/handle/10283/3368
- **Paper:** https://arxiv.org/abs/1906.04687
- **Leaderboard:** N/A
- **Point of Contact:** Laura Perez-Beltrachini
### Link to Main Data Card
You can find the main data card on the [GEM Website](https://gem-benchmark.com/data_cards/wiki_cat_sum).
### Dataset Summary
WikiCatSum is an English summarization dataset in three domains: animals, companies, and film. It provides multiple paragraphs of text paired with a summary of the paragraphs.
You can load the dataset via:
```
import datasets
data = datasets.load_dataset('GEM/wiki_cat_sum')
```
The data loader can be found [here](https://huggingface.co/datasets/GEM/wiki_cat_sum).
#### website
[Github](https://github.com/lauhaide/WikiCatSum)
#### paper
[Arxiv](https://arxiv.org/abs/1906.04687)
#### authors
Laura Perez-Beltrachini, Yang Liu, Mirella Lapata (University of Edinburgh); Peter J. Liu, Mohammad Saleh, Etienne Pot, Ben Goodrich, Ryan Sepassi, Lukasz Kaiser, Noam Shazeer (Google Brain)
## Dataset Overview
### Where to find the Data and its Documentation
#### Webpage
<!-- info: What is the webpage for the dataset (if it exists)? -->
<!-- scope: telescope -->
[Github](https://github.com/lauhaide/WikiCatSum)
#### Download
<!-- info: What is the link to where the original dataset is hosted? -->
<!-- scope: telescope -->
[Website](https://datashare.ed.ac.uk/handle/10283/3368)
#### Paper
<!-- info: What is the link to the paper describing the dataset (open access preferred)? -->
<!-- scope: telescope -->
[Arxiv](https://arxiv.org/abs/1906.04687)
#### BibTex
<!-- info: Provide the BibTex-formatted reference for the dataset. Please use the correct published version (ACL anthology, etc.) instead of google scholar created Bibtex. -->
<!-- scope: microscope -->
```
@inproceedings{perez-beltrachini-etal-2019-generating,
title = "Generating Summaries with Topic Templates and Structured Convolutional Decoders",
author = "Perez-Beltrachini, Laura and
Liu, Yang and
Lapata, Mirella",
booktitle = "Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics",
month = jul,
year = "2019",
address = "Florence, Italy",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/P19-1504",
doi = "10.18653/v1/P19-1504",
}
```
#### Contact Name
<!-- quick -->
<!-- info: If known, provide the name of at least one person the reader can contact for questions about the dataset. -->
<!-- scope: periscope -->
Laura Perez-Beltrachini
#### Contact Email
<!-- info: If known, provide the email of at least one person the reader can contact for questions about the dataset. -->
<!-- scope: periscope -->
lperez@ed.ac.uk
#### Has a Leaderboard?
<!-- info: Does the dataset have an active leaderboard? -->
<!-- scope: telescope -->
no
### Languages and Intended Use
#### Multilingual?
<!-- quick -->
<!-- info: Is the dataset multilingual? -->
<!-- scope: telescope -->
no
#### Covered Languages
<!-- quick -->
<!-- info: What languages/dialects are covered in the dataset? -->
<!-- scope: telescope -->
`English`
#### License
<!-- quick -->
<!-- info: What is the license of the dataset? -->
<!-- scope: telescope -->
cc-by-sa-3.0: Creative Commons Attribution Share Alike 3.0 Unported
#### Intended Use
<!-- info: What is the intended use of the dataset? -->
<!-- scope: microscope -->
Research on multi-document abstractive summarisation.
#### Primary Task
<!-- info: What primary task does the dataset support? -->
<!-- scope: telescope -->
Summarization
#### Communicative Goal
<!-- quick -->
<!-- info: Provide a short description of the communicative goal of a model trained for this task on this dataset. -->
<!-- scope: periscope -->
Summarise the most important facts of a given entity in the Film, Company, and Animal domains from a cluster of related documents.
### Credit
#### Curation Organization Type(s)
<!-- info: In what kind of organization did the dataset curation happen? -->
<!-- scope: telescope -->
`industry`, `academic`
#### Curation Organization(s)
<!-- info: Name the organization(s). -->
<!-- scope: periscope -->
Google Cloud Platform, University of Edinburgh
#### Dataset Creators
<!-- info: Who created the original dataset? List the people involved in collecting the dataset and their affiliation(s). -->
<!-- scope: microscope -->
Laura Perez-Beltrachini, Yang Liu, Mirella Lapata (University of Edinburgh); Peter J. Liu, Mohammad Saleh, Etienne Pot, Ben Goodrich, Ryan Sepassi, Lukasz Kaiser, Noam Shazeer (Google Brain)
#### Funding
<!-- info: Who funded the data creation? -->
<!-- scope: microscope -->
Google Cloud Platform, European Research Council
#### Who added the Dataset to GEM?
<!-- info: Who contributed to the data card and adding the dataset to GEM? List the people+affiliations involved in creating this data card and who helped integrate this dataset into GEM. -->
<!-- scope: microscope -->
Ronald Cardenas (University of Edinburgh) Laura Perez-Beltrachini (University of Edinburgh)
### Dataset Structure
#### Data Fields
<!-- info: List and describe the fields present in the dataset. -->
<!-- scope: telescope -->
- `id`: ID of the data example
- `title`: the Wikipedia article's title
- `paragraphs`: the ranked list of paragraphs from the set of crawled texts
- `summary`: a list of sentences together with their corresponding topic labels (see the access sketch after this list)
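A rough access sketch for these fields (it follows the loading snippet shown earlier in this card; configuration and split names may need adjusting):
```python
from datasets import load_dataset

# Sketch only: reads the fields listed above; topic labels are integers from the
# per-domain topic model described later in this card.
data = load_dataset("GEM/wiki_cat_sum", split="validation")

example = data[0]
print(example["title"])
print(example["paragraphs"][0][:200])   # first ranked paragraph, truncated
for sentence, topic in zip(example["summary"]["text"], example["summary"]["topic"]):
    print(topic, sentence)
```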
#### Example Instance
<!-- info: Provide a JSON formatted example of a typical instance in the dataset. -->
<!-- scope: periscope -->
This is a truncated example from the animal setting:
```
{'gem_id': 'animal-train-1',
'gem_parent_id': 'animal-train-1',
'id': '2652',
'paragraphs': ["lytrosis (hulst) of louisiana vernon antoine brou jr. 2005. southern lepidopterists' news, 27: 7 ., ..."],
'references': ['lytrosis unitaria , the common lytrosis moth, is a species of moth of the geometridae family. it is found in north america, including arkansas, georgia, iowa , massachusetts, and wisconsin. the wingspan is about 50 mm. the larvae feed on rosa, crataegus, amelanchier, acer, quercus and viburnum species.'],
'summary': {'text': ['lytrosis unitaria , the common lytrosis moth , is a species of moth of the geometridae family .',
'it is found in north america , including arkansas , georgia , iowa , massachusetts , new hampshire , new jersey , new york , north carolina , ohio , oklahoma , ontario , pennsylvania , south carolina , tennessee , texas , virginia , west virginia and wisconsin .',
'the wingspan is about 50 mm .',
'the larvae feed on rosa , crataegus , amelanchier , acer , quercus and viburnum species . '],
'topic': [29, 20, 9, 8]},
'target': 'lytrosis unitaria , the common lytrosis moth, is a species of moth of the geometridae family. it is found in north america, including arkansas, georgia, iowa , massachusetts, and wisconsin. the wingspan is about 50 mm. the larvae feed on rosa, crataegus, amelanchier, acer, quercus and viburnum species.',
'title': 'lytrosis unitaria'}
```
#### Data Splits
<!-- info: Describe and name the splits in the dataset if there are more than one. -->
<!-- scope: periscope -->
The number of instances in the train/validation/test splits is 50,938/2,855/2,831.
#### Splitting Criteria
<!-- info: Describe any criteria for splitting the data, if used. If there are differences between the splits (e.g., if the training annotations are machine-generated and the dev and test ones are created by humans, or if different numbers of annotators contributed to each example), describe them here. -->
<!-- scope: microscope -->
The data was split i.i.d., i.e. uniformly at random into training, validation, and test sets.
## Dataset in GEM
### Rationale for Inclusion in GEM
#### Why is the Dataset in GEM?
<!-- info: What does this dataset contribute toward better generation evaluation and why is it part of GEM? -->
<!-- scope: microscope -->
Evaluation of models' performance on noisy (document, summary) pairs and long inputs.
It also evaluates models' capabilities to generalise and to mitigate biases.
#### Similar Datasets
<!-- info: Do other datasets for the high level task exist? -->
<!-- scope: telescope -->
no
#### Unique Language Coverage
<!-- info: Does this dataset cover other languages than other datasets for the same task? -->
<!-- scope: periscope -->
no
#### Ability that the Dataset measures
<!-- info: What aspect of model ability can be measured with this dataset? -->
<!-- scope: periscope -->
Capabilities to generalise, mitigate biases, factual correctness.
### GEM-Specific Curation
#### Modificatied for GEM?
<!-- info: Has the GEM version of the dataset been modified in any way (data, processing, splits) from the original curated data? -->
<!-- scope: telescope -->
yes
#### GEM Modifications
<!-- info: What changes have been made to he original dataset? -->
<!-- scope: periscope -->
`annotations added`
#### Modification Details
<!-- info: For each of these changes, described them in more details and provided the intended purpose of the modification -->
<!-- scope: microscope -->
We provide topic labels for summary sentences.
#### Additional Splits?
<!-- info: Does GEM provide additional splits to the dataset? -->
<!-- scope: telescope -->
no
### Getting Started with the Task
#### Pointers to Resources
<!-- info: Getting started with in-depth research on the task. Add relevant pointers to resources that researchers can consult when they want to get started digging deeper into the task. -->
<!-- scope: microscope -->
- [Generating Wikipedia by Summarizing Long Sequences](https://arxiv.org/abs/1801.10198)
- [Generating Summaries with Topic Templates and Structured Convolutional Decoders](https://arxiv.org/abs/1906.04687)
- [Noisy Self-Knowledge Distillation for Text Summarization](https://arxiv.org/abs/2009.07032)
And all references in these papers.
## Previous Results
### Previous Results
#### Measured Model Abilities
<!-- info: What aspect of model ability can be measured with this dataset? -->
<!-- scope: telescope -->
Capabilities to generalise, mitigate biases, factual correctness.
#### Metrics
<!-- info: What metrics are typically used for this task? -->
<!-- scope: periscope -->
`ROUGE`, `BERT-Score`, `MoverScore`, `Other: Other Metrics`
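As an illustration, ROUGE (the first metric listed) can be computed with the Hugging Face `evaluate` package; this is a generic sketch, not the authors' evaluation pipeline:
```python
import evaluate

# Generic ROUGE sketch; requires the `evaluate` and `rouge_score` packages.
rouge = evaluate.load("rouge")
predictions = ["the larvae feed on rosa and crataegus species ."]
references = ["the larvae feed on rosa , crataegus , amelanchier , acer , quercus and viburnum species ."]
print(rouge.compute(predictions=predictions, references=references))
```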
#### Other Metrics
<!-- info: Definitions of other metrics -->
<!-- scope: periscope -->
- Abstract/Copy
- Factual accuracy based on the score of (Goodrich et al., 2019) and the relation extraction system of (Sorokin and Gurevych, 2017).
#### Proposed Evaluation
<!-- info: List and describe the purpose of the metrics and evaluation methodology (including human evaluation) that the dataset creators used when introducing this task. -->
<!-- scope: microscope -->
Human-based evaluations are Question Answering and Ranking (of Content, Fluency and Repetition).
#### Previous results available?
<!-- info: Are previous results available? -->
<!-- scope: telescope -->
yes
#### Other Evaluation Approaches
<!-- info: What evaluation approaches have others used? -->
<!-- scope: periscope -->
Those listed above.
#### Relevant Previous Results
<!-- info: What are the most relevant previous results for this task/dataset? -->
<!-- scope: microscope -->
Generating Summaries with Topic Templates and Structured Convolutional Decoders
https://arxiv.org/abs/1906.04687
Noisy Self-Knowledge Distillation for Text Summarization
https://arxiv.org/abs/2009.07032
## Dataset Curation
### Original Curation
#### Original Curation Rationale
<!-- info: Original curation rationale -->
<!-- scope: telescope -->
The dataset is a subset of the WikiSum (Liu et al., 2018) dataset focusing on summaries of entities in three domains (Film, Company, and Animal). It is multi-document summarisation where input-output pairs for each example entity are created as follows. The input is a set of paragraphs collected from i) documents in the Reference section of the entity's Wikipedia page plus ii) documents collected from the top ten search results after querying Google search engine with the entity name. The output summary is the Wikipedia abstract for the entity.
#### Communicative Goal
<!-- info: What was the communicative goal? -->
<!-- scope: periscope -->
Generate descriptive summaries within specific domains, where certain topics are discussed, generally in specific orders.
#### Sourced from Different Sources
<!-- info: Is the dataset aggregated from different data sources? -->
<!-- scope: telescope -->
yes
#### Source Details
<!-- info: List the sources (one per line) -->
<!-- scope: periscope -->
WikiSum (Liu et al., 2018)
### Language Data
#### How was Language Data Obtained?
<!-- info: How was the language data obtained? -->
<!-- scope: telescope -->
`Other`
#### Topics Covered
<!-- info: Does the language in the dataset focus on specific topics? How would you describe them? -->
<!-- scope: periscope -->
The dataset and task focuses on summaries for entities in three domains: Company, Film, and Animal.
#### Data Validation
<!-- info: Was the text validated by a different worker or a data curator? -->
<!-- scope: telescope -->
not validated
#### Data Preprocessing
<!-- info: How was the text data pre-processed? (Enter N/A if the text was not pre-processed) -->
<!-- scope: microscope -->
Summary sentences are associated with a topic label. There is a topic model for each domain.
#### Was Data Filtered?
<!-- info: Were text instances selected or filtered? -->
<!-- scope: telescope -->
not filtered
### Structured Annotations
#### Additional Annotations?
<!-- quick -->
<!-- info: Does the dataset have additional annotations for each instance? -->
<!-- scope: telescope -->
automatically created
#### Annotation Service?
<!-- info: Was an annotation service used? -->
<!-- scope: telescope -->
no
#### Annotation Values
<!-- info: Purpose and values for each annotation -->
<!-- scope: microscope -->
Each summary sentence was annotated with a topic label. There is a topic model for each of the three domains. These labels were used to guide a hierarchical decoder.
#### Any Quality Control?
<!-- info: Quality control measures? -->
<!-- scope: telescope -->
validated by data curators
#### Quality Control Details
<!-- info: Describe the quality control measures that were taken. -->
<!-- scope: microscope -->
Manual inspection of a sample of topics assigned to sentences. The number of topics was selected based on the performance of the summarisation model.
### Consent
#### Any Consent Policy?
<!-- info: Was there a consent policy involved when gathering the data? -->
<!-- scope: telescope -->
no
#### Justification for Using the Data
<!-- info: If not, what is the justification for reusing the data? -->
<!-- scope: microscope -->
The dataset is based on Wikipedia plus referenced and retrieved documents crawled from the Web.
### Private Identifying Information (PII)
#### Contains PII?
<!-- quick -->
<!-- info: Does the source language data likely contain Personal Identifying Information about the data creators or subjects? -->
<!-- scope: telescope -->
unlikely
#### Any PII Identification?
<!-- info: Did the curators use any automatic/manual method to identify PII in the dataset? -->
<!-- scope: periscope -->
no identification
### Maintenance
#### Any Maintenance Plan?
<!-- info: Does the original dataset have a maintenance plan? -->
<!-- scope: telescope -->
no
## Broader Social Context
### Previous Work on the Social Impact of the Dataset
#### Usage of Models based on the Data
<!-- info: Are you aware of cases where models trained on the task featured in this dataset ore related tasks have been used in automated systems? -->
<!-- scope: telescope -->
no
### Impact on Under-Served Communities
#### Addresses needs of underserved Communities?
<!-- info: Does this dataset address the needs of communities that are traditionally underserved in language technology, and particularly language generation technology? Communities may be underserved for exemple because their language, language variety, or social or geographical context is underepresented in NLP and NLG resources (datasets and models). -->
<!-- scope: telescope -->
no
### Discussion of Biases
#### Any Documented Social Biases?
<!-- info: Are there documented social biases in the dataset? Biases in this context are variations in the ways members of different social categories are represented that can have harmful downstream consequences for members of the more disadvantaged group. -->
<!-- scope: telescope -->
yes
#### Links and Summaries of Analysis Work
<!-- info: Provide links to and summaries of works analyzing these biases. -->
<!-- scope: microscope -->
This dataset is based on Wikipedia, so bias analyses of other Wikipedia-based datasets potentially apply to WikiCatSum as well. For instance, see the analysis for the ToTTo dataset in [1].
[1] Automatic Construction of Evaluation Suites for Natural Language Generation Datasets
https://openreview.net/forum?id=CSi1eu_2q96
## Considerations for Using the Data
### PII Risks and Liability
### Licenses
#### Copyright Restrictions on the Dataset
<!-- info: Based on your answers in the Intended Use part of the Data Overview Section, which of the following best describe the copyright and licensing status of the dataset? -->
<!-- scope: periscope -->
`public domain`
#### Copyright Restrictions on the Language Data
<!-- info: Based on your answers in the Language part of the Data Curation Section, which of the following best describe the copyright and licensing status of the underlying language data? -->
<!-- scope: periscope -->
`public domain`
### Known Technical Limitations
|
imvladikon/hebrew_news | 2022-07-09T19:53:05.000Z | [
"task_categories:summarization",
"task_ids:news-articles-summarization",
"annotations_creators:no-annotation",
"language_creators:other",
"multilinguality:monolingual",
"size_categories:100K<n<1M",
"source_datasets:original",
"language:he",
"license:other",
"region:us"
] | imvladikon | null | 1 | 77 | ---
annotations_creators:
- no-annotation
language_creators:
- other
language:
- he
license:
- other
multilinguality:
- monolingual
size_categories:
- 100K<n<1M
source_datasets:
- original
task_categories:
- summarization
task_ids:
- news-articles-summarization
---
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
```
id - article id
articleBody - article main content
description - short version of the article, description of the article
headline - headline of the article
title - title of the article
```
| ||
bond005/sberdevices_golos_10h_crowd | 2022-10-27T04:42:07.000Z | [
"task_categories:automatic-speech-recognition",
"task_categories:audio-classification",
"annotations_creators:expert-generated",
"language_creators:crowdsourced",
"language_creators:expert-generated",
"multilinguality:monolingual",
"size_categories:10K<n<100k",
"source_datasets:extended",
"language:ru",
"license:other",
"arxiv:2106.10161",
"region:us"
] | bond005 | null | null | null | 0 | 77 | ---
pretty_name: Golos
annotations_creators:
- expert-generated
language_creators:
- crowdsourced
- expert-generated
language:
- ru
license:
- other
multilinguality:
- monolingual
paperswithcode_id: golos
size_categories:
- 10K<n<100k
source_datasets:
- extended
task_categories:
- automatic-speech-recognition
- audio-classification
---
# Dataset Card for sberdevices_golos_10h_crowd
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [Golos ASR corpus](https://www.openslr.org/114)
- **Repository:** [Golos dataset](https://github.com/sberdevices/golos)
- **Paper:** [Golos: Russian Dataset for Speech Research](https://arxiv.org/pdf/2106.10161.pdf)
- **Leaderboard:** [The 🤗 Speech Bench](https://huggingface.co/spaces/huggingface/hf-speech-bench)
- **Point of Contact:** [Nikolay Karpov](mailto:karpnv@gmail.com)
### Dataset Summary
Sberdevices Golos is a corpus of approximately 1200 hours of 16 kHz Russian speech from the crowd (read speech) and farfield (communication with smart devices) domains, prepared by the SberDevices team (Alexander Denisenko, Angelina Kovalenko, Fedor Minkin, and Nikolay Karpov). The data is derived from a crowd-sourcing platform and has been manually annotated.
The authors divide the dataset into train and test subsets. The training subset includes approximately 1000 hours. For experiments with a limited number of records, the authors also provide shorter training subsets: 100 hours, 10 hours, 1 hour, and 10 minutes.
This dataset is a simpler version of the above mentioned Golos:
- it includes the crowd domain only (without any sound from the farfield domain);
- validation split is built on the 1-hour training subset;
- training split corresponds to the 10-hour training subset without sounds from the 1-hour training subset;
- test split is a full original test split.
### Supported Tasks and Leaderboards
- `automatic-speech-recognition`: The dataset can be used to train a model for Automatic Speech Recognition (ASR). The model is presented with an audio file and asked to transcribe the audio file to written text. The most common evaluation metric is the word error rate (WER). The task has an active Hugging Face leaderboard which can be found at https://huggingface.co/spaces/huggingface/hf-speech-bench. The leaderboard ranks models uploaded to the Hub based on their WER.
### Languages
The audio is in Russian.
## Dataset Structure
### Data Instances
A typical data point comprises the audio data, usually called `audio`, and its transcription, called `transcription`. No additional information about the speaker or the passage containing the transcription is provided.
```
{'audio': {'path': None,
'array': array([ 3.05175781e-05, 3.05175781e-05, 0.00000000e+00, ...,
        -1.09863281e-03, -7.93457031e-04, -1.52587891e-04], dtype=float64),
'sampling_rate': 16000},
'transcription': 'шестнадцатая часть сезона пять сериала лемони сникет тридцать три несчастья'}
```
### Data Fields
- audio: A dictionary containing the path to the downloaded audio file, the decoded audio array, and the sampling rate. Note that when accessing the audio column: `dataset[0]["audio"]` the audio file is automatically decoded and resampled to `dataset.features["audio"].sampling_rate`. Decoding and resampling of a large number of audio files might take a significant amount of time. Thus it is important to first query the sample index before the `"audio"` column, *i.e.* `dataset[0]["audio"]` should **always** be preferred over `dataset["audio"][0]`.
- transcription: the transcription of the audio file.
### Data Splits
This dataset is a simpler version of the original Golos:
- it includes the crowd domain only (without any sound from the farfield domain);
- validation split is built on the 1-hour training subset;
- training split corresponds to the 10-hour training subset without sounds from the 1-hour training subset;
- test split is a full original test split.
| | Train | Validation | Test |
| ----- | ------ | ---------- | ----- |
| examples | 7993 | 793 | 9994 |
| hours | 8.9h | 0.9h | 11.2h |
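A minimal loading sketch for one of these splits (following the access pattern recommended in the Data Fields section; recent `datasets` versions may require `trust_remote_code=True` for script-based datasets):
```python
from datasets import load_dataset

# Sketch only: loads the validation split from the table above and decodes one sample.
golos = load_dataset("bond005/sberdevices_golos_10h_crowd", split="validation")

sample = golos[0]                     # query the index first, then the "audio" column
waveform = sample["audio"]["array"]
rate = sample["audio"]["sampling_rate"]
print(sample["transcription"], waveform.shape, rate)
```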
## Dataset Creation
### Curation Rationale
[Needs More Information]
### Source Data
#### Initial Data Collection and Normalization
[Needs More Information]
#### Who are the source language producers?
[Needs More Information]
### Annotations
#### Annotation process
All recorded audio files were manually annotated on the crowd-sourcing platform.
#### Who are the annotators?
[Needs More Information]
### Personal and Sensitive Information
The dataset consists of people who have donated their voice. You agree to not attempt to determine the identity of speakers in this dataset.
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[Needs More Information]
## Additional Information
### Dataset Curators
The dataset was initially created by Alexander Denisenko, Angelina Kovalenko, Fedor Minkin, and Nikolay Karpov.
### Licensing Information
[Public license with attribution and conditions reserved](https://github.com/sberdevices/golos/blob/master/license/en_us.pdf)
### Citation Information
```
@misc{karpov2021golos,
author = {Karpov, Nikolay and Denisenko, Alexander and Minkin, Fedor},
title = {Golos: Russian Dataset for Speech Research},
publisher = {arXiv},
year = {2021},
url = {https://arxiv.org/abs/2106.10161}
}
```
### Contributions
Thanks to [@bond005](https://github.com/bond005) for adding this dataset.
|
Cohere/miracl-zh-queries-22-12 | 2023-02-06T11:55:33.000Z | [
"task_categories:text-retrieval",
"task_ids:document-retrieval",
"annotations_creators:expert-generated",
"multilinguality:multilingual",
"language:zh",
"license:apache-2.0",
"region:us"
] | Cohere | null | null | null | 21 | 77 | ---
annotations_creators:
- expert-generated
language:
- zh
multilinguality:
- multilingual
size_categories: []
source_datasets: []
tags: []
task_categories:
- text-retrieval
license:
- apache-2.0
task_ids:
- document-retrieval
---
# MIRACL (zh) embedded with cohere.ai `multilingual-22-12` encoder
We encoded the [MIRACL dataset](https://huggingface.co/miracl) using the [cohere.ai](https://txt.cohere.ai/multilingual/) `multilingual-22-12` embedding model.
The query embeddings can be found in [Cohere/miracl-zh-queries-22-12](https://huggingface.co/datasets/Cohere/miracl-zh-queries-22-12) and the corpus embeddings can be found in [Cohere/miracl-zh-corpus-22-12](https://huggingface.co/datasets/Cohere/miracl-zh-corpus-22-12).
For the original datasets, see [miracl/miracl](https://huggingface.co/datasets/miracl/miracl) and [miracl/miracl-corpus](https://huggingface.co/datasets/miracl/miracl-corpus).
Dataset info:
> MIRACL 🌍🙌🌏 (Multilingual Information Retrieval Across a Continuum of Languages) is a multilingual retrieval dataset that focuses on search across 18 different languages, which collectively encompass over three billion native speakers around the world.
>
> The corpus for each language is prepared from a Wikipedia dump, where we keep only the plain text and discard images, tables, etc. Each article is segmented into multiple passages using WikiExtractor based on natural discourse units (e.g., `\n\n` in the wiki markup). Each of these passages comprises a "document" or unit of retrieval. We preserve the Wikipedia article title of each passage.
## Embeddings
We compute the embeddings for `title+" "+text` using our `multilingual-22-12` embedding model, a state-of-the-art model that works for semantic search in 100 languages. If you want to learn more about this model, have a look at the [cohere.ai multilingual embedding model](https://txt.cohere.ai/multilingual/).
## Loading the dataset
In [miracl-zh-corpus-22-12](https://huggingface.co/datasets/Cohere/miracl-zh-corpus-22-12) we provide the corpus embeddings. Note, depending on the selected split, the respective files can be quite large.
You can either load the dataset like this:
```python
from datasets import load_dataset
docs = load_dataset(f"Cohere/miracl-zh-corpus-22-12", split="train")
```
Or you can also stream it without downloading it before:
```python
from datasets import load_dataset
docs = load_dataset(f"Cohere/miracl-zh-corpus-22-12", split="train", streaming=True)
for doc in docs:
docid = doc['docid']
title = doc['title']
text = doc['text']
emb = doc['emb']
```
## Search
Have a look at [miracl-zh-queries-22-12](https://huggingface.co/datasets/Cohere/miracl-zh-queries-22-12) where we provide the query embeddings for the MIRACL dataset.
To search the documents, you must use the **dot-product** as the similarity function.
Compare the query embeddings with the document embeddings either through a vector database (recommended) or by computing the dot product directly.
A full search example:
```python
# Attention! For large datasets, this requires a lot of memory to store
# all document embeddings and to compute the dot product scores.
# Only use this for smaller datasets. For large datasets, use a vector DB
from datasets import load_dataset
import torch
#Load documents + embeddings
docs = load_dataset(f"Cohere/miracl-zh-corpus-22-12", split="train")
doc_embeddings = torch.tensor(docs['emb'])
# Load queries
queries = load_dataset(f"Cohere/miracl-zh-queries-22-12", split="dev")
# Select the first query as example
qid = 0
query = queries[qid]
query_embedding = torch.tensor(queries['emb'])
# Compute dot score between query embedding and document embeddings
dot_scores = torch.mm(query_embedding, doc_embeddings.transpose(0, 1))
top_k = torch.topk(dot_scores, k=3)
# Print results
print("Query:", query['query'])
for doc_id in top_k.indices[0].tolist():
print(docs[doc_id]['title'])
print(docs[doc_id]['text'])
```
You can get embeddings for new queries using our API:
```python
#Run: pip install cohere
import cohere
co = cohere.Client(f"{api_key}") # You should add your cohere API Key here :))
texts = ['my search query']
response = co.embed(texts=texts, model='multilingual-22-12')
query_embedding = response.embeddings[0] # Get the embedding for the first text
```
## Performance
In the following table we compare the cohere multilingual-22-12 model with Elasticsearch version 8.6.0 lexical search (title and passage indexed as independent fields). Note that Elasticsearch doesn't support all languages that are part of the MIRACL dataset.
We compute nDCG@10 (a ranking-based metric), as well as hit@3: is at least one relevant document among the top-3 results? We find that hit@3 is easier to interpret, as it reports the share of queries for which a relevant document is found among the top-3 results.
Note: MIRACL only annotated a small fraction of passages (10 per query) for relevancy. Especially for larger Wikipedias (like English), we often found many more relevant passages. This is known as annotation holes. Real nDCG@10 and hit@3 performance is likely higher than depicted.
| Model | cohere multilingual-22-12 nDCG@10 | cohere multilingual-22-12 hit@3 | ES 8.6.0 nDCG@10 | ES 8.6.0 acc@3 |
|---|---|---|---|---|
| miracl-ar | 64.2 | 75.2 | 46.8 | 56.2 |
| miracl-bn | 61.5 | 75.7 | 49.2 | 60.1 |
| miracl-de | 44.4 | 60.7 | 19.6 | 29.8 |
| miracl-en | 44.6 | 62.2 | 30.2 | 43.2 |
| miracl-es | 47.0 | 74.1 | 27.0 | 47.2 |
| miracl-fi | 63.7 | 76.2 | 51.4 | 61.6 |
| miracl-fr | 46.8 | 57.1 | 17.0 | 21.6 |
| miracl-hi | 50.7 | 62.9 | 41.0 | 48.9 |
| miracl-id | 44.8 | 63.8 | 39.2 | 54.7 |
| miracl-ru | 49.2 | 66.9 | 25.4 | 36.7 |
| **Avg** | 51.7 | 67.5 | 34.7 | 46.0 |
Further languages (not supported by Elasticsearch):
| Model | cohere multilingual-22-12 nDCG@10 | cohere multilingual-22-12 hit@3 |
|---|---|---|
| miracl-fa | 44.8 | 53.6 |
| miracl-ja | 49.0 | 61.0 |
| miracl-ko | 50.9 | 64.8 |
| miracl-sw | 61.4 | 74.5 |
| miracl-te | 67.8 | 72.3 |
| miracl-th | 60.2 | 71.9 |
| miracl-yo | 56.4 | 62.2 |
| miracl-zh | 43.8 | 56.5 |
| **Avg** | 54.3 | 64.6 |
|
dipesh/Intent-Classification-large | 2023-02-04T22:18:08.000Z | [
"region:us"
] | dipesh | null | null | null | 2 | 77 | ---
dataset_info:
features:
- name: text
dtype: string
- name: intent
dtype: string
- name: input_ids
sequence: int32
- name: attention_mask
sequence: int8
- name: label
dtype:
class_label:
names:
'0': others
'1': places near me
'2': send whatsapp message
'3': greet and hello hi kind of things, general check in
'4': play games
'5': tell me news
'6': covid cases
'7': tell me about
'8': volume control
'9': open website
'10': play on youtube
'11': tell me joke
'12': send email
'13': goodbye
'14': take screenshot
'15': download youtube video
'16': asking weather
'17': asking date
'18': asking time
'19': i am bored
'20': click photo
'21': what can you do
splits:
- name: train
num_bytes: 1594125
num_examples: 15311
- name: validation
num_bytes: 175519
num_examples: 1702
download_size: 677155
dataset_size: 1769644
---
# Dataset Card for "Intent-Classification-large"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
mstz/student_performance | 2023-04-07T14:54:45.000Z | [
"task_categories:tabular-classification",
"size_categories:n<1K",
"language:en",
"license:cc",
"student performance",
"tabular_classification",
"binary_classification",
"region:us"
] | mstz | null | null | null | 0 | 77 | ---
language:
- en
tags:
- student performance
- tabular_classification
- binary_classification
pretty_name: Student Performance
size_categories:
- n<1K
task_categories:
- tabular-classification
configs:
- encoding
- math
- writing
- reading
license: cc
---
# Student performance
The [Student performance dataset](https://www.kaggle.com/datasets/ulrikthygepedersen/student_performances) from Kaggle.
| **Configuration** | **Task** | **Description** |
|-------------------|---------------------------|-----------------------------------------------------------------|
| encoding | | Encoding dictionary showing original values of encoded features.|
| math | Binary classification | Has the student passed the math exam? |
| writing | Binary classification | Has the student passed the writing exam? |
| reading | Binary classification | Has the student passed the reading exam? |
# Usage
```python
from datasets import load_dataset
dataset = load_dataset("mstz/student_performance", "math")["train"]
```
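For standard tabular workflows, the loaded split can be converted to a pandas DataFrame (a sketch continuing the example above):
```python
from datasets import load_dataset

# Sketch only: converts the "math" configuration to pandas; column names depend
# on the chosen configuration.
dataset = load_dataset("mstz/student_performance", "math")["train"]
df = dataset.to_pandas()
print(df.shape)
print(df.head())
```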
# Features
|**Feature** |**Type** |
|-----------------------------------|-----------|
|`is_male` |`bool` |
|`ethnicity` |`string` |
|`parental_level_of_education` |`int8` |
|`has_standard_lunch` |`bool` |
|`has_completed_preparation_test` |`bool` |
|`reading_score` |`int64` |
|`writing_score` |`int64` |
|`math_score` |`int64` | |
Francesco/avatar-recognition-nuexe | 2023-03-30T09:30:13.000Z | [
"task_categories:object-detection",
"annotations_creators:crowdsourced",
"language_creators:found",
"multilinguality:monolingual",
"size_categories:1K<n<10K",
"source_datasets:original",
"language:en",
"license:cc",
"rf100",
"region:us"
] | Francesco | null | null | null | 0 | 77 | ---
dataset_info:
features:
- name: image_id
dtype: int64
- name: image
dtype: image
- name: width
dtype: int32
- name: height
dtype: int32
- name: objects
sequence:
- name: id
dtype: int64
- name: area
dtype: int64
- name: bbox
sequence: float32
length: 4
- name: category
dtype:
class_label:
names:
'0': avatar
'1': Character
annotations_creators:
- crowdsourced
language_creators:
- found
language:
- en
license:
- cc
multilinguality:
- monolingual
size_categories:
- 1K<n<10K
source_datasets:
- original
task_categories:
- object-detection
task_ids: []
pretty_name: avatar-recognition-nuexe
tags:
- rf100
---
# Dataset Card for avatar-recognition-nuexe
**The original COCO dataset is stored at `dataset.tar.gz`.**
## Dataset Description
- **Homepage:** https://universe.roboflow.com/object-detection/avatar-recognition-nuexe
- **Point of Contact:** francesco.zuppichini@gmail.com
### Dataset Summary
avatar-recognition-nuexe
### Supported Tasks and Leaderboards
- `object-detection`: The dataset can be used to train a model for Object Detection.
### Languages
English
## Dataset Structure
### Data Instances
A data point comprises an image and its object annotations.
```
{
'image_id': 15,
'image': <PIL.JpegImagePlugin.JpegImageFile image mode=RGB size=640x640 at 0x2373B065C18>,
'width': 964043,
'height': 640,
'objects': {
'id': [114, 115, 116, 117],
'area': [3796, 1596, 152768, 81002],
'bbox': [
[302.0, 109.0, 73.0, 52.0],
[810.0, 100.0, 57.0, 28.0],
[160.0, 31.0, 248.0, 616.0],
[741.0, 68.0, 202.0, 401.0]
],
'category': [4, 4, 0, 0]
}
}
```
### Data Fields
- `image_id`: the image id
- `image`: `PIL.Image.Image` object containing the image. Note that when accessing the image column: `dataset[0]["image"]` the image file is automatically decoded. Decoding of a large number of image files might take a significant amount of time. Thus it is important to first query the sample index before the `"image"` column, *i.e.* `dataset[0]["image"]` should **always** be preferred over `dataset["image"][0]`
- `width`: the image width
- `height`: the image height
- `objects`: a dictionary containing bounding box metadata for the objects present on the image
- `id`: the annotation id
- `area`: the area of the bounding box
- `bbox`: the object's bounding box (in the [coco](https://albumentations.ai/docs/getting_started/bounding_boxes_augmentation/#coco) format; see the conversion sketch after this list)
- `category`: the object's category.
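A small sketch converting the COCO-style boxes above to corner coordinates (recent `datasets` versions may require `trust_remote_code=True` for script-based datasets):
```python
from datasets import load_dataset

# Sketch only: converts COCO [x, y, width, height] boxes to [x_min, y_min, x_max, y_max].
ds = load_dataset("Francesco/avatar-recognition-nuexe", split="train")

example = ds[0]
objects = example["objects"]
for (x, y, w, h), category in zip(objects["bbox"], objects["category"]):
    print(category, [x, y, x + w, y + h])
```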
#### Who are the annotators?
Annotators are Roboflow users
## Additional Information
### Licensing Information
See original homepage https://universe.roboflow.com/object-detection/avatar-recognition-nuexe
### Citation Information
```
@misc{ avatar-recognition-nuexe,
title = { avatar recognition nuexe Dataset },
type = { Open Source Dataset },
author = { Roboflow 100 },
howpublished = { \url{ https://universe.roboflow.com/object-detection/avatar-recognition-nuexe } },
url = { https://universe.roboflow.com/object-detection/avatar-recognition-nuexe },
journal = { Roboflow Universe },
publisher = { Roboflow },
year = { 2022 },
month = { nov },
note = { visited on 2023-03-29 },
}"
```
### Contributions
Thanks to [@mariosasko](https://github.com/mariosasko) for adding this dataset. |
jensjorisdecorte/Synthetic-ESCO-skill-sentences | 2023-07-25T21:40:31.000Z | [
"task_categories:text-classification",
"size_categories:100K<n<1M",
"language:en",
"license:cc",
"Skill Extraction",
"Synthetic Data",
"arxiv:2307.10778",
"region:us"
] | jensjorisdecorte | null | null | null | 2 | 77 | ---
license: cc
task_categories:
- text-classification
language:
- en
tags:
- Skill Extraction
- Synthetic Data
pretty_name: Synthetic ESCO skill sentences
size_categories:
- 100K<n<1M
---
# Synthetic job ads for all ESCO skills
## Dataset Description
- **Homepage:** coming soon
- **Repository:** coming soon
- **Paper:** https://arxiv.org/abs/2307.10778
- **Point of Contact:** jensjoris@techwolf.ai
### Dataset Summary
This dataset contains 10 synthetically generated job ad sentences for almost all (99.5%) skills in ESCO v1.1.0.
### Languages
We use the English version of ESCO, and all generated sentences are in English.
## Dataset Structure
The dataset consists of 138,260 `(sentence, skill)` pairs.
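A minimal loading sketch (the column names are not documented in this card, so the example only prints the first record):
```python
from datasets import load_dataset

# Sketch only: field names are not documented in this card and may differ.
ds = load_dataset("jensjorisdecorte/Synthetic-ESCO-skill-sentences", split="train")
print(len(ds))   # expected to be 138,260 per the description above
print(ds[0])     # one (sentence, skill) pair
```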
### Citation Information
[More Information Needed] |
tingchih/multi-class | 2023-09-12T04:21:02.000Z | [
"region:us"
] | tingchih | null | null | null | 0 | 77 | ---
dataset_info:
features:
- name: text
dtype: string
- name: label
dtype: int64
splits:
- name: train
num_bytes: 98926083
num_examples: 570999
- name: test
num_bytes: 42106324
num_examples: 245116
download_size: 77717077
dataset_size: 141032407
---
# Dataset Card for "multi-class"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
Tadorne/amendments | 2023-09-18T10:24:18.000Z | [
"language:en",
"license:eupl-1.1",
"region:us"
] | Tadorne | null | null | null | 0 | 77 | ---
language:
- en
license: eupl-1.1
pretty_name: Amendments EP - Legislature 7 & 8
configs:
- config_name: ALDE
data_files: alde.jsonl.gz
- config_name: ECR
data_files: ecr.jsonl.gz
- config_name: EFD
data_files: efd.jsonl.gz
- config_name: ENF
data_files: enf.jsonl.gz
- config_name: EPP
data_files: epp.jsonl.gz
- config_name: EUL
data_files: eul.jsonl.gz
- config_name: GEFA
data_files: gefa.jsonl.gz
- config_name: ID
data_files: id.jsonl.gz
- config_name: NA
data_files: na.jsonl.gz
- config_name: RENEW
data_files: renew.jsonl.gz
- config_name: SD
data_files: sd.jsonl.gz
---
# 🇪🇺 🗳️ European Parliament Amendments : Legislature 7 & 8
Source: https://zenodo.org/record/3757714
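A minimal loading sketch (each political group listed in the YAML configuration above is a separate config; the files are gzipped JSON Lines, which `datasets` reads natively):
```python
from datasets import load_dataset

# Sketch only: configuration names ("EPP", "SD", "ALDE", ...) come from the YAML above;
# unnamed data_files default to a single "train" split.
epp = load_dataset("Tadorne/amendments", "EPP", split="train")
print(epp[0])
```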
|
manu/french_librispeech_text_only | 2023-09-19T10:21:43.000Z | [
"region:us"
] | manu | null | null | null | 0 | 77 | ---
dataset_info:
features:
- name: text
dtype: string
- name: id
dtype: string
splits:
- name: train
num_bytes: 62120933
num_examples: 258213
download_size: 37959942
dataset_size: 62120933
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "french_librispeech_text_only"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
holistic-ai/LLM_Audit_Toxicity_Prompts | 2023-09-27T17:29:07.000Z | [
"license:mit",
"region:us"
] | holistic-ai | null | null | null | 0 | 77 | ---
license: mit
---
|
cfq | 2023-04-05T09:42:18.000Z | [
"task_categories:question-answering",
"task_categories:other",
"task_ids:open-domain-qa",
"task_ids:closed-domain-qa",
"annotations_creators:no-annotation",
"language_creators:expert-generated",
"multilinguality:monolingual",
"size_categories:100K<n<1M",
"source_datasets:original",
"language:en",
"license:cc-by-4.0",
"compositionality",
"arxiv:1912.09713",
"region:us"
] | null | The CFQ dataset (and it's splits) for measuring compositional generalization.
See https://arxiv.org/abs/1912.09713.pdf for background.
Example usage:
data = datasets.load_dataset('cfq/mcd1') | @inproceedings{Keysers2020,
title={Measuring Compositional Generalization: A Comprehensive Method on
Realistic Data},
author={Daniel Keysers and Nathanael Sch\"{a}rli and Nathan Scales and
Hylke Buisman and Daniel Furrer and Sergii Kashubin and
Nikola Momchev and Danila Sinopalnikov and Lukasz Stafiniak and
Tibor Tihon and Dmitry Tsarkov and Xiao Wang and Marc van Zee and
Olivier Bousquet},
booktitle={ICLR},
year={2020},
url={https://arxiv.org/abs/1912.09713.pdf},
} | null | 2 | 76 | ---
annotations_creators:
- no-annotation
language_creators:
- expert-generated
language:
- en
license:
- cc-by-4.0
multilinguality:
- monolingual
pretty_name: Compositional Freebase Questions
size_categories:
- 100K<n<1M
source_datasets:
- original
task_categories:
- question-answering
- other
task_ids:
- open-domain-qa
- closed-domain-qa
paperswithcode_id: cfq
tags:
- compositionality
dataset_info:
- config_name: mcd1
features:
- name: question
dtype: string
- name: query
dtype: string
splits:
- name: train
num_bytes: 37408806
num_examples: 95743
- name: test
num_bytes: 5446503
num_examples: 11968
download_size: 267599061
dataset_size: 42855309
- config_name: mcd2
features:
- name: question
dtype: string
- name: query
dtype: string
splits:
- name: train
num_bytes: 39424657
num_examples: 95743
- name: test
num_bytes: 5314019
num_examples: 11968
download_size: 267599061
dataset_size: 44738676
- config_name: mcd3
features:
- name: question
dtype: string
- name: query
dtype: string
splits:
- name: train
num_bytes: 38316345
num_examples: 95743
- name: test
num_bytes: 5244503
num_examples: 11968
download_size: 267599061
dataset_size: 43560848
- config_name: question_complexity_split
features:
- name: question
dtype: string
- name: query
dtype: string
splits:
- name: train
num_bytes: 39989433
num_examples: 98999
- name: test
num_bytes: 5781561
num_examples: 10340
download_size: 267599061
dataset_size: 45770994
- config_name: question_pattern_split
features:
- name: question
dtype: string
- name: query
dtype: string
splits:
- name: train
num_bytes: 41217350
num_examples: 95654
- name: test
num_bytes: 5179936
num_examples: 11909
download_size: 267599061
dataset_size: 46397286
- config_name: query_complexity_split
features:
- name: question
dtype: string
- name: query
dtype: string
splits:
- name: train
num_bytes: 40270175
num_examples: 100654
- name: test
num_bytes: 5634924
num_examples: 9512
download_size: 267599061
dataset_size: 45905099
- config_name: query_pattern_split
features:
- name: question
dtype: string
- name: query
dtype: string
splits:
- name: train
num_bytes: 40811284
num_examples: 94600
- name: test
num_bytes: 5268358
num_examples: 12589
download_size: 267599061
dataset_size: 46079642
- config_name: random_split
features:
- name: question
dtype: string
- name: query
dtype: string
splits:
- name: train
num_bytes: 41279218
num_examples: 95744
- name: test
num_bytes: 5164923
num_examples: 11967
download_size: 267599061
dataset_size: 46444141
---
# Dataset Card for "cfq"
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [https://github.com/google-research/google-research/tree/master/cfq](https://github.com/google-research/google-research/tree/master/cfq)
- **Repository:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Paper:** https://arxiv.org/abs/1912.09713
- **Point of Contact:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Size of downloaded dataset files:** 2.14 GB
- **Size of the generated dataset:** 362.07 MB
- **Total amount of disk used:** 2.50 GB
### Dataset Summary
The Compositional Freebase Questions (CFQ) is a dataset that is specifically designed to measure compositional
generalization. CFQ is a simple yet realistic, large dataset of natural language questions and answers that also
provides for each question a corresponding SPARQL query against the Freebase knowledge base. This means that CFQ can
also be used for semantic parsing.
### Supported Tasks and Leaderboards
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Languages
English (`en`).
## Dataset Structure
### Data Instances
#### mcd1
- **Size of downloaded dataset files:** 267.60 MB
- **Size of the generated dataset:** 42.90 MB
- **Total amount of disk used:** 310.49 MB
An example of 'train' looks as follows.
```
{
'query': 'SELECT count(*) WHERE {\n?x0 a ns:people.person .\n?x0 ns:influence.influence_node.influenced M1 .\n?x0 ns:influence.influence_node.influenced M2 .\n?x0 ns:people.person.spouse_s/ns:people.marriage.spouse|ns:fictional_universe.fictional_character.married_to/ns:fictional_universe.marriage_of_fictional_characters.spouses ?x1 .\n?x1 a ns:film.cinematographer .\nFILTER ( ?x0 != ?x1 )\n}',
'question': 'Did a person marry a cinematographer , influence M1 , and influence M2'
}
```
#### mcd2
- **Size of downloaded dataset files:** 267.60 MB
- **Size of the generated dataset:** 44.77 MB
- **Total amount of disk used:** 312.38 MB
An example of 'train' looks as follows.
```
{
'query': 'SELECT count(*) WHERE {\n?x0 ns:people.person.parents|ns:fictional_universe.fictional_character.parents|ns:organization.organization.parent/ns:organization.organization_relationship.parent ?x1 .\n?x1 a ns:people.person .\nM1 ns:business.employer.employees/ns:business.employment_tenure.person ?x0 .\nM1 ns:business.employer.employees/ns:business.employment_tenure.person M2 .\nM1 ns:business.employer.employees/ns:business.employment_tenure.person M3 .\nM1 ns:business.employer.employees/ns:business.employment_tenure.person M4 .\nM5 ns:business.employer.employees/ns:business.employment_tenure.person ?x0 .\nM5 ns:business.employer.employees/ns:business.employment_tenure.person M2 .\nM5 ns:business.employer.employees/ns:business.employment_tenure.person M3 .\nM5 ns:business.employer.employees/ns:business.employment_tenure.person M4\n}',
'question': "Did M1 and M5 employ M2 , M3 , and M4 and employ a person 's child"
}
```
#### mcd3
- **Size of downloaded dataset files:** 267.60 MB
- **Size of the generated dataset:** 43.60 MB
- **Total amount of disk used:** 311.20 MB
An example of 'train' looks as follows.
```
{
"query": "SELECT /producer M0 . /director M0 . ",
"question": "Who produced and directed M0?"
}
```
#### query_complexity_split
- **Size of downloaded dataset files:** 267.60 MB
- **Size of the generated dataset:** 45.95 MB
- **Total amount of disk used:** 313.55 MB
An example of 'train' looks as follows.
```
{
"query": "SELECT /producer M0 . /director M0 . ",
"question": "Who produced and directed M0?"
}
```
#### query_pattern_split
- **Size of downloaded dataset files:** 267.60 MB
- **Size of the generated dataset:** 46.12 MB
- **Total amount of disk used:** 313.72 MB
An example of 'train' looks as follows.
```
{
"query": "SELECT /producer M0 . /director M0 . ",
"question": "Who produced and directed M0?"
}
```
### Data Fields
The data fields are the same among all splits and configurations:
- `question`: a `string` feature.
- `query`: a `string` feature.
### Data Splits
| name | train | test |
|---------------------------|-------:|------:|
| mcd1 | 95743 | 11968 |
| mcd2 | 95743 | 11968 |
| mcd3 | 95743 | 11968 |
| query_complexity_split | 100654 | 9512 |
| query_pattern_split | 94600 | 12589 |
| question_complexity_split | 98999 | 10340 |
| question_pattern_split | 95654 | 11909 |
| random_split | 95744 | 11967 |
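A minimal loading sketch for one of these configurations (the configuration names follow the table above; recent `datasets` versions may require `trust_remote_code=True` for script-based datasets):
```python
from datasets import load_dataset

# Sketch only: loads the MCD1 configuration listed in the splits table above.
cfq = load_dataset("cfq", "mcd1")

example = cfq["train"][0]
print(example["question"])
print(example["query"])
```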
## Dataset Creation
### Curation Rationale
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### Who are the source language producers?
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Annotations
#### Annotation process
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### Who are the annotators?
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Personal and Sensitive Information
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Discussion of Biases
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Other Known Limitations
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Additional Information
### Dataset Curators
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Licensing Information
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Citation Information
```
@inproceedings{Keysers2020,
title={Measuring Compositional Generalization: A Comprehensive Method on
Realistic Data},
author={Daniel Keysers and Nathanael Sch\"{a}rli and Nathan Scales and
Hylke Buisman and Daniel Furrer and Sergii Kashubin and
Nikola Momchev and Danila Sinopalnikov and Lukasz Stafiniak and
Tibor Tihon and Dmitry Tsarkov and Xiao Wang and Marc van Zee and
Olivier Bousquet},
booktitle={ICLR},
year={2020},
url={https://arxiv.org/abs/1912.09713.pdf},
}
```
### Contributions
Thanks to [@thomwolf](https://github.com/thomwolf), [@patrickvonplaten](https://github.com/patrickvonplaten), [@lewtun](https://github.com/lewtun), [@brainshawn](https://github.com/brainshawn) for adding this dataset. |
nid989/FNC-1 | 2021-12-27T11:04:06.000Z | [
"region:us"
] | nid989 | null | null | null | 2 | 76 | ### Dataset Summary
The data provided consists of (headline, body, stance) instances, where the stance is one of {unrelated, discuss, agree, disagree}.
**Input**
* A headline and a body text - either from the same news article or from two different articles.
**Output**
* Classify the stance of the body text relative to the claim made in the headline into one of four categories:
* Agrees: The body text agrees with the headline.
* Disagrees: The body text disagrees with the headline.
* Discusses: The body text discusses the same topic as the headline, but does not take a position.
* Unrelated: The body text discusses a different topic than the headline
The distribution of Stance classes in the entire dataset is as follows:
| rows | unrelated | discuss | agree | disagree |
|---------|-----------|---------|-----------|----------- |
| 49972 | 0.73131 | 0.17828 | 0.0736012 | 0.016809 |
### Source Data
[FNC-1 Official webpage.](http://www.fakenewschallenge.org/)
- annotations_creators: found
- language_creators: found
- languages: en-US
- licenses: apache-2.0
- multilingualism: monolingual
- pretty_name: FNC-1
- size_categories: unknown
- source_datasets: original
- task_categories:text-classification
- task_ids
- multi-class-classification
- natural-language-inference
- multi-label-classification
- intent-classification |
scikit-learn/iris | 2022-06-20T14:17:01.000Z | [
"license:cc0-1.0",
"region:us"
] | scikit-learn | null | null | null | 0 | 76 | ---
license: cc0-1.0
---
## Iris Species Dataset
The Iris dataset was used in R.A. Fisher's classic 1936 paper, The Use of Multiple Measurements in Taxonomic Problems, and can also be found on the UCI Machine Learning Repository.
It includes three iris species with 50 samples each as well as some properties about each flower. One flower species is linearly separable from the other two, but the other two are not linearly separable from each other.
The dataset is taken from [the UCI Machine Learning Repository's copy on Kaggle](https://www.kaggle.com/datasets/uciml/iris).
The following description is taken from the UCI Machine Learning Repository.
This is perhaps the best known database to be found in the pattern recognition literature. Fisher's paper is a classic in the field and is referenced frequently to this day. (See Duda & Hart, for example.) The data set contains 3 classes of 50 instances each, where each class refers to a type of iris plant. One class is linearly separable from the other 2; the latter are NOT linearly separable from each other.
Predicted attribute: class of iris plant.
This is an exceedingly simple domain.
This data differs from the data presented in Fisher's article (identified by Steve Chadwick, spchadwick '@' espeedaz.net). The 35th sample should be: 4.9,3.1,1.5,0.2,"Iris-setosa", where the error is in the fourth feature. The 38th sample should be: 4.9,3.6,1.4,0.1,"Iris-setosa", where the errors are in the second and third features.
Features in this dataset are the following:
- sepal length in cm
- sepal width in cm
- petal length in cm
- petal width in cm
- class:
- Iris Setosa
- Iris Versicolour
- Iris Virginica
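A minimal loading sketch, assuming the CSV in this repository is picked up automatically by `datasets` (the split name and exact column names are assumptions):
```python
from datasets import load_dataset

# Minimal sketch: column names and the "train" split are assumptions here.
iris = load_dataset("scikit-learn/iris", split="train")

print(iris.features)  # sepal/petal measurements plus the species label
print(iris[0])
```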
|
domenicrosati/clinical_trial_texts | 2022-12-05T17:34:13.000Z | [
"region:us"
] | domenicrosati | null | null | null | 3 | 76 | ---
dataset_info:
features:
- name: text
dtype: string
- name: trial_id
dtype: string
- name: input_ids
sequence: int32
- name: token_type_ids
sequence: int8
- name: attention_mask
sequence: int8
splits:
- name: train
num_bytes: 22784316806
num_examples: 434977
download_size: 5376659326
dataset_size: 22784316806
---
# Dataset Card for "clinical_trial_texts"
These are the texts of clinical trials downloaded from https://ClinicalTrials.gov/AllAPIJSON.zip on Dec 3rd, 2022.
The total number of trials is 434,977.
The number of tokens is 2,184,397,556 (about 2.1 billion).
The tokens here are from the default BERT tokenizer on Hugging Face.
This data can be used for pretraining in the clinical trial and biomedical domains.
If you use this data, please acknowledge @domenicrosati and link to this dataset.
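A minimal loading sketch; since the train split is roughly 22 GB, streaming avoids a full download (field names are taken from the `dataset_info` above):
```python
from itertools import islice

from datasets import load_dataset

# Stream the ~22 GB train split instead of downloading it in full.
trials = load_dataset(
    "domenicrosati/clinical_trial_texts", split="train", streaming=True
)

# Peek at the first few trials; "trial_id" and "text" come from dataset_info.
for trial in islice(trials, 3):
    print(trial["trial_id"], trial["text"][:200])
```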
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
LightChen2333/OpenSLU | 2023-02-22T05:25:40.000Z | [
"license:mit",
"region:us"
] | LightChen2333 | xxx | xxx | null | 1 | 76 | ---
license: mit
---
|
datadrivenscience/ship-detection | 2023-03-02T16:09:14.000Z | [
"task_categories:object-detection",
"region:us"
] | datadrivenscience | null | null | null | 14 | 76 | ---
task_categories:
- object-detection
---
# Dataset Card for Ship Detection
Link to [Ship Detection Competition](https://huggingface.co/spaces/competitions/ship-detection)
By accepting this dataset, you accept the rules of the Ship Detection competition.
# Organizer
Organizer of this competition is [Data-Driven Science](https://datadrivenscience.com/).
<img src="https://datadrivenscience.com/wp-content/uploads/2022/12/DDS-Logo.png" width="200" height="100">
# Email Usage
By accepting this dataset, you consent that your email will be used for communication purposes by Data-Driven Science.
We do not share nor sell our mailing list. Your information remains confidential. You may unsubscribe at any time.
|
gbharti/wealth-alpaca_lora | 2023-04-07T09:12:57.000Z | [
"language:en",
"region:us"
] | gbharti | null | null | null | 12 | 76 | ---
language:
- en
---
This dataset is a combination of Stanford's Alpaca (https://github.com/tatsu-lab/stanford_alpaca) and FiQA (https://sites.google.com/view/fiqa/), with another 1.3k custom pairs generated using GPT-3.5.
Script for tuning through Kaggle's (https://www.kaggle.com) free resources using PEFT/LoRA: https://www.kaggle.com/code/gbhacker23/wealth-alpaca-lora |
mstz/shuttle | 2023-04-16T17:58:41.000Z | [
"task_categories:tabular-classification",
"size_categories:10K<n<100K",
"language:en",
"license:cc",
"shuttle",
"tabular_classification",
"binary_classification",
"multiclass_classification",
"UCI",
"region:us"
] | mstz | null | @misc{misc_statlog_(shuttle)_148,
title = {{Statlog (Shuttle)}},
howpublished = {UCI Machine Learning Repository},
note = {{DOI}: \\url{10.24432/C5WS31}}
} | null | 0 | 76 | ---
language:
- en
tags:
- shuttle
- tabular_classification
- binary_classification
- multiclass_classification
- UCI
pretty_name: Shuttle
size_categories:
- 10K<n<100K
task_categories:
- tabular-classification
configs:
- shuttle
- shuttle_binary
license: cc
---
# Shuttle
The [Shuttle dataset](https://archive-beta.ics.uci.edu/dataset/146/statlog+shuttle+satellite) from the [UCI repository](https://archive-beta.ics.uci.edu/).
# Configurations and tasks
| **Configuration** | **Task** | **Description** |
|-----------------------|---------------------------|-------------------------|
| shuttle | Multiclass classification.| |
| shuttle_0 | Binary classification. | Is the image of class 0? |
| shuttle_1 | Binary classification. | Is the image of class 1? |
| shuttle_2 | Binary classification. | Is the image of class 2? |
| shuttle_3 | Binary classification. | Is the image of class 3? |
| shuttle_4 | Binary classification. | Is the image of class 4? |
| shuttle_5 | Binary classification. | Is the image of class 5? |
| shuttle_6 | Binary classification. | Is the image of class 6? | |
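A minimal loading sketch (the config name follows the table above; note that the YAML header lists `shuttle` and `shuttle_binary`, and recent versions of `datasets` may require `trust_remote_code=True` for script-based datasets):
```python
from datasets import load_dataset

# Sketch: config name taken from the table above; recent `datasets` releases
# may additionally need trust_remote_code=True for script-based datasets.
shuttle = load_dataset("mstz/shuttle", "shuttle")["train"]

print(shuttle.features)
print(shuttle[0])
```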
mstz/steel_plates | 2023-04-16T18:03:07.000Z | [
"task_categories:tabular-classification",
"size_categories:1K<n<5K",
"language:en",
"license:cc",
"steel_plates",
"tabular_classification",
"binary_classification",
"multiclass_classification",
"UCI",
"region:us"
] | mstz | null | @misc{misc_steel_plates_faults_198,
author = {Buscema,M, Terzi,S & Tastle,W},
title = {{Steel Plates Faults}},
year = {2010},
howpublished = {UCI Machine Learning Repository},
note = {{DOI}: \\url{10.24432/C5J88N}}
} | null | 1 | 76 | ---
language:
- en
tags:
- steel_plates
- tabular_classification
- binary_classification
- multiclass_classification
- UCI
pretty_name: Steel Plates Faults
size_categories:
- 1K<n<5K
task_categories:
- tabular-classification
configs:
- steel_plates
- steel_plates_0
- steel_plates_1
- steel_plates_2
- steel_plates_3
- steel_plates_4
- steel_plates_5
- steel_plates_6
license: cc
---
# Steel Plates Faults
The [Steel Plates dataset](https://archive-beta.ics.uci.edu/dataset/198/steel+plates+faults) from the [UCI repository](https://archive-beta.ics.uci.edu/).
# Configurations and tasks
| **Configuration** | **Task** | **Description** |
|-----------------------|---------------------------|-------------------------|
| steel_plates | Multiclass classification.| |
| steel_plates_0 | Binary classification. | Is the input of class 0? |
| steel_plates_1 | Binary classification. | Is the input of class 1? |
| steel_plates_2 | Binary classification. | Is the input of class 2? |
| steel_plates_3 | Binary classification. | Is the input of class 3? |
| steel_plates_4 | Binary classification. | Is the input of class 4? |
| steel_plates_5 | Binary classification. | Is the input of class 5? |
| steel_plates_6 | Binary classification. | Is the input of class 6? | |
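A minimal loading sketch for one of the binary configurations listed above (recent versions of `datasets` may require `trust_remote_code=True` for script-based datasets):
```python
from datasets import load_dataset

# Sketch: load the "is the input of class 0?" binary configuration.
faults = load_dataset("mstz/steel_plates", "steel_plates_0")["train"]

print(faults.features)  # tabular features plus the binary target
```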
liuhaotian/LLaVA-CC3M-Pretrain-595K | 2023-07-06T08:51:35.000Z | [
"language:en",
"license:other",
"region:us"
] | liuhaotian | null | null | null | 36 | 76 | ---
license: other
language:
- en
pretty_name: LLaVA CC3M Pretrain 595K
---
# LLaVA Visual Instruct CC3M 595K Pretrain Dataset Card
## Dataset details
**Dataset type:**
LLaVA Visual Instruct CC3M Pretrain 595K is a subset of the CC-3M dataset, filtered with a more balanced concept coverage distribution.
Captions are also associated with [BLIP synthetic captions](https://github.com/salesforce/BLIP#pre-training-datasets-download) for reference.
It is constructed for the pretraining stage for feature alignment in visual instruction tuning.
We aim to build large multimodal models towards GPT-4 vision/language capability.
**Dataset date:**
LLaVA Visual Instruct CC3M Pretrain 595K was created in April 2023.
**Dataset structure:**
- `chat.json` contains the multimodal synthesized conversation from the image-caption pairs, by adding randomly selected instructions like: "Describe this image". It is used for pretraining in LLaVA. We use the raw CC-3M caption as the default answer.
- `metadata.json` contains the metadata of each sample: the image index in CC-3M, the image file name, the image URL, the original CC-3M caption, and the synthetic BLIP caption. Note that ~10% of the samples are not associated with a BLIP caption yet in this release.
- `images.zip` contains all raw images of the filtered subset from CC-3M. **Important notice: Upon the request from the community, as ~15% images of the original CC-3M dataset are no longer accessible, we upload `images.zip` for better reproducing our work in research community. It should not be used for any other purpose. The use of these images must comply with the CC-3M license. This may be taken down when requested by the original CC-3M dataset owner or owners of the referenced images.**
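A minimal sketch of reading the files described above, assuming `chat.json` sits at the root of this dataset repository (the record layout printed is simply whatever the file contains):
```python
import json

from huggingface_hub import hf_hub_download

# Download chat.json from the dataset repo (assumed to be at the repo root).
path = hf_hub_download(
    repo_id="liuhaotian/LLaVA-CC3M-Pretrain-595K",
    filename="chat.json",
    repo_type="dataset",
)

with open(path) as f:
    conversations = json.load(f)

print(len(conversations))
print(conversations[0])  # one synthesized image-caption conversation
```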
**Paper or resources for more information:**
https://llava-vl.github.io/
**License:**
Must comply with the licenses of [CC-3M](https://github.com/google-research-datasets/conceptual-captions/blob/master/LICENSE) and [BLIP](https://github.com/salesforce/BLIP/blob/main/LICENSE.txt) (if you use their synthetic captions).
CC-3M
The dataset may be freely used for any purpose, although acknowledgement of
Google LLC ("Google") as the data source would be appreciated. The dataset is
provided "AS IS" without any warranty, express or implied. Google disclaims all
liability for any damages, direct or indirect, resulting from the use of the
dataset.
**Where to send questions or comments about the model:**
https://github.com/haotian-liu/LLaVA/issues
## Intended use
**Primary intended uses:**
The primary use of LLaVA is research on large multimodal models and chatbots.
**Primary intended users:**
The primary intended users of the model are researchers and hobbyists in computer vision, natural language processing, machine learning, and artificial intelligence. |
wtcherr/LAION10K | 2023-06-05T15:19:52.000Z | [
"region:us"
] | wtcherr | null | null | null | 0 | 76 | ---
dataset_info:
features:
- name: image
dtype: image
- name: prompt
dtype: string
splits:
- name: train
num_bytes: 171092053.43
num_examples: 8226
download_size: 170695611
dataset_size: 171092053.43
---
# Dataset Card for "LAION10K"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
mattymchen/refinedweb-3m | 2023-06-12T06:01:04.000Z | [
"region:us"
] | mattymchen | null | null | null | 2 | 76 | ---
dataset_info:
features:
- name: text
dtype: string
splits:
- name: train
num_bytes: 7834920949
num_examples: 3000000
download_size: 4904877808
dataset_size: 7834920949
---
# Dataset Card for "refinedweb-3m"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
C-MTEB/TNews-classification | 2023-07-28T13:31:30.000Z | [
"region:us"
] | C-MTEB | null | null | null | 0 | 76 | ---
configs:
- config_name: default
data_files:
- split: test
path: data/test-*
- split: train
path: data/train-*
- split: validation
path: data/validation-*
dataset_info:
features:
- name: text
dtype: string
- name: label
dtype:
class_label:
names:
'0': '100'
'1': '101'
'2': '102'
'3': '103'
'4': '104'
'5': '106'
'6': '107'
'7': '108'
'8': '109'
'9': '110'
'10': '112'
'11': '113'
'12': '114'
'13': '115'
'14': '116'
- name: idx
dtype: int32
splits:
- name: test
num_bytes: 810970
num_examples: 10000
- name: train
num_bytes: 4245677
num_examples: 53360
- name: validation
num_bytes: 797922
num_examples: 10000
download_size: 4697191
dataset_size: 5854569
---
# Dataset Card for "TNews-classification"
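A minimal loading sketch; the `label` feature is a `ClassLabel` whose names are the original TNews category codes (`'100'`, `'101'`, ...) listed in the `dataset_info` above:
```python
from datasets import load_dataset

# Sketch: map the integer label back to the original TNews category code.
tnews = load_dataset("C-MTEB/TNews-classification", split="train")

label_feature = tnews.features["label"]
example = tnews[0]
print(example["text"])
print(example["label"], "->", label_feature.int2str(example["label"]))
```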
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
Riyazmk/mentalhealth | 2023-08-03T11:11:45.000Z | [
"license:other",
"region:us"
] | Riyazmk | null | null | null | 1 | 76 | ---
license: other
---
|
imoxto/prompt_injection_cleaned_dataset-v2 | 2023-08-08T09:30:19.000Z | [
"region:us"
] | imoxto | null | null | null | 1 | 76 | ---
dataset_info:
features:
- name: model
dtype: string
- name: text
dtype: string
- name: labels
dtype: int64
splits:
- name: train
num_bytes: 670958021
num_examples: 535105
download_size: 79246765
dataset_size: 670958021
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "prompt_injection_cleaned_dataset-v2"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
ashhadahsan/amazon_subtheme | 2023-10-02T17:29:54.000Z | [
"region:us"
] | ashhadahsan | null | null | null | 0 | 76 | ---
dataset_info:
features:
- name: Transcript
dtype: string
- name: Review Issue
dtype: string
splits:
- name: train
num_bytes: 301970
num_examples: 780
download_size: 0
dataset_size: 301970
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "amazon_subtheme"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
manishiitg/aditi-test | 2023-09-19T17:23:43.000Z | [
"region:us"
] | manishiitg | null | null | null | 0 | 76 | ---
dataset_info:
features:
- name: input
dtype: string
- name: output
dtype: string
- name: english
dtype: string
- name: chat_id
dtype: string
- name: session_id
dtype: string
- name: ts
dtype: int64
- name: classify_response
dtype: string
- name: meta_data
dtype: string
- name: type
dtype: string
- name: conversion
dtype: string
- name: has_conversation
dtype: bool
- name: intent
dtype: string
- name: category
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 2009477.9869809647
num_examples: 1030
download_size: 413940
dataset_size: 2009477.9869809647
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "aditi-test"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
manu/opus100-en-fr | 2023-09-18T16:15:31.000Z | [
"region:us"
] | manu | null | null | null | 0 | 76 | ---
configs:
- config_name: default
data_files:
- split: test
path: data/test-*
- split: train
path: data/train-*
- split: validation
path: data/validation-*
dataset_info:
features:
- name: id
dtype: int64
- name: text
dtype: string
splits:
- name: test
num_bytes: 479723
num_examples: 2000
- name: train
num_bytes: 206440450
num_examples: 1000000
- name: validation
num_bytes: 491476
num_examples: 2000
download_size: 148902270
dataset_size: 207411649
---
# Dataset Card for "opus100-en-fr"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
shossain/govreport-summarization-tokenized | 2023-09-20T07:04:40.000Z | [
"region:us"
] | shossain | null | null | null | 0 | 76 | ---
dataset_info:
features:
- name: input_ids
sequence: int32
- name: attention_mask
sequence: int8
- name: labels
sequence: int64
splits:
- name: train
num_bytes: 69604
num_examples: 973
download_size: 22673
dataset_size: 69604
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "govreport-summarization-tokenized"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
Amey-HH/polypdata | 2023-09-29T00:51:52.000Z | [
"region:us"
] | Amey-HH | null | null | null | 0 | 76 | Entry not found |
pavithrav/modified_dataset_emotion | 2023-10-02T12:45:26.000Z | [
"region:us"
] | pavithrav | null | null | null | 0 | 76 | Entry not found |
YaHi/english_AAAI_Math | 2023-10-09T21:06:27.000Z | [
"region:us"
] | YaHi | null | null | null | 0 | 76 | ---
dataset_info:
features:
- name: dataset_version
dtype: timestamp[s]
- name: queId
dtype: string
- name: difficulty
dtype: string
- name: qtype
dtype: string
- name: problem
dtype: string
- name: knowledge_point_routes
sequence: string
splits:
- name: train
num_bytes: 2228695
num_examples: 5927
download_size: 854269
dataset_size: 2228695
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "english_AAAI_Math"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
hind_encorp | 2022-11-03T16:15:40.000Z | [
"task_categories:translation",
"annotations_creators:expert-generated",
"language_creators:crowdsourced",
"language_creators:machine-generated",
"multilinguality:translation",
"size_categories:100K<n<1M",
"source_datasets:original",
"language:en",
"language:hi",
"license:cc-by-nc-sa-3.0",
"region:us"
] | null | HindEnCorp parallel texts (sentence-aligned) come from the following sources:
Tides, which contains 50K sentence pairs taken mainly from news articles. This dataset was originally collected for the DARPA-TIDES surprise-language contest in 2002, later refined at IIIT Hyderabad and provided for the NLP Tools Contest at ICON 2008 (Venkatapathy, 2008).
Commentaries by Daniel Pipes contain 322 articles in English written by a journalist Daniel Pipes and translated into Hindi.
EMILLE. This corpus (Baker et al., 2002) consists of three components: monolingual, parallel and annotated corpora. There are fourteen monolingual sub-corpora, including both written and (for some languages) spoken data for fourteen South Asian languages. The EMILLE monolingual corpora contain in total 92,799,000 words (including 2,627,000 words of transcribed spoken data for Bengali, Gujarati, Hindi, Punjabi and Urdu). The parallel corpus consists of 200,000 words of text in English and its accompanying translations into Hindi and other languages.
Smaller datasets as collected by Bojar et al. (2010) include the corpus used at ACL 2005 (a subcorpus of EMILLE), a corpus of named entities from Wikipedia (crawled in 2009), and Agriculture domain parallel corpus.

For the current release, we are extending the parallel corpus using these sources:
Intercorp (Čermák and Rosen, 2012) is a large multilingual parallel corpus of 32 languages including Hindi. The central language used for alignment is Czech. Intercorp’s core texts amount to 202 million words. These core texts are most suitable for us because their sentence alignment is manually checked and therefore very reliable. They cover predominately short stories and novels. There are seven Hindi texts in Intercorp. Unfortunately, only for three of them the English translation is available; the other four are aligned only with Czech texts. The Hindi subcorpus of Intercorp contains 118,000 words in Hindi.
TED talks, held in various languages, primarily English, are equipped with transcripts and these are translated into 102 languages. There are 179 talks for which a Hindi translation is available.
The Indic multi-parallel corpus (Birch et al., 2011; Post et al., 2012) is a corpus of texts from Wikipedia translated from the respective Indian language into English by non-expert translators hired over Mechanical Turk. The quality is thus somewhat mixed in many respects starting from typesetting and punctuation over capitalization, spelling, word choice to sentence structure. A little bit of control could be in principle obtained from the fact that every input sentence was translated 4 times. We used the 2012 release of the corpus.
Launchpad.net is a software collaboration platform that hosts many open-source projects and facilitates also collaborative localization of the tools. We downloaded all revisions of all the hosted projects and extracted the localization (.po) files.
Other smaller datasets. This time, we added Wikipedia entities as crawled in 2013 (including any morphological variants of the named entity that appears on the Hindi variant of the Wikipedia page) and words, word examples and quotes from the Shabdkosh online dictionary. | @InProceedings{hindencorp05:lrec:2014,
author = {Ond{\v{r}}ej Bojar and Vojt{\v{e}}ch Diatka
and Pavel Rychl{\'{y}} and Pavel Stra{\v{n}}{\'{a}}k
and V{\'{\i}}t Suchomel and Ale{\v{s}} Tamchyna and Daniel Zeman},
title = "{HindEnCorp - Hindi-English and Hindi-only Corpus for Machine
Translation}",
booktitle = {Proceedings of the Ninth International Conference on Language
Resources and Evaluation (LREC'14)},
year = {2014},
month = {may},
date = {26-31},
address = {Reykjavik, Iceland},
editor = {Nicoletta Calzolari (Conference Chair) and Khalid Choukri and
Thierry Declerck and Hrafn Loftsson and Bente Maegaard and Joseph Mariani
and Asuncion Moreno and Jan Odijk and Stelios Piperidis},
publisher = {European Language Resources Association (ELRA)},
isbn = {978-2-9517408-8-4},
language = {english}
} | null | 1 | 75 | ---
annotations_creators:
- expert-generated
language_creators:
- crowdsourced
- machine-generated
language:
- en
- hi
license:
- cc-by-nc-sa-3.0
multilinguality:
- translation
size_categories:
- 100K<n<1M
source_datasets:
- original
task_categories:
- translation
task_ids: []
paperswithcode_id: hindencorp
pretty_name: HindEnCorp
dataset_info:
features:
- name: id
dtype: string
- name: source
dtype: string
- name: alignment_type
dtype: string
- name: alignment_quality
dtype: string
- name: translation
dtype:
translation:
languages:
- en
- hi
splits:
- name: train
num_bytes: 78945714
num_examples: 273885
download_size: 23899723
dataset_size: 78945714
---
# Dataset Card for HindEnCorp
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** https://lindat.mff.cuni.cz/repository/xmlui/handle/11858/00-097C-0000-0023-625F-0
- **Repository:** https://lindat.mff.cuni.cz/repository/xmlui/
- **Paper:** http://www.lrec-conf.org/proceedings/lrec2014/pdf/835_Paper.pdf
- **Leaderboard:**
- **Point of Contact:**
### Dataset Summary
HindEnCorp parallel texts (sentence-aligned) come from the following sources:
Tides, which contains 50K sentence pairs taken mainly from news articles. This dataset was originally collected for the DARPA-TIDES surprise-language contest in 2002, later refined at IIIT Hyderabad and provided for the NLP Tools Contest at ICON 2008 (Venkatapathy, 2008).
Commentaries by Daniel Pipes contain 322 articles in English written by a journalist Daniel Pipes and translated into Hindi.
EMILLE. This corpus (Baker et al., 2002) consists of three components: monolingual, parallel and annotated corpora. There are fourteen monolingual sub-corpora, including both written and (for some languages) spoken data for fourteen South Asian languages. The EMILLE monolingual corpora contain in total 92,799,000 words (including 2,627,000 words of transcribed spoken data for Bengali, Gujarati, Hindi, Punjabi and Urdu). The parallel corpus consists of 200,000 words of text in English and its accompanying translations into Hindi and other languages.
Smaller datasets as collected by Bojar et al. (2010) include the corpus used at ACL 2005 (a subcorpus of EMILLE), a corpus of named entities from Wikipedia (crawled in 2009), and Agriculture domain parallel corpus.

For the current release, we are extending the parallel corpus using these sources:
Intercorp (Čermák and Rosen, 2012) is a large multilingual parallel corpus of 32 languages including Hindi. The central language used for alignment is Czech. Intercorp’s core texts amount to 202 million words. These core texts are most suitable for us because their sentence alignment is manually checked and therefore very reliable. They cover predominately short stories and novels. There are seven Hindi texts in Intercorp. Unfortunately, only for three of them the English translation is available; the other four are aligned only with Czech texts. The Hindi subcorpus of Intercorp contains 118,000 words in Hindi.
TED talks, held in various languages, primarily English, are equipped with transcripts and these are translated into 102 languages. There are 179 talks for which a Hindi translation is available.
The Indic multi-parallel corpus (Birch et al., 2011; Post et al., 2012) is a corpus of texts from Wikipedia translated from the respective Indian language into English by non-expert translators hired over Mechanical Turk. The quality is thus somewhat mixed in many respects starting from typesetting and punctuation over capitalization, spelling, word choice to sentence structure. A little bit of control could be in principle obtained from the fact that every input sentence was translated 4 times. We used the 2012 release of the corpus.
Launchpad.net is a software collaboration platform that hosts many open-source projects and facilitates also collaborative localization of the tools. We downloaded all revisions of all the hosted projects and extracted the localization (.po) files.
Other smaller datasets. This time, we added Wikipedia entities as crawled in 2013 (including any morphological variants of the named entity that appears on the Hindi variant of the Wikipedia page) and words, word examples and quotes from the Shabdkosh online dictionary.
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
Hindi, English
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
HindEnCorp columns:
- source identifier (where do the segments come from)
- alignment type (number of English segments - number of Hindi segments)
- alignment quality, which is one of the following:
"manual" ... for sources that were sentence-aligned manually
"implied" ... for sources where one side was constructed by translating
segment by segment
float ... a value somehow reflecting the goodness of the automatic
alignment; not really reliable
- English segment or segments
- Hindi segment or segments
Each of the segment fields is in the plaintext or export format as described above.
If there are more than one segments on a line (e.g. for lines with alignment
type 2-1 where there are two English segments), then the segments are delimited
with `<s>` in the text field.
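A minimal sketch of recovering the individual segments from the fields described above (recent versions of `datasets` may require `trust_remote_code=True` for script-based datasets such as this one):
```python
from datasets import load_dataset

# Field names (alignment_type, translation, ...) follow the YAML header above.
hindencorp = load_dataset("hind_encorp", split="train")

row = hindencorp[0]
# For alignment types such as 2-1, the segments on one side are joined with "<s>".
english_segments = row["translation"]["en"].split("<s>")
hindi_segments = row["translation"]["hi"].split("<s>")
print(row["alignment_type"], english_segments, hindi_segments)
```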
### Data Splits
[More Information Needed]
## Dataset Creation
### Source Data
[More Information Needed]
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
Daniel Pipes; Baker et al., 2002; Bojar et al., 2010; Čermák and Rosen, 2012; Birch et al., 2011; Post et al., 2012
### Annotations
#### Annotation process
The first part of the data, Tides, was originally collected for the DARPA-TIDES surprise-language contest in 2002, later refined at IIIT Hyderabad and provided for the NLP Tools Contest at ICON 2008 (Venkatapathy, 2008).
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
Dataset provided for research purposes only. Please check dataset license for additional information.
## Additional Information
### Dataset Curators
Bojar, Ondřej; Diatka, Vojtěch; Straňák, Pavel; Tamchyna, Aleš; Zeman, Daniel
### Licensing Information
CC BY-NC-SA 3.0
### Citation Information
@InProceedings{hindencorp05:lrec:2014,
author = {Ond{\v{r}}ej Bojar and Vojt{\v{e}}ch Diatka
and Pavel Rychl{\'{y}} and Pavel Stra{\v{n}}{\'{a}}k
and V{\'{\i}}t Suchomel and Ale{\v{s}} Tamchyna and Daniel Zeman},
title = "{HindEnCorp - Hindi-English and Hindi-only Corpus for Machine
Translation}",
booktitle = {Proceedings of the Ninth International Conference on Language
Resources and Evaluation (LREC'14)},
year = {2014},
month = {may},
date = {26-31},
address = {Reykjavik, Iceland},
editor = {Nicoletta Calzolari (Conference Chair) and Khalid Choukri and
Thierry Declerck and Hrafn Loftsson and Bente Maegaard and Joseph Mariani
and Asuncion Moreno and Jan Odijk and Stelios Piperidis},
publisher = {European Language Resources Association (ELRA)},
isbn = {978-2-9517408-8-4},
language = {english}
}
### Contributions
Thanks to [@rahul-art](https://github.com/rahul-art) for adding this dataset. |
opus_gnome | 2023-06-01T14:59:53.000Z | [
"task_categories:translation",
"annotations_creators:found",
"language_creators:found",
"multilinguality:multilingual",
"size_categories:10K<n<100K",
"size_categories:1K<n<10K",
"size_categories:n<1K",
"source_datasets:original",
"language:af",
"language:am",
"language:an",
"language:ang",
"language:ar",
"language:as",
"language:ast",
"language:az",
"language:bal",
"language:be",
"language:bem",
"language:bg",
"language:bn",
"language:bo",
"language:br",
"language:brx",
"language:bs",
"language:ca",
"language:crh",
"language:cs",
"language:csb",
"language:cy",
"language:da",
"language:de",
"language:dv",
"language:dz",
"language:el",
"language:en",
"language:eo",
"language:es",
"language:et",
"language:eu",
"language:fa",
"language:fi",
"language:fo",
"language:fr",
"language:fur",
"language:fy",
"language:ga",
"language:gd",
"language:gl",
"language:gn",
"language:gu",
"language:gv",
"language:ha",
"language:he",
"language:hi",
"language:hr",
"language:hu",
"language:hy",
"language:ia",
"language:id",
"language:ig",
"language:io",
"language:is",
"language:it",
"language:ja",
"language:jbo",
"language:ka",
"language:kg",
"language:kk",
"language:km",
"language:kn",
"language:ko",
"language:kr",
"language:ks",
"language:ku",
"language:ky",
"language:la",
"language:lg",
"language:li",
"language:lo",
"language:lt",
"language:lv",
"language:mai",
"language:mg",
"language:mi",
"language:mk",
"language:ml",
"language:mn",
"language:mr",
"language:ms",
"language:mt",
"language:mus",
"language:my",
"language:nb",
"language:nds",
"language:ne",
"language:nhn",
"language:nl",
"language:nn",
"language:no",
"language:nqo",
"language:nr",
"language:nso",
"language:oc",
"language:or",
"language:os",
"language:pa",
"language:pl",
"language:ps",
"language:pt",
"language:quz",
"language:ro",
"language:ru",
"language:rw",
"language:si",
"language:sk",
"language:sl",
"language:so",
"language:sq",
"language:sr",
"language:st",
"language:sv",
"language:sw",
"language:szl",
"language:ta",
"language:te",
"language:tg",
"language:th",
"language:tk",
"language:tl",
"language:tr",
"language:ts",
"language:tt",
"language:tyj",
"language:ug",
"language:uk",
"language:ur",
"language:uz",
"language:vi",
"language:wa",
"language:xh",
"language:yi",
"language:yo",
"language:zh",
"language:zu",
"license:unknown",
"region:us"
] | null | A parallel corpus of GNOME localization files. Source: https://l10n.gnome.org
187 languages, 12,822 bitexts
total number of files: 113,344
total number of tokens: 267.27M
total number of sentence fragments: 58.12M | @InProceedings{TIEDEMANN12.463,
author = {J{\"o}rg Tiedemann},
title = {Parallel Data, Tools and Interfaces in OPUS},
booktitle = {Proceedings of the Eight International Conference on Language Resources and Evaluation (LREC'12)},
year = {2012},
month = {may},
date = {23-25},
address = {Istanbul, Turkey},
editor = {Nicoletta Calzolari (Conference Chair) and Khalid Choukri and Thierry Declerck and Mehmet Ugur Dogan and Bente Maegaard and Joseph Mariani and Jan Odijk and Stelios Piperidis},
publisher = {European Language Resources Association (ELRA)},
isbn = {978-2-9517408-7-7},
language = {english}
} | null | 1 | 75 | ---
annotations_creators:
- found
language_creators:
- found
language:
- af
- am
- an
- ang
- ar
- as
- ast
- az
- bal
- be
- bem
- bg
- bn
- bo
- br
- brx
- bs
- ca
- crh
- cs
- csb
- cy
- da
- de
- dv
- dz
- el
- en
- eo
- es
- et
- eu
- fa
- fi
- fo
- fr
- fur
- fy
- ga
- gd
- gl
- gn
- gu
- gv
- ha
- he
- hi
- hr
- hu
- hy
- ia
- id
- ig
- io
- is
- it
- ja
- jbo
- ka
- kg
- kk
- km
- kn
- ko
- kr
- ks
- ku
- ky
- la
- lg
- li
- lo
- lt
- lv
- mai
- mg
- mi
- mk
- ml
- mn
- mr
- ms
- mt
- mus
- my
- nb
- nds
- ne
- nhn
- nl
- nn
- 'no'
- nqo
- nr
- nso
- oc
- or
- os
- pa
- pl
- ps
- pt
- quz
- ro
- ru
- rw
- si
- sk
- sl
- so
- sq
- sr
- st
- sv
- sw
- szl
- ta
- te
- tg
- th
- tk
- tl
- tr
- ts
- tt
- tyj
- ug
- uk
- ur
- uz
- vi
- wa
- xh
- yi
- yo
- zh
- zu
language_bcp47:
- ar-TN
- az-IR
- bg-BG
- bn-IN
- da-DK
- de-CH
- en-AU
- en-CA
- en-GB
- en-NZ
- en-US
- en-ZA
- es-AR
- es-CL
- es-CO
- es-CR
- es-DO
- es-EC
- es-ES
- es-GT
- es-HN
- es-MX
- es-NI
- es-PA
- es-PE
- es-PR
- es-SV
- es-UY
- es-VE
- fa-IR
- hi-IN
- it-IT
- ms-MY
- nb-NO
- nn-NO
- no-NB
- pt-BR
- pt-PT
- sr-ME
- tg-TJ
- tl-PH
- tr-TR
- ur-PK
- vi-VN
- zh-CN
- zh-HK
- zh-TW
license:
- unknown
multilinguality:
- multilingual
size_categories:
- 10K<n<100K
- 1K<n<10K
- n<1K
source_datasets:
- original
task_categories:
- translation
task_ids: []
paperswithcode_id: null
pretty_name: OpusGnome
dataset_info:
- config_name: ar-bal
features:
- name: id
dtype: string
- name: translation
dtype:
translation:
languages:
- ar
- bal
splits:
- name: train
num_bytes: 5150
num_examples: 60
download_size: 2503
dataset_size: 5150
- config_name: bg-csb
features:
- name: id
dtype: string
- name: translation
dtype:
translation:
languages:
- bg
- csb
splits:
- name: train
num_bytes: 172545
num_examples: 1768
download_size: 29706
dataset_size: 172545
- config_name: ca-en_GB
features:
- name: id
dtype: string
- name: translation
dtype:
translation:
languages:
- ca
- en_GB
splits:
- name: train
num_bytes: 1007488
num_examples: 7982
download_size: 188727
dataset_size: 1007488
- config_name: cs-eo
features:
- name: id
dtype: string
- name: translation
dtype:
translation:
languages:
- cs
- eo
splits:
- name: train
num_bytes: 2895
num_examples: 73
download_size: 3055
dataset_size: 2895
- config_name: de-ha
features:
- name: id
dtype: string
- name: translation
dtype:
translation:
languages:
- de
- ha
splits:
- name: train
num_bytes: 22899
num_examples: 216
download_size: 5287
dataset_size: 22899
- config_name: cs-tk
features:
- name: id
dtype: string
- name: translation
dtype:
translation:
languages:
- cs
- tk
splits:
- name: train
num_bytes: 1197731
num_examples: 18686
download_size: 98044
dataset_size: 1197731
- config_name: da-vi
features:
- name: id
dtype: string
- name: translation
dtype:
translation:
languages:
- da
- vi
splits:
- name: train
num_bytes: 9372
num_examples: 149
download_size: 5432
dataset_size: 9372
- config_name: en_GB-my
features:
- name: id
dtype: string
- name: translation
dtype:
translation:
languages:
- en_GB
- my
splits:
- name: train
num_bytes: 3298074
num_examples: 28232
download_size: 362750
dataset_size: 3298074
- config_name: el-sk
features:
- name: id
dtype: string
- name: translation
dtype:
translation:
languages:
- el
- sk
splits:
- name: train
num_bytes: 12121
num_examples: 150
download_size: 6116
dataset_size: 12121
- config_name: de-tt
features:
- name: id
dtype: string
- name: translation
dtype:
translation:
languages:
- de
- tt
splits:
- name: train
num_bytes: 134978
num_examples: 2169
download_size: 15891
dataset_size: 134978
config_names:
- ar-bal
- bg-csb
- ca-en_GB
- cs-eo
- cs-tk
- da-vi
- de-ha
- de-tt
- el-sk
- en_GB-my
---
# Dataset Card for Opus Gnome
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** http://opus.nlpl.eu/GNOME.php
- **Repository:** None
- **Paper:** http://www.lrec-conf.org/proceedings/lrec2012/pdf/463_Paper.pdf
- **Leaderboard:** [More Information Needed]
- **Point of Contact:** [More Information Needed]
### Dataset Summary
To load a language pair that isn't covered by a named config, specify the two language codes as a pair.
You can find the valid pairs in the Homepage section of the Dataset Description: http://opus.nlpl.eu/GNOME.php
E.g.
`dataset = load_dataset("opus_gnome", lang1="it", lang2="pl")`
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
```
{
'id': '0',
'translation': {
'ar': 'إعداد سياسة القفل',
'bal': 'تنظیم کتن سیاست کبل'
}
}
```
### Data Fields
Each instance has two fields:
- **id**: the id of the example
- **translation**: a dictionary containing translated texts in two languages.
### Data Splits
Each subset simply consists of a train split. We provide the number of examples for certain language pairs:
| | train |
|:---------|--------:|
| ar-bal | 60 |
| bg-csb | 10 |
| ca-en_GB | 7982 |
| cs-eo | 73 |
| de-ha | 216 |
| cs-tk | 18686 |
| da-vi | 149 |
| en_GB-my | 28232 |
| el-sk | 150 |
| de-tt | 2169 |
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
[More Information Needed]
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
[More Information Needed]
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
@InProceedings{TIEDEMANN12.463,
author = {J{\"o}rg Tiedemann},
title = {Parallel Data, Tools and Interfaces in OPUS},
booktitle = {Proceedings of the Eight International Conference on Language Resources and Evaluation (LREC'12)},
year = {2012},
month = {may},
date = {23-25},
address = {Istanbul, Turkey},
editor = {Nicoletta Calzolari (Conference Chair) and Khalid Choukri and Thierry Declerck and Mehmet Ugur Dogan and Bente Maegaard and Joseph Mariani and Jan Odijk and Stelios Piperidis},
publisher = {European Language Resources Association (ELRA)},
isbn = {978-2-9517408-7-7},
language = {english}
}
### Contributions
Thanks to [@rkc007](https://github.com/rkc007) for adding this dataset. |
sepidmnorozy/Chinese_sentiment | 2022-08-15T23:09:45.000Z | [
"region:us"
] | sepidmnorozy | null | null | null | 3 | 75 | Entry not found |
SetFit/ade_corpus_v2_classification | 2022-09-05T14:14:53.000Z | [
"region:us"
] | SetFit | null | null | null | 0 | 75 | # ADE-Corpus-V2 Dataset: Adverse Drug Reaction Data.
This is a dataset for classifying whether a sentence is ADE-related (True) or not (False).
**Train size: 17,637**
**Test size: 5,879**
[Source dataset](https://huggingface.co/datasets/ade_corpus_v2)
[Paper](https://www.sciencedirect.com/science/article/pii/S1532046412000615)
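A minimal loading sketch (the split names follow the sizes quoted above; the column names are assumptions, so inspect the features first):
```python
from datasets import load_dataset

# Sketch: expected sizes are 17,637 (train) and 5,879 (test) per the card.
ade = load_dataset("SetFit/ade_corpus_v2_classification")

print(ade["train"].num_rows, ade["test"].num_rows)
print(ade["train"].features)  # check the actual column names
print(ade["train"][0])
```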
|
rogerdehe/xfund | 2022-10-12T12:42:35.000Z | [
"task_categories:text-classification",
"annotations_creators:found",
"language_creators:found",
"multilinguality:multilingual",
"language:de",
"language:es",
"language:fr",
"language:it",
"language:ja",
"license:other",
"layoutlmv3",
"xfund",
"funsd",
"region:us"
] | rogerdehe | https://github.com/doc-analysis/XFUND | @inproceedings{xu-etal-2022-xfund,
title = "{XFUND}: A Benchmark Dataset for Multilingual Visually Rich Form Understanding",
author = "Xu, Yiheng and
Lv, Tengchao and
Cui, Lei and
Wang, Guoxin and
Lu, Yijuan and
Florencio, Dinei and
Zhang, Cha and
Wei, Furu",
booktitle = "Findings of the Association for Computational Linguistics: ACL 2022",
month = may,
year = "2022",
address = "Dublin, Ireland",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2022.findings-acl.253",
doi = "10.18653/v1/2022.findings-acl.253",
pages = "3214--3224",
abstract = "Multimodal pre-training with text, layout, and image has achieved SOTA performance for visually rich document understanding tasks recently, which demonstrates the great potential for joint learning across different modalities. However, the existed research work has focused only on the English domain while neglecting the importance of multilingual generalization. In this paper, we introduce a human-annotated multilingual form understanding benchmark dataset named XFUND, which includes form understanding samples in 7 languages (Chinese, Japanese, Spanish, French, Italian, German, Portuguese). Meanwhile, we present LayoutXLM, a multimodal pre-trained model for multilingual document understanding, which aims to bridge the language barriers for visually rich document understanding. Experimental results show that the LayoutXLM model has significantly outperformed the existing SOTA cross-lingual pre-trained models on the XFUND dataset. The XFUND dataset and the pre-trained LayoutXLM model have been publicly available at https://aka.ms/layoutxlm.",
} | null | 0 | 75 | ---
annotations_creators:
- found
language_creators:
- found
task_categories:
- text-classification
tags:
- layoutlmv3
- xfund
- funsd
language:
- de
- es
- fr
- it
- ja
license:
- other
multilinguality:
- multilingual
---
XFUND dataset
See more details at [the XFUND repository](https://github.com/doc-analysis/XFUND).
### Citation Information
``` latex
@inproceedings{xu-etal-2022-xfund,
title = "{XFUND}: A Benchmark Dataset for Multilingual Visually Rich Form Understanding",
author = "Xu, Yiheng and
Lv, Tengchao and
Cui, Lei and
Wang, Guoxin and
Lu, Yijuan and
Florencio, Dinei and
Zhang, Cha and
Wei, Furu",
booktitle = "Findings of the Association for Computational Linguistics: ACL 2022",
month = may,
year = "2022",
address = "Dublin, Ireland",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2022.findings-acl.253",
doi = "10.18653/v1/2022.findings-acl.253",
pages = "3214--3224",
abstract = "Multimodal pre-training with text, layout, and image has achieved SOTA performance for visually rich document understanding tasks recently, which demonstrates the great potential for joint learning across different modalities. However, the existed research work has focused only on the English domain while neglecting the importance of multilingual generalization. In this paper, we introduce a human-annotated multilingual form understanding benchmark dataset named XFUND, which includes form understanding samples in 7 languages (Chinese, Japanese, Spanish, French, Italian, German, Portuguese). Meanwhile, we present LayoutXLM, a multimodal pre-trained model for multilingual document understanding, which aims to bridge the language barriers for visually rich document understanding. Experimental results show that the LayoutXLM model has significantly outperformed the existing SOTA cross-lingual pre-trained models on the XFUND dataset. The XFUND dataset and the pre-trained LayoutXLM model have been publicly available at https://aka.ms/layoutxlm.",
}
``` |
bigbio/ebm_pico | 2022-12-22T15:44:33.000Z | [
"multilinguality:monolingual",
"language:en",
"license:unknown",
"region:us"
] | bigbio | This corpus release contains 4,993 abstracts annotated with (P)articipants,
(I)nterventions, and (O)utcomes. Training labels are sourced from AMT workers and
aggregated to reduce noise. Test labels are collected from medical professionals. | @inproceedings{nye-etal-2018-corpus,
title = "A Corpus with Multi-Level Annotations of Patients, Interventions and Outcomes to Support Language Processing for Medical Literature",
author = "Nye, Benjamin and
Li, Junyi Jessy and
Patel, Roma and
Yang, Yinfei and
Marshall, Iain and
Nenkova, Ani and
Wallace, Byron",
booktitle = "Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)",
month = jul,
year = "2018",
address = "Melbourne, Australia",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/P18-1019",
doi = "10.18653/v1/P18-1019",
pages = "197--207",
} | null | 0 | 75 |
---
language:
- en
bigbio_language:
- English
license: unknown
multilinguality: monolingual
bigbio_license_shortname: UNKNOWN
pretty_name: EBM NLP
homepage: https://github.com/bepnye/EBM-NLP
bigbio_pubmed: True
bigbio_public: True
bigbio_tasks:
- NAMED_ENTITY_RECOGNITION
---
# Dataset Card for EBM NLP
## Dataset Description
- **Homepage:** https://github.com/bepnye/EBM-NLP
- **Pubmed:** True
- **Public:** True
- **Tasks:** NER
This corpus release contains 4,993 abstracts annotated with (P)articipants,
(I)nterventions, and (O)utcomes. Training labels are sourced from AMT workers and
aggregated to reduce noise. Test labels are collected from medical professionals.
## Citation Information
```
@inproceedings{nye-etal-2018-corpus,
title = "A Corpus with Multi-Level Annotations of Patients, Interventions and Outcomes to Support Language Processing for Medical Literature",
author = "Nye, Benjamin and
Li, Junyi Jessy and
Patel, Roma and
Yang, Yinfei and
Marshall, Iain and
Nenkova, Ani and
Wallace, Byron",
booktitle = "Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)",
month = jul,
year = "2018",
address = "Melbourne, Australia",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/P18-1019",
doi = "10.18653/v1/P18-1019",
pages = "197--207",
}
```
|
juancaruizc/lyrics | 2022-12-31T22:55:26.000Z | [
"region:us"
] | juancaruizc | null | null | null | 0 | 75 | Entry not found |
CVdatasets/ImageNet15_animals_unbalanced_aug1 | 2023-02-28T18:48:02.000Z | [
"region:us"
] | CVdatasets | null | null | null | 0 | 75 | ---
dataset_info:
features:
- name: labels
dtype:
class_label:
names:
'0': Italian_greyhound
'1': Coyote
'2': Beagle
'3': Rottweiler
'4': Hyena
'5': Greater_Swiss_Mountain_dog
'6': Triceratops
'7': French_bulldog
'8': Red_wolf
'9': Egyptian_cat
'10': Chihuahua
'11': Irish_terrier
'12': Tiger_cat
'13': White_wolf
'14': Timber_wolf
- name: img
dtype: image
- name: is_generated
dtype: bool
splits:
- name: validation
num_bytes: 60570648.125
num_examples: 1439
- name: train
num_bytes: 174270537.875
num_examples: 3705
download_size: 234762621
dataset_size: 234841186.0
---
# Dataset Card for "ImageNet15_animals_unbalanced_aug1"
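A minimal loading sketch using the feature names from the `dataset_info` above (`img`, `labels`, `is_generated`) and its `train`/`validation` splits:
```python
from datasets import load_dataset

# Sketch: decode one validation sample and map its label id to the class name.
animals = load_dataset(
    "CVdatasets/ImageNet15_animals_unbalanced_aug1", split="validation"
)

label_feature = animals.features["labels"]
sample = animals[0]
print(label_feature.int2str(sample["labels"]))  # e.g. "Beagle"
print(sample["img"].size)                       # decoded as a PIL image
```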
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
mstz/fertility | 2023-04-16T17:28:42.000Z | [
"task_categories:tabular-classification",
"size_categories:n<1K",
"language:en",
"license:cc",
"fertility",
"tabular_classification",
"binary_classification",
"multiclass_classification",
"UCI",
"region:us"
] | mstz | null | @misc{misc_fertility_244,
author = {Gil,David & Girela,Jose},
title = {{Fertility}},
year = {2013},
howpublished = {UCI Machine Learning Repository},
note = {{DOI}: \\url{10.24432/C5Z01Z}}
} | null | 0 | 75 | ---
language:
- en
tags:
- fertility
- tabular_classification
- binary_classification
- multiclass_classification
- UCI
pretty_name: Fertility
size_categories:
- n<1K
task_categories:
- tabular-classification
configs:
- encoding
- fertility
license: cc
---
# Fertility
The [Fertility dataset](https://archive.ics.uci.edu/ml/datasets/Fertility) from the [UCI ML repository](https://archive.ics.uci.edu/ml/datasets).
Classify fertility abnormalities of patients.
# Configurations and tasks
| **Configuration** | **Task** | **Description** |
|-------------------|---------------------------|------------------------------------------|
| encoding | | Encoding dictionary |
| fertility | Binary classification | Does the patient have fertility issues? |
# Usage
```python
from datasets import load_dataset
dataset = load_dataset("mstz/fertility", "fertility")["train"]
```
# Features
|**Feature** |**Type** |
|----------------------------------------|------------------|
| season_of_sampling | `[string]` |
| age_at_time_of_sampling | `[int8]` |
| has_had_childhood_diseases | `[bool]` |
| has_had_serious_trauma | `[bool]` |
| has_had_surgical_interventions | `[bool]` |
| has_had_high_fevers_in_the_past_year | `[string]` |
| frequency_of_alcohol_consumption | `[float16]` |
| smoking_frequency | `[string]` |
| number_of_sitting_hours_per_day | `[float16]` | |