| id | lastModified | tags | author | description | citation | cardData | likes | downloads | card |
|---|---|---|---|---|---|---|---|---|---|
patrickfrank1/chess-pgn-games | 2022-09-02T14:07:22.000Z | [
"license:cc0-1.0",
"region:us"
] | patrickfrank1 | null | null | null | 1 | 59 | ---
license: cc0-1.0
---
|
bigbio/mednli | 2022-12-22T15:24:43.000Z | [
"multilinguality:monolingual",
"language:en",
"license:other",
"region:us"
] | bigbio | State-of-the-art models using deep neural networks have become very good at learning an accurate
mapping from inputs to outputs. However, they still lack generalization capabilities in conditions
that differ from the ones encountered during training. This is even more challenging in specialized,
knowledge-intensive domains, where training data is limited. To address this gap, we introduce
MedNLI - a dataset annotated by doctors for a natural language inference (NLI) task,
grounded in the medical history of patients. As the source of premise sentences, we used
MIMIC-III. More specifically, to minimize risks to patient privacy, we worked with clinical
notes corresponding to deceased patients. The clinicians on our team suggested the Past Medical
History as the most informative section of a clinical note, from which useful inferences can be
drawn about the patient. | @misc{https://doi.org/10.13026/c2rs98,
title = {MedNLI — A Natural Language Inference Dataset For The Clinical Domain},
author = {Shivade, Chaitanya},
year = 2017,
publisher = {physionet.org},
doi = {10.13026/C2RS98},
url = {https://physionet.org/content/mednli/}
} | null | 4 | 59 | ---
language:
- en
bigbio_language:
- English
license: other
multilinguality: monolingual
bigbio_license_short_name: PHYSIONET_LICENSE_1p5
pretty_name: MedNLI
homepage: https://physionet.org/content/mednli/1.0.0/
bigbio_pubmed: false
bigbio_public: false
bigbio_tasks:
- TEXTUAL_ENTAILMENT
paperswithcode_id: mednli
---
# Dataset Card for MedNLI
## Dataset Description
- **Homepage:** https://physionet.org/content/mednli/1.0.0/
- **Pubmed:** False
- **Public:** False
- **Tasks:** TE
State-of-the-art models using deep neural networks have become very good at learning an accurate
mapping from inputs to outputs. However, they still lack generalization capabilities in conditions
that differ from the ones encountered during training. This is even more challenging in specialized,
knowledge-intensive domains, where training data is limited. To address this gap, we introduce
MedNLI - a dataset annotated by doctors for a natural language inference (NLI) task,
grounded in the medical history of patients. As the source of premise sentences, we used
MIMIC-III. More specifically, to minimize risks to patient privacy, we worked with clinical
notes corresponding to deceased patients. The clinicians on our team suggested the Past Medical
History as the most informative section of a clinical note, from which useful inferences can be
drawn about the patient.
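For intuition, the NLI task pairs a premise with a hypothesis and one of three labels. The sketch below is illustrative only, not actual MedNLI data (which is access-controlled under the PhysioNet license); the field names are assumptions based on the conventional NLI setup:

```python
# Illustrative NLI example structure (NOT real MedNLI data, which is
# access-controlled); field names follow the conventional NLI setup.
LABELS = ("entailment", "contradiction", "neutral")

example = {
    "premise": "The patient has a history of hypertension.",
    "hypothesis": "The patient has high blood pressure.",
    "label": "entailment",
}

def is_valid(ex):
    """Check that an example carries the three expected fields and a known label."""
    return {"premise", "hypothesis", "label"} <= ex.keys() and ex["label"] in LABELS

print(is_valid(example))  # True
```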
## Citation Information
```
@misc{https://doi.org/10.13026/c2rs98,
title = {MedNLI — A Natural Language Inference Dataset For The Clinical Domain},
author = {Shivade, Chaitanya},
year = 2017,
publisher = {physionet.org},
doi = {10.13026/C2RS98},
url = {https://physionet.org/content/mednli/}
}
```
|
lewtun/music_genres_small | 2022-11-03T13:36:49.000Z | [
"region:us"
] | lewtun | null | null | null | 2 | 59 | ---
dataset_info:
features:
- name: audio
dtype: audio
- name: song_id
dtype: int64
- name: genre_id
dtype: int64
- name: genre
dtype: string
splits:
- name: train
num_bytes: 392427659.9527852
num_examples: 1000
download_size: 390675126
dataset_size: 392427659.9527852
---
# Dataset Card for "music_genres_small"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
andstor/the_pile_github | 2023-03-20T23:39:53.000Z | [
"task_categories:text-generation",
"task_categories:fill-mask",
"task_categories:text-classification",
"annotations_creators:no-annotation",
"language_creators:found",
"multilinguality:monolingual",
"source_datasets:original",
"language:en",
"license:other",
"arxiv:2101.00027",
"arxiv:2201.07311... | andstor | The Pile is an 825 GiB diverse, open-source language-modelling dataset that consists of 22 smaller, high-quality
datasets combined together. | @misc{gao2020pile,
title={The Pile: An 800GB Dataset of Diverse Text for Language Modeling},
author={Leo Gao and Stella Biderman and Sid Black and Laurence Golding and Travis Hoppe and Charles Foster and Jason Phang and Horace He and Anish Thite and Noa Nabeshima and Shawn Presser and Connor Leahy},
year={2020},
eprint={2101.00027},
archivePrefix={arXiv},
primaryClass={cs.CL}
} | null | 3 | 59 | ---
annotations_creators:
- no-annotation
language:
- en
language_creators:
- found
license:
- other
multilinguality:
- monolingual
pretty_name: The Pile GitHub
size_categories: []
source_datasets:
- original
tags: []
task_categories:
- text-generation
- fill-mask
- text-classification
task_ids: []
---
# Dataset Card for The Pile GitHub
## Table of Contents
- [Dataset Card for The Pile GitHub](#dataset-card-for-the-pile-github)
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Additional Information](#additional-information)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [EleutherAI](https://pile.eleuther.ai)
- **Repository:** [GitHub](https://github.com/andstor/the-pile-github)
- **Paper:** [arXiv](https://arxiv.org/abs/2101.00027)
- **Leaderboard:** [Needs More Information]
- **Point of Contact:** [Needs More Information]
### Dataset Summary
This is the GitHub subset of the EleutherAI/The Pile dataset and contains GitHub repositories. The programming languages are identified using the [guesslang library](https://github.com/yoeo/guesslang). A total of 54 programming languages are included in the dataset.
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
The following languages are covered by the dataset:
```
'Assembly', 'Batchfile', 'C', 'C#', 'C++', 'CMake', 'COBOL', 'CSS', 'CSV', 'Clojure', 'CoffeeScript', 'DM', 'Dart', 'Dockerfile', 'Elixir', 'Erlang', 'Fortran', 'Go', 'Groovy', 'HTML', 'Haskell', 'INI', 'JSON', 'Java', 'JavaScript', 'Julia', 'Kotlin', 'Lisp', 'Lua', 'Makefile', 'Markdown', 'Matlab', 'None', 'OCaml', 'Objective-C', 'PHP', 'Pascal', 'Perl', 'PowerShell', 'Prolog', 'Python', 'R', 'Ruby', 'Rust', 'SQL', 'Scala', 'Shell', 'Swift', 'TOML', 'TeX', 'TypeScript', 'Verilog', 'Visual Basic', 'XML', 'YAML'
```
The [guesslang library](https://github.com/yoeo/guesslang) is used to identify the programming languages. Its reported guessing accuracy is above 90%, so some files in the dataset will be misclassified.
## Dataset Structure
### Data Instances
[More Information Needed]
```
{
'text': ...,
'meta': {'language': ...}
}
```
### Data Fields
- `text` (`string`): the source code.
- `meta` (`dict`): the metadata of the source code.
- `language` (`string`): the programming language of the source code.
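Records with this shape can be filtered by their metadata language tag. The sketch below uses hypothetical in-memory records rather than the actual dataset loader:

```python
# Hypothetical records mirroring the documented schema: each instance has
# a `text` field and a `meta` dict with a `language` key.
records = [
    {"text": "print('hello')", "meta": {"language": "Python"}},
    {"text": "SELECT 1;", "meta": {"language": "SQL"}},
    {"text": "fn main() {}", "meta": {"language": "Rust"}},
]

def filter_by_language(instances, language):
    """Keep only instances whose metadata matches the given language."""
    return [r for r in instances if r["meta"]["language"] == language]

print(len(filter_by_language(records, "Python")))  # 1
```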
### Data Splits
[More Information Needed]
| | train | validation | test |
|-------------------------|------:|-----------:|-----:|
| Input Sentences | | | |
| Average Sentence Length | | | |
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
The data is a subset of the [EleutherAI/The Pile dataset](https://huggingface.co/datasets/the_pile). See the original [datasheet](https://arxiv.org/abs/2201.07311) for more details.
## Additional Information
### Licensing Information
The Pile dataset was released on January 1st, 2021. It is licensed under the MIT License. See the [datasheet](https://arxiv.org/abs/2201.07311) for more details.
### Citation Information
The [BibTeX](http://www.bibtex.org/)-formatted reference for the dataset:
```
@article{pile,
title={The {P}ile: An 800GB Dataset of Diverse Text for Language Modeling},
author={Gao, Leo and Biderman, Stella and Black, Sid and Golding, Laurence and Hoppe, Travis and Foster, Charles and Phang, Jason and He, Horace and Thite, Anish and Nabeshima, Noa and Presser, Shawn and Leahy, Connor},
journal={arXiv preprint arXiv:2101.00027},
year={2020}
}
```
### Contributions
Thanks to [@andstor](https://github.com/andstor) for adding this dataset. |
tasksource/puzzte | 2023-05-31T08:43:41.000Z | [
"task_categories:text-classification",
"task_ids:natural-language-inference",
"task_ids:multi-input-text-classification",
"language:en",
"license:apache-2.0",
"region:us"
] | tasksource | null | null | null | 1 | 59 | ---
license: apache-2.0
task_ids:
- natural-language-inference
- multi-input-text-classification
task_categories:
- text-classification
language:
- en
---
https://bitbucket.org/RoxanaSz/puzzte/src/master/
```bib
@article{szomiu2021puzzle,
title={A Puzzle-Based Dataset for Natural Language Inference},
author={Szomiu, Roxana and Groza, Adrian},
journal={arXiv preprint arXiv:2112.05742},
year={2021}
}
``` |
IndianaUniversityDatasetsModels/MIMIC-medical-report | 2023-04-06T02:47:09.000Z | [
"region:us"
] | IndianaUniversityDatasetsModels | null | null | null | 2 | 59 | ---
dataset_info:
features:
- name: FileName
dtype: string
- name: INDICATION
dtype: string
- name: IMPRESSION
dtype: string
- name: FINDINGS
dtype: string
splits:
- name: train
num_bytes: 45203432.183416
num_examples: 83971
- name: test
num_bytes: 461341.9082919998
num_examples: 857
- name: validation
num_bytes: 461341.9082919998
num_examples: 857
download_size: 20175619
dataset_size: 46126116.00000001
---
# Dataset Card for "MIMIC-medical-report"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
mstz/segment | 2023-04-14T10:25:43.000Z | [
"license:cc-by-4.0",
"region:us"
] | mstz | null | @misc{misc_statlog_(image_segmentation)_147,
title = {{Statlog (Image Segmentation)}},
year = {1990},
howpublished = {UCI Machine Learning Repository},
note = {{DOI}: \\url{10.24432/C5P01G}}
} | null | 0 | 59 | ---
license: cc-by-4.0
---
|
BelleGroup/train_3.5M_CN | 2023-08-16T06:13:59.000Z | [
"task_categories:text2text-generation",
"size_categories:1M<n<10M",
"language:zh",
"license:gpl-3.0",
"region:us"
] | BelleGroup | null | null | null | 70 | 59 | ---
license: gpl-3.0
task_categories:
- text2text-generation
language:
- zh
size_categories:
- 1M<n<10M
---
## Contents
Contains about 3.5 million Chinese instruction-data entries generated by the [BELLE](https://github.com/LianjiaTech/BELLE) project.
## Samples
```
{
"id":"66623028",
"conversations":[
{
"from":"human",
"value":"归纳介绍三种不同类型的电影题材"
},
{
"from":"assistant",
"value":"1. 动作片:动作片通常强调动作和战斗场面,故事情节紧凑,充满了紧张和兴奋的氛围。主角一般是具有超凡实力或特殊技能的英雄人物,往往会与反派角色进行一系列激烈的格斗,追逐和爆破等酷炫场面。例如《速度与激情》和《复仇者联盟》等。\n2. 爱情片:爱情片是以爱情为主题的电影,着重呈现角色之间的感情纠葛,展现了爱情的美好和悲壮。情节与角色之间的发展令人深思,增加了观众的感性体验。这种电影的故事情节主要以人类与人类之间的情感变化为主题。例如《泰坦尼克号》和《让子弹飞》等。\n3. 科幻片:科幻片是以未来科技和未知世界为主题的电影,通常包括时间旅行、外星人、虚拟现实、未来社会等各种奇幻的元素。这种电影描绘了一种比现实更加宏伟、更加神秘和惊奇的世界。例如《星际穿越》和《发条橙》等。"
}
]
}
{
"id":"18205437",
"conversations":[
{
"from":"human",
"value":"根据给定的食材,编写一道美食菜谱。\n鸡腿肉、蘑菇、洋葱"
},
{
"from":"assistant",
"value":"鸡肉炒蘑菇洋葱\n食材:\n- 鸡腿肉 400克,去骨去皮切成小块\n- 蘑菇 200克,切片\n- 洋葱 1个,切丝\n- 生姜 适量,切丝\n- 酱油 2勺\n- 料酒 1勺\n- 盐 适量\n- 白胡椒粉 适量\n- 植物油 适量\n做法:\n1. 在一个碗里放入切好的鸡腿肉块,加入酱油、料酒和适量的白胡椒粉,搅拌均匀,腌制10分钟。\n2. 热锅凉油,加入姜丝翻炒,放入腌好的鸡肉块翻炒至变色。\n3. 加入洋葱丝和蘑菇,翻炒2-3分钟至蘑菇变软。\n4. 加入适量的盐和少量的水,盖上锅盖焖煮5分钟直至鸡肉熟透。\n5. 最后翻炒几下,使汤汁变浓稠即可。\n温馨提示:\n1. 在腌制鸡肉时,如果想让肉质更嫩滑,可以加入少量的淀粉和蛋清。\n2. 可以根据个人口味加入其他调料,如蒜末、辣椒等。\n3. 洋葱和蘑菇可以用其他蔬菜代替,如青椒、胡萝卜等。"
},
{
"from":"human",
"value":"太好了,我会试着照着你教的做一次,但是我最近买了一部新相机,想让你推荐一些好的拍照地点。"
},
{
"from":"assistant",
"value":"当地的风景区、公园、景点等等都是比较适合拍照的地点。如果你想要一些特别的地方,可以考虑一些建筑群,街道巷子、咖啡馆、书店等具有文艺气息的地方。"
},
{
"from":"human",
"value":"嗯,这些地方都很不错,但是我还想找一些可以体验当地文化和风俗的地点,你有什么好的建议吗?"
},
{
"from":"assistant",
"value":"如果你想要更深入地了解当地文化和风俗,可以考虑参加一些游学活动,比如参观当地的博物馆、参加民俗节庆、尝试当地美食等等。此外,也可以去一些小城镇,感受那里的古朴风情。"
}
]
}
```
### Fields:
```
id: data ID
conversations: data content, given in dialogue form, covering both multi-turn and single-turn dialogues
```
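A record in this format can be walked with a small helper (a hypothetical sketch, assuming turns strictly alternate between human and assistant):

```python
import json

# A record in the documented shape: an id plus alternating human/assistant
# turns (values shortened here for illustration).
record = json.loads("""
{
  "id": "66623028",
  "conversations": [
    {"from": "human", "value": "question"},
    {"from": "assistant", "value": "answer"}
  ]
}
""")

def to_pairs(conversations):
    """Pair each human turn with the assistant turn that follows it."""
    pairs = []
    for human, assistant in zip(conversations[::2], conversations[1::2]):
        assert human["from"] == "human" and assistant["from"] == "assistant"
        pairs.append((human["value"], assistant["value"]))
    return pairs

print(to_pairs(record["conversations"]))  # [('question', 'answer')]
```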
### Supplement:
An instruction-category field has been added for this dataset, covering 13 categories in total; for details, see
[train_3.5M_CN_With_Category](https://huggingface.co/datasets/BELLE-2/train_3.5M_CN_With_Category)
## Usage Restrictions
This dataset and any derivatives generated from it may be used for research purposes only; commercial use, and any other use that could bring harm to society, is prohibited.
This dataset does not represent the position, interests, or views of any party, and is unrelated to claims of any kind by any group. This project assumes no liability for any damage or dispute arising from the use of this dataset. |
Patt/RTE_TH_drop | 2023-06-22T09:21:18.000Z | [
"task_categories:text-classification",
"language:en",
"language:th",
"arxiv:1907.04307",
"region:us"
] | Patt | null | null | null | 0 | 59 | ---
task_categories:
- text-classification
language:
- en
- th
---
# Dataset Card for RTE_TH_drop
### Dataset Description
This dataset is a Thai-translated version of [RTE](https://huggingface.co/datasets/super_glue/viewer/rte), produced with Google Translate, with the [Multilingual Universal Sentence Encoder](https://arxiv.org/abs/1907.04307) used to score each Thai translation.
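The score-based filtering described in this card (rows whose hypothesis score falls at or below 0.5, or whose premise score falls at or below 0.7, are removed) can be sketched in plain Python. The rows below are placeholders, not real data:

```python
# Illustrative rows carrying the translation-quality scores described in
# this card; real rows would also carry the sentence texts.
rows = [
    {"score_premise": 0.91, "score_hypothesis": 0.82},  # kept
    {"score_premise": 0.65, "score_hypothesis": 0.80},  # dropped: premise score too low
    {"score_premise": 0.88, "score_hypothesis": 0.40},  # dropped: hypothesis score too low
]

def keep_row(row):
    """Keep a row only if it clears both documented thresholds:
    score_hypothesis must exceed 0.5 and score_premise must exceed 0.7."""
    return row["score_hypothesis"] > 0.5 and row["score_premise"] > 0.7

kept = [r for r in rows if keep_row(r)]
print(len(kept))  # 1
```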
Lines with `score_hypothesis` <= 0.5 or `score_premise` <= 0.7 were dropped. |
FreedomIntelligence/evol-instruct-korean | 2023-08-06T08:13:54.000Z | [
"region:us"
] | FreedomIntelligence | null | null | null | 1 | 59 | The dataset is used in the research related to [MultilingualSIFT](https://github.com/FreedomIntelligence/MultilingualSIFT). |
danjacobellis/vimeo90k_triplet | 2023-07-11T20:04:34.000Z | [
"region:us"
] | danjacobellis | null | null | null | 0 | 59 | ---
dataset_info:
features:
- name: image
dtype: image
- name: label
dtype:
class_label:
names:
'0': '0001'
'1': '0002'
'2': '0003'
'3': '0004'
'4': '0005'
'5': '0006'
'6': '0007'
'7': 0008
'8': 0009
'9': '0010'
'10': '0011'
'11': '0012'
'12': '0013'
'13': '0014'
'14': '0015'
'15': '0016'
'16': '0017'
'17': 0018
'18': 0019
'19': '0020'
'20': '0021'
'21': '0022'
'22': '0023'
'23': '0024'
'24': '0025'
'25': '0026'
'26': '0027'
'27': 0028
'28': 0029
'29': '0030'
'30': '0031'
'31': '0032'
'32': '0033'
'33': '0034'
'34': '0035'
'35': '0036'
'36': '0037'
'37': 0038
'38': 0039
'39': '0040'
'40': '0041'
'41': '0042'
'42': '0043'
'43': '0044'
'44': '0045'
'45': '0046'
'46': '0047'
'47': 0048
'48': 0049
'49': '0050'
'50': '0051'
'51': '0052'
'52': '0053'
'53': '0054'
'54': '0055'
'55': '0056'
'56': '0057'
'57': 0058
'58': 0059
'59': '0060'
'60': '0061'
'61': '0062'
'62': '0063'
'63': '0064'
'64': '0065'
'65': '0066'
'66': '0067'
'67': 0068
'68': 0069
'69': '0070'
'70': '0071'
'71': '0072'
'72': '0073'
'73': '0074'
'74': '0075'
'75': '0076'
'76': '0077'
'77': 0078
'78': 0079
'79': 0080
'80': 0081
'81': 0082
'82': 0083
'83': 0084
'84': 0085
'85': 0086
'86': 0087
'87': 0088
'88': 0089
'89': 0090
'90': 0091
'91': 0092
'92': 0093
'93': 0094
'94': 0095
'95': 0096
'96': 0097
'97': 0098
'98': 0099
'99': '0100'
'100': '0101'
'101': '0102'
'102': '0103'
'103': '0104'
'104': '0105'
'105': '0106'
'106': '0107'
'107': 0108
'108': 0109
'109': '0110'
'110': '0111'
'111': '0112'
'112': '0113'
'113': '0114'
'114': '0115'
'115': '0116'
'116': '0117'
'117': 0118
'118': 0119
'119': '0120'
'120': '0121'
'121': '0122'
'122': '0123'
'123': '0124'
'124': '0125'
'125': '0126'
'126': '0127'
'127': 0128
'128': 0129
'129': '0130'
'130': '0131'
'131': '0132'
'132': '0133'
'133': '0134'
'134': '0135'
'135': '0136'
'136': '0137'
'137': 0138
'138': 0139
'139': '0140'
'140': '0141'
'141': '0142'
'142': '0143'
'143': '0144'
'144': '0145'
'145': '0146'
'146': '0147'
'147': 0148
'148': 0149
'149': '0150'
'150': '0151'
'151': '0152'
'152': '0153'
'153': '0154'
'154': '0155'
'155': '0156'
'156': '0157'
'157': 0158
'158': 0159
'159': '0160'
'160': '0161'
'161': '0162'
'162': '0163'
'163': '0164'
'164': '0165'
'165': '0166'
'166': '0167'
'167': 0168
'168': 0169
'169': '0170'
'170': '0171'
'171': '0172'
'172': '0173'
'173': '0174'
'174': '0175'
'175': '0176'
'176': '0177'
'177': 0178
'178': 0179
'179': 0180
'180': 0181
'181': 0182
'182': 0183
'183': 0184
'184': 0185
'185': 0186
'186': 0187
'187': 0188
'188': 0189
'189': 0190
'190': 0191
'191': 0192
'192': 0193
'193': 0194
'194': 0195
'195': 0196
'196': 0197
'197': 0198
'198': 0199
'199': '0200'
'200': '0201'
'201': '0202'
'202': '0203'
'203': '0204'
'204': '0205'
'205': '0206'
'206': '0207'
'207': 0208
'208': 0209
'209': '0210'
'210': '0211'
'211': '0212'
'212': '0213'
'213': '0214'
'214': '0215'
'215': '0216'
'216': '0217'
'217': 0218
'218': 0219
'219': '0220'
'220': '0221'
'221': '0222'
'222': '0223'
'223': '0224'
'224': '0225'
'225': '0226'
'226': '0227'
'227': 0228
'228': 0229
'229': '0230'
'230': '0231'
'231': '0232'
'232': '0233'
'233': '0234'
'234': '0235'
'235': '0236'
'236': '0237'
'237': 0238
'238': 0239
'239': '0240'
'240': '0241'
'241': '0242'
'242': '0243'
'243': '0244'
'244': '0245'
'245': '0246'
'246': '0247'
'247': 0248
'248': 0249
'249': '0250'
'250': '0251'
'251': '0252'
'252': '0253'
'253': '0254'
'254': '0255'
'255': '0256'
'256': '0257'
'257': 0258
'258': 0259
'259': '0260'
'260': '0261'
'261': '0262'
'262': '0263'
'263': '0264'
'264': '0265'
'265': '0266'
'266': '0267'
'267': 0268
'268': 0269
'269': '0270'
'270': '0271'
'271': '0272'
'272': '0273'
'273': '0274'
'274': '0275'
'275': '0276'
'276': '0277'
'277': 0278
'278': 0279
'279': 0280
'280': 0281
'281': 0282
'282': 0283
'283': 0284
'284': 0285
'285': 0286
'286': 0287
'287': 0288
'288': 0289
'289': 0290
'290': 0291
'291': 0292
'292': 0293
'293': 0294
'294': 0295
'295': 0296
'296': 0297
'297': 0298
'298': 0299
'299': '0300'
'300': '0301'
'301': '0302'
'302': '0303'
'303': '0304'
'304': '0305'
'305': '0306'
'306': '0307'
'307': 0308
'308': 0309
'309': '0310'
'310': '0311'
'311': '0312'
'312': '0313'
'313': '0314'
'314': '0315'
'315': '0316'
'316': '0317'
'317': 0318
'318': 0319
'319': '0320'
'320': '0321'
'321': '0322'
'322': '0323'
'323': '0324'
'324': '0325'
'325': '0326'
'326': '0327'
'327': 0328
'328': 0329
'329': '0330'
'330': '0331'
'331': '0332'
'332': '0333'
'333': '0334'
'334': '0335'
'335': '0336'
'336': '0337'
'337': 0338
'338': 0339
'339': '0340'
'340': '0341'
'341': '0342'
'342': '0343'
'343': '0344'
'344': '0345'
'345': '0346'
'346': '0347'
'347': 0348
'348': 0349
'349': '0350'
'350': '0351'
'351': '0352'
'352': '0353'
'353': '0354'
'354': '0355'
'355': '0356'
'356': '0357'
'357': 0358
'358': 0359
'359': '0360'
'360': '0361'
'361': '0362'
'362': '0363'
'363': '0364'
'364': '0365'
'365': '0366'
'366': '0367'
'367': 0368
'368': 0369
'369': '0370'
'370': '0371'
'371': '0372'
'372': '0373'
'373': '0374'
'374': '0375'
'375': '0376'
'376': '0377'
'377': 0378
'378': 0379
'379': 0380
'380': 0381
'381': 0382
'382': 0383
'383': 0384
'384': 0385
'385': 0386
'386': 0387
'387': 0388
'388': 0389
'389': 0390
'390': 0391
'391': 0392
'392': 0393
'393': 0394
'394': 0395
'395': 0396
'396': 0397
'397': 0398
'398': 0399
'399': '0400'
'400': '0401'
'401': '0402'
'402': '0403'
'403': '0404'
'404': '0405'
'405': '0406'
'406': '0407'
'407': 0408
'408': 0409
'409': '0410'
'410': '0411'
'411': '0412'
'412': '0413'
'413': '0414'
'414': '0415'
'415': '0416'
'416': '0417'
'417': 0418
'418': 0419
'419': '0420'
'420': '0421'
'421': '0422'
'422': '0423'
'423': '0424'
'424': '0425'
'425': '0426'
'426': '0427'
'427': 0428
'428': 0429
'429': '0430'
'430': '0431'
'431': '0432'
'432': '0433'
'433': '0434'
'434': '0435'
'435': '0436'
'436': '0437'
'437': 0438
'438': 0439
'439': '0440'
'440': '0441'
'441': '0442'
'442': '0443'
'443': '0444'
'444': '0445'
'445': '0446'
'446': '0447'
'447': 0448
'448': 0449
'449': '0450'
'450': '0451'
'451': '0452'
'452': '0453'
'453': '0454'
'454': '0455'
'455': '0456'
'456': '0457'
'457': 0458
'458': 0459
'459': '0460'
'460': '0461'
'461': '0462'
'462': '0463'
'463': '0464'
'464': '0465'
'465': '0466'
'466': '0467'
'467': 0468
'468': 0469
'469': '0470'
'470': '0471'
'471': '0472'
'472': '0473'
'473': '0474'
'474': '0475'
'475': '0476'
'476': '0477'
'477': 0478
'478': 0479
'479': 0480
'480': 0481
'481': 0482
'482': 0483
'483': 0484
'484': 0485
'485': 0486
'486': 0487
'487': 0488
'488': 0489
'489': 0490
'490': 0491
'491': 0492
'492': 0493
'493': 0494
'494': 0495
'495': 0496
'496': 0497
'497': 0498
'498': 0499
'499': '0500'
'500': '0501'
'501': '0502'
'502': '0503'
'503': '0504'
'504': '0505'
'505': '0506'
'506': '0507'
'507': 0508
'508': 0509
'509': '0510'
'510': '0511'
'511': '0512'
'512': '0513'
'513': '0514'
'514': '0515'
'515': '0516'
'516': '0517'
'517': 0518
'518': 0519
'519': '0520'
'520': '0521'
'521': '0522'
'522': '0523'
'523': '0524'
'524': '0525'
'525': '0526'
'526': '0527'
'527': 0528
'528': 0529
'529': '0530'
'530': '0531'
'531': '0532'
'532': '0533'
'533': '0534'
'534': '0535'
'535': '0536'
'536': '0537'
'537': 0538
'538': 0539
'539': '0540'
'540': '0541'
'541': '0542'
'542': '0543'
'543': '0544'
'544': '0545'
'545': '0546'
'546': '0547'
'547': 0548
'548': 0549
'549': '0550'
'550': '0551'
'551': '0552'
'552': '0553'
'553': '0554'
'554': '0555'
'555': '0556'
'556': '0557'
'557': 0558
'558': 0559
'559': '0560'
'560': '0561'
'561': '0562'
'562': '0563'
'563': '0564'
'564': '0565'
'565': '0566'
'566': '0567'
'567': 0568
'568': 0569
'569': '0570'
'570': '0571'
'571': '0572'
'572': '0573'
'573': '0574'
'574': '0575'
'575': '0576'
'576': '0577'
'577': 0578
'578': 0579
'579': 0580
'580': 0581
'581': 0582
'582': 0583
'583': 0584
'584': 0585
'585': 0586
'586': 0587
'587': 0588
'588': 0589
'589': 0590
'590': 0591
'591': 0592
'592': 0593
'593': 0594
'594': 0595
'595': 0596
'596': 0597
'597': 0598
'598': 0599
'599': '0600'
'600': '0601'
'601': '0602'
'602': '0603'
'603': '0604'
'604': '0605'
'605': '0606'
'606': '0607'
'607': 0608
'608': 0609
'609': '0610'
'610': '0611'
'611': '0612'
'612': '0613'
'613': '0614'
'614': '0615'
'615': '0616'
'616': '0617'
'617': 0618
'618': 0619
'619': '0620'
'620': '0621'
'621': '0622'
'622': '0623'
'623': '0624'
'624': '0625'
'625': '0626'
'626': '0627'
'627': 0628
'628': 0629
'629': '0630'
'630': '0631'
'631': '0632'
'632': '0633'
'633': '0634'
'634': '0635'
'635': '0636'
'636': '0637'
'637': 0638
'638': 0639
'639': '0640'
'640': '0641'
'641': '0642'
'642': '0643'
'643': '0644'
'644': '0645'
'645': '0646'
'646': '0647'
'647': 0648
'648': 0649
'649': '0650'
'650': '0651'
'651': '0652'
'652': '0653'
'653': '0654'
'654': '0655'
'655': '0656'
'656': '0657'
'657': 0658
'658': 0659
'659': '0660'
'660': '0661'
'661': '0662'
'662': '0663'
'663': '0664'
'664': '0665'
'665': '0666'
'666': '0667'
'667': 0668
'668': 0669
'669': '0670'
'670': '0671'
'671': '0672'
'672': '0673'
'673': '0674'
'674': '0675'
'675': '0676'
'676': '0677'
'677': 0678
'678': 0679
'679': 0680
'680': 0681
'681': 0682
'682': 0683
'683': 0684
'684': 0685
'685': 0686
'686': 0687
'687': 0688
'688': 0689
'689': 0690
'690': 0691
'691': 0692
'692': 0693
'693': 0694
'694': 0695
'695': 0696
'696': 0697
'697': 0698
'698': 0699
'699': '0700'
'700': '0701'
'701': '0702'
'702': '0703'
'703': '0704'
'704': '0705'
'705': '0706'
'706': '0707'
'707': 0708
'708': 0709
'709': '0710'
'710': '0711'
'711': '0712'
'712': '0713'
'713': '0714'
'714': '0715'
'715': '0716'
'716': '0717'
'717': 0718
'718': 0719
'719': '0720'
'720': '0721'
'721': '0722'
'722': '0723'
'723': '0724'
'724': '0725'
'725': '0726'
'726': '0727'
'727': 0728
'728': 0729
'729': '0730'
'730': '0731'
'731': '0732'
'732': '0733'
'733': '0734'
'734': '0735'
'735': '0736'
'736': '0737'
'737': 0738
'738': 0739
'739': '0740'
'740': '0741'
'741': '0742'
'742': '0743'
'743': '0744'
'744': '0745'
'745': '0746'
'746': '0747'
'747': 0748
'748': 0749
'749': '0750'
'750': '0751'
'751': '0752'
'752': '0753'
'753': '0754'
'754': '0755'
'755': '0756'
'756': '0757'
'757': 0758
'758': 0759
'759': '0760'
'760': '0761'
'761': '0762'
'762': '0763'
'763': '0764'
'764': '0765'
'765': '0766'
'766': '0767'
'767': 0768
'768': 0769
'769': '0770'
'770': '0771'
'771': '0772'
'772': '0773'
'773': '0774'
'774': '0775'
'775': '0776'
'776': '0777'
'777': 0778
'778': 0779
'779': 0780
'780': 0781
'781': 0782
'782': 0783
'783': 0784
'784': 0785
'785': 0786
'786': 0787
'787': 0788
'788': 0789
'789': 0790
'790': 0791
'791': 0792
'792': 0793
'793': 0794
'794': 0795
'795': 0796
'796': 0797
'797': 0798
'798': 0799
'799': 0800
'800': 0801
'801': 0802
'802': 0803
'803': 0804
'804': 0805
'805': 0806
'806': 0807
'807': 0808
'808': 0809
'809': 0810
'810': 0811
'811': 0812
'812': 0813
'813': 0814
'814': 0815
'815': 0816
'816': 0817
'817': 0818
'818': 0819
'819': 0820
'820': 0821
'821': 0822
'822': 0823
'823': 0824
'824': 0825
'825': 0826
'826': 0827
'827': 0828
'828': 0829
'829': 0830
'830': 0831
'831': 0832
'832': 0833
'833': 0834
'834': 0835
'835': 0836
'836': 0837
'837': 0838
'838': 0839
'839': 0840
'840': 0841
'841': 0842
'842': 0843
'843': 0844
'844': 0845
'845': 0846
'846': 0847
'847': 0848
'848': 0849
'849': 0850
'850': 0851
'851': 0852
'852': 0853
'853': 0854
'854': 0855
'855': 0856
'856': 0857
'857': 0858
'858': 0859
'859': 0860
'860': 0861
'861': 0862
'862': 0863
'863': 0864
'864': 0865
'865': 0866
'866': 0867
'867': 0868
'868': 0869
'869': 0870
'870': 0871
'871': 0872
'872': 0873
'873': 0874
'874': 0875
'875': 0876
'876': 0877
'877': 0878
'878': 0879
'879': 0880
'880': 0881
'881': 0882
'882': 0883
'883': 0884
'884': 0885
'885': 0886
'886': 0887
'887': 0888
'888': 0889
'889': 0890
'890': 0891
'891': 0892
'892': 0893
'893': 0894
'894': 0895
'895': 0896
'896': 0897
'897': 0898
'898': 0899
'899': 0900
'900': 0901
'901': 0902
'902': 0903
'903': 0904
'904': 0905
'905': 0906
'906': 0907
'907': 0908
'908': 0909
'909': 0910
'910': 0911
'911': 0912
'912': 0913
'913': 0914
'914': 0915
'915': 0916
'916': 0917
'917': 0918
'918': 0919
'919': 0920
'920': 0921
'921': 0922
'922': 0923
'923': 0924
'924': 0925
'925': 0926
'926': 0927
'927': 0928
'928': 0929
'929': 0930
'930': 0931
'931': 0932
'932': 0933
'933': 0934
'934': 0935
'935': 0936
'936': 0937
'937': 0938
'938': 0939
'939': 0940
'940': 0941
'941': 0942
'942': 0943
'943': 0944
'944': 0945
'945': 0946
'946': 0947
'947': 0948
'948': 0949
'949': 0950
'950': 0951
'951': 0952
'952': 0953
'953': 0954
'954': 0955
'955': 0956
'956': 0957
'957': 0958
'958': 0959
'959': 0960
'960': 0961
'961': 0962
'962': 0963
'963': 0964
'964': 0965
'965': 0966
'966': 0967
'967': 0968
'968': 0969
'969': 0970
'970': 0971
'971': 0972
'972': 0973
'973': 0974
'974': 0975
'975': 0976
'976': 0977
'977': 0978
'978': 0979
'979': 0980
'980': 0981
'981': 0982
'982': 0983
'983': 0984
'984': 0985
'985': 0986
'986': 0987
'987': 0988
'988': 0989
'989': 0990
'990': 0991
'991': 0992
'992': 0993
'993': 0994
'994': 0995
'995': 0996
'996': 0997
'997': 0998
'998': 0999
'999': '1000'
splits:
- name: train
num_bytes: 35293401443.346
num_examples: 219573
download_size: 19206319187
dataset_size: 35293401443.346
---
# Dataset Card for "vimeo90k_triplet"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
AdiOO7/Llama-2 | 2023-07-24T20:40:11.000Z | [
"license:apache-2.0",
"region:us"
] | AdiOO7 | null | null | null | 0 | 59 | ---
license: apache-2.0
---
|
dim/leetcodesolutions_en_2k | 2023-08-15T12:34:04.000Z | [
"license:mit",
"region:us"
] | dim | null | null | null | 0 | 59 | ---
license: mit
dataset_info:
features:
- name: instruction
dtype: string
- name: input
dtype: string
- name: output
dtype: string
splits:
- name: train
num_bytes: 4847444
num_examples: 2048
download_size: 937266
dataset_size: 4847444
---
|
baoanhtran/guanaco-llama2-200 | 2023-09-24T12:48:00.000Z | [
"task_categories:text-generation",
"task_categories:fill-mask",
"task_ids:language-modeling",
"task_ids:masked-language-modeling",
"annotations_creators:no-annotation",
"language_creators:found",
"multilinguality:multilingual",
"size_categories:n<1K",
"size_categories:1K<n<10K",
"size_categories:1... | baoanhtran | CulturaX: A Cleaned, Enormous, and Multilingual Dataset for Large Language Models in 167 Languages | @misc{nguyen2023culturax,
title={CulturaX: A Cleaned, Enormous, and Multilingual Dataset for Large Language Models in 167 Languages},
author={Thuat Nguyen and Chien Van Nguyen and Viet Dac Lai and Hieu Man and Nghia Trung Ngo and Franck Dernoncourt and Ryan A. Rossi and Thien Huu Nguyen},
year={2023},
eprint={2309.09400},
archivePrefix={arXiv},
primaryClass={cs.CL}
} | null | 1 | 59 | ---
pretty_name: CulturaX
annotations_creators:
- no-annotation
language_creators:
- found
language:
- af
- als
- am
- an
- ar
- arz
- as
- ast
- av
- az
- azb
- ba
- bar
- bcl
- be
- bg
- bh
- bn
- bo
- bpy
- br
- bs
- bxr
- ca
- cbk
- ce
- ceb
- ckb
- cs
- cv
- cy
- da
- de
- dsb
- dv
- el
- eml
- en
- eo
- es
- et
- eu
- fa
- fi
- fr
- frr
- fy
- ga
- gd
- gl
- gn
- gom
- gu
- he
- hi
- hr
- hsb
- ht
- hu
- hy
- ia
- id
- ie
- ilo
- io
- is
- it
- ja
- jbo
- jv
- ka
- kk
- km
- kn
- ko
- krc
- ku
- kv
- kw
- ky
- la
- lb
- lez
- li
- lmo
- lo
- lrc
- lt
- lv
- mai
- mg
- mhr
- min
- mk
- ml
- mn
- mr
- mrj
- ms
- mt
- mwl
- my
- myv
- mzn
- nah
- nap
- nds
- ne
- new
- nl
- nn
- 'no'
- oc
- or
- os
- pa
- pam
- pl
- pms
- pnb
- ps
- pt
- qu
- rm
- ro
- ru
- rue
- sa
- sah
- scn
- sd
- sh
- si
- sk
- sl
- so
- sq
- sr
- su
- sv
- sw
- ta
- te
- tg
- th
- tk
- tl
- tr
- tt
- tyv
- ug
- uk
- ur
- uz
- vec
- vi
- vls
- vo
- wa
- war
- wuu
- xal
- xmf
- yi
- yo
- yue
- zh
multilinguality:
- multilingual
size_categories:
- n<1K
- 1K<n<10K
- 10K<n<100K
- 100K<n<1M
- 1M<n<10M
- 10M<n<100M
- 100M<n<1B
- 1B<n<10B
source_datasets:
- original
task_categories:
- text-generation
- fill-mask
task_ids:
- language-modeling
- masked-language-modeling
extra_gated_prompt: "By completing the form below, you acknowledge that the provided data is offered as is. Although we anticipate no problems, you accept full responsibility for any repercussions resulting from the use of this data. Furthermore, you agree that the data must not be utilized for malicious or harmful purposes towards humanity."
extra_gated_fields:
Name: text
Email: text
Affiliation: text
Country: text
Usecase: text
I have explicitly check with my jurisdiction and I confirm that downloading CulturaX is legal in the country/region where I am located right now, and for the use case that I have described above: checkbox
You agree to not attempt to determine the identity of individuals in this dataset: checkbox
---
<div align="center">
<h1> CulturaX </h1>
<h3> Cleaned, Enormous, and Public: The Multilingual Fuel to Democratize Large Language Models for 167 Languages </h3>
</div>
## Dataset Description
- **Repository:** [https://github.com/nlp-uoregon/CulturaX](https://github.com/nlp-uoregon/CulturaX)
- **Papers:** [CulturaX: A Cleaned, Enormous, and Multilingual Dataset for Large Language Models in 167 Languages](https://arxiv.org/abs/2309.09400)
## Dataset Summary
We present CulturaX, a substantial multilingual dataset with 6.3 trillion tokens in 167 languages, tailored for large language model (LLM) development. Our dataset undergoes meticulous cleaning and deduplication through a rigorous pipeline of multiple stages to accomplish the best quality for model training, including language identification, URL-based filtering, metric-based cleaning, document refinement, and data deduplication. We employ MinHash at document level to achieve fuzzy deduplication for the datasets in different languages. Our data cleaning framework includes diverse criteria and threshold selections, guided by extensive data samples, ensuring comprehensive noise filtering in various aspects. CulturaX is fully released to the public in HuggingFace to facilitate research and advancements in multilingual LLMs.
Our dataset combines the most recent iteration of mC4 (version 3.1.0) [1] with all accessible OSCAR corpora up to the present year, including 20.19, 21.09, 22.01, and 23.01 [2]. After deep cleaning and deduplication, CulturaX involves 16TB data in the parquet format (expanding to 27TB when unpacked). More than a half of our dataset is dedicated to non-English languages to significantly boost the data size and enhance the feasibility of training models in multilingual scenarios.
To obtain perplexity scores for data cleaning, we train a SentencePiece tokenizer and 5-gram Kneser-Ney language models with the KenLM library [3] on the 20230501 Wikipedia dumps. Our KenLM models are also released on HuggingFace: https://huggingface.co/uonlp/kenlm.
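KenLM models score text with per-token log10 probabilities, from which perplexity follows directly. The helper below is a generic formula sketch, not the project's actual filtering code:

```python
import math

def perplexity(log10_probs):
    """Perplexity from per-token log10 probabilities (KenLM reports log10):
    ppl = 10 ** (-(1/N) * sum(log10 p_i))."""
    return 10 ** (-math.fsum(log10_probs) / len(log10_probs))
```

As a sanity check, a uniform model over a vocabulary of size V assigns each token probability 1/V, so its perplexity is exactly V; documents scoring far above a chosen threshold under the Wikipedia-trained model can be treated as noisy.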
Details for the dataset can be found in our technical paper: [https://arxiv.org/abs/2309.09400](https://arxiv.org/abs/2309.09400)
You can download the dataset using Hugging Face datasets:
*You may need to follow these instructions to set up authentication before downloading the dataset: [https://huggingface.co/docs/huggingface_hub/quick-start#login](https://huggingface.co/docs/huggingface_hub/quick-start#login)*
```python
from datasets import load_dataset
ds = load_dataset("uonlp/CulturaX",
language="en",
use_auth_token=True)
```
### Languages
The supported languages and statistics for our dataset can be found below:
*(Note that the language code `als` and `eml` refer to `gsw` and `x-eml` in the OSCAR-2301 dataset.)*
| | Code | Language | # Documents | # Tokens | # Tokens (%) |
|----:|:-------|:-------------------------|:----------------|:--------------------|:------|
| 0 | en | English | 3,241,065,682 | 2,846,970,578,793 | 45.13 |
| 1 | ru | Russian | 799,310,908 | 737,201,800,363 | 11.69 |
| 2 | es | Spanish | 450,937,645 | 373,845,662,394 | 5.93 |
| 3 | de | German | 420,017,484 | 357,030,348,021 | 5.66 |
| 4 | fr | French | 363,754,348 | 319,332,674,695 | 5.06 |
| 5 | zh | Chinese | 218,624,604 | 227,055,380,882 | 3.60 |
| 6 | it | Italian | 211,309,922 | 165,446,410,843 | 2.62 |
| 7 | pt | Portuguese | 190,289,658 | 136,941,763,923 | 2.17 |
| 8 | pl | Polish | 142,167,217 | 117,269,087,143 | 1.86 |
| 9 | ja | Japanese | 111,188,475 | 107,873,841,351 | 1.71 |
| 10 | vi | Vietnamese | 102,411,180 | 98,453,464,077 | 1.56 |
| 11 | nl | Dutch | 117,392,666 | 80,032,209,900 | 1.27 |
| 12 | ar | Arabic | 74,027,952 | 69,354,335,076 | 1.10 |
| 13 | tr | Turkish | 94,207,460 | 64,292,787,164 | 1.02 |
| 14 | cs | Czech | 65,350,564 | 56,910,486,745 | 0.90 |
| 15 | fa | Persian | 59,531,144 | 45,947,657,495 | 0.73 |
| 16 | hu | Hungarian | 44,132,152 | 43,417,981,714 | 0.69 |
| 17 | el | Greek | 51,430,226 | 43,147,590,757 | 0.68 |
| 18 | ro | Romanian | 40,325,424 | 39,647,954,768 | 0.63 |
| 19 | sv | Swedish | 49,709,189 | 38,486,181,494 | 0.61 |
| 20 | uk | Ukrainian | 44,740,545 | 38,226,128,686 | 0.61 |
| 21 | fi | Finnish | 30,467,667 | 28,925,009,180 | 0.46 |
| 22 | ko | Korean | 20,557,310 | 24,765,448,392 | 0.39 |
| 23 | da | Danish | 25,429,808 | 22,921,651,314 | 0.36 |
| 24 | bg | Bulgarian | 24,131,819 | 22,917,954,776 | 0.36 |
| 25 | no | Norwegian | 18,907,310 | 18,426,628,868 | 0.29 |
| 26 | hi | Hindi | 19,665,355 | 16,791,362,871 | 0.27 |
| 27 | sk | Slovak | 18,582,517 | 16,442,669,076 | 0.26 |
| 28 | th | Thai | 20,960,550 | 15,717,374,014 | 0.25 |
| 29 | lt | Lithuanian | 13,339,785 | 14,247,110,836 | 0.23 |
| 30 | ca | Catalan | 15,531,777 | 12,530,288,006 | 0.20 |
| 31 | id | Indonesian | 23,251,368 | 12,062,966,061 | 0.19 |
| 32 | bn | Bangla | 12,436,596 | 9,572,929,804 | 0.15 |
| 33 | et | Estonian | 8,004,753 | 8,805,656,165 | 0.14 |
| 34 | sl | Slovenian | 7,335,378 | 8,007,587,522 | 0.13 |
| 35 | lv | Latvian | 7,136,587 | 7,845,180,319 | 0.12 |
| 36 | he | Hebrew | 4,653,979 | 4,937,152,096 | 0.08 |
| 37 | sr | Serbian | 4,053,166 | 4,619,482,725 | 0.07 |
| 38 | ta | Tamil | 4,728,460 | 4,378,078,610 | 0.07 |
| 39 | sq | Albanian | 5,205,579 | 3,648,893,215 | 0.06 |
| 40 | az | Azerbaijani | 5,084,505 | 3,513,351,967 | 0.06 |
| 41 | kk | Kazakh | 2,733,982 | 2,802,485,195 | 0.04 |
| 42 | ur | Urdu | 2,757,279 | 2,703,052,627 | 0.04 |
| 43 | ka | Georgian | 3,120,321 | 2,617,625,564 | 0.04 |
| 44 | hy | Armenian | 2,964,488 | 2,395,179,284 | 0.04 |
| 45 | is | Icelandic | 2,373,560 | 2,350,592,857 | 0.04 |
| 46 | ml | Malayalam | 2,693,052 | 2,100,556,809 | 0.03 |
| 47 | ne | Nepali | 3,124,040 | 2,061,601,961 | 0.03 |
| 48 | mk | Macedonian | 2,762,807 | 2,003,302,006 | 0.03 |
| 49 | mr | Marathi | 2,266,588 | 1,955,227,796 | 0.03 |
| 50 | mn | Mongolian | 1,928,828 | 1,850,667,656 | 0.03 |
| 51 | be | Belarusian | 1,643,486 | 1,791,473,041 | 0.03 |
| 52 | te | Telugu | 1,822,865 | 1,566,972,146 | 0.02 |
| 53 | gl | Galician | 1,785,963 | 1,382,539,693 | 0.02 |
| 54 | eu | Basque | 1,598,822 | 1,262,066,759 | 0.02 |
| 55 | kn | Kannada | 1,352,142 | 1,242,285,201 | 0.02 |
| 56 | gu | Gujarati | 1,162,878 | 1,131,730,537 | 0.02 |
| 57 | af | Afrikaans | 826,519 | 1,119,009,767 | 0.02 |
| 58 | my | Burmese | 865,575 | 882,606,546 | 0.01 |
| 59 | si | Sinhala | 753,655 | 880,289,097 | 0.01 |
| 60 | eo | Esperanto | 460,088 | 803,948,528 | 0.01 |
| 61 | km | Khmer | 1,013,181 | 746,664,132 | 0.01 |
| 62 | pa | Punjabi | 646,987 | 727,546,145 | 0.01 |
| 63 | cy | Welsh | 549,955 | 576,743,162 | 0.01 |
| 64 | ky | Kyrgyz | 570,922 | 501,442,620 | 0.01 |
| 65 | ga | Irish | 304,251 | 376,947,935 | 0.01 |
| 66 | ps | Pashto | 376,914 | 363,007,770 | 0.01 |
| 67 | am | Amharic | 243,349 | 358,206,762 | 0.01 |
| 68 | ku | Kurdish | 295,314 | 302,990,910 | 0.00 |
| 69 | tl | Filipino | 348,453 | 242,086,456 | 0.00 |
| 70 | yi | Yiddish | 141,156 | 217,584,643 | 0.00 |
| 71 | lo | Lao | 217,842 | 168,256,876 | 0.00 |
| 72 | fy | Western Frisian | 223,268 | 167,193,111 | 0.00 |
| 73 | sd | Sindhi | 109,162 | 147,487,058 | 0.00 |
| 74 | mg | Malagasy | 115,910 | 142,685,412 | 0.00 |
| 75 | or | Odia | 153,461 | 100,323,213 | 0.00 |
| 76 | as | Assamese | 52,627 | 83,787,896 | 0.00 |
| 77 | ug | Uyghur | 47,035 | 77,677,306 | 0.00 |
| 78 | uz | Uzbek | 87,219 | 75,250,787 | 0.00 |
| 79 | la | Latin | 48,968 | 44,176,580 | 0.00 |
| 80 | hr | Croatian | 460,690 | 40,796,811 | 0.00 |
| 81 | sw | Swahili | 66,506 | 30,708,309 | 0.00 |
| 82 | ms | Malay | 238,151 | 19,375,976 | 0.00 |
| 83 | br | Breton | 43,765 | 13,987,037 | 0.00 |
| 84 | sa | Sanskrit | 16,290 | 13,561,367 | 0.00 |
| 85 | gd | Scottish Gaelic | 8,408 | 4,796,485 | 0.00 |
| 86 | su | Sundanese | 1,554 | 1,308,460 | 0.00 |
| 87 | jv | Javanese | 2,058 | 625,429 | 0.00 |
| 88 | tg | Tajik | 483,835 | - | - |
| 89 | ceb | Cebuano | 263,890 | - | - |
| 90 | tt | Tatar | 218,102 | - | - |
| 91 | ckb | Central Kurdish | 172,035 | - | - |
| 92 | lb | Luxembourgish | 165,891 | - | - |
| 93 | mt | Maltese | 151,320 | - | - |
| 94 | nn | Norwegian Nynorsk | 126,083 | - | - |
| 95 | qu | Quechua | 1,202 | 72,101 | 0.00 |
| 96 | ba | Bashkir | 71,957 | - | - |
| 97 | arz | Egyptian Arabic | 71,625 | - | - |
| 98 | dv | Divehi | 66,702 | - | - |
| 99 | bo | Tibetan | 54,185 | - | - |
| 100 | sh | Serbian (Latin) | 45,619 | - | - |
| 101 | yo | Yoruba | 192 | 42,943 | 0.00 |
| 102 | bs | Bosnian | 1,237 | 39,768 | 0.00 |
| 103 | azb | South Azerbaijani | 29,833 | - | - |
| 104 | ht | Haitian Creole | 12 | 26,183 | 0.00 |
| 105 | war | Waray | 23,687 | - | - |
| 106 | cv | Chuvash | 22,570 | - | - |
| 107 | sah | Sakha | 22,141 | - | - |
| 108 | li | Limburgish | 206 | 18,532 | 0.00 |
| 109 | ce | Chechen | 17,322 | - | - |
| 110 | pnb | Western Panjabi | 15,625 | - | - |
| 111 | nds | Low German | 15,139 | - | - |
| 112 | tk | Turkmen | 14,393 | - | - |
| 113 | gn | Guarani | 103 | 12,708 | 0.00 |
| 114 | oc | Occitan | 10,556 | - | - |
| 115 | xmf | Mingrelian | 9,706 | - | - |
| 116 | ast | Asturian | 9,002 | - | - |
| 117 | os | Ossetic | 8,596 | - | - |
| 118 | mhr | Eastern Mari | 7,883 | - | - |
| 119 | pms | Piedmontese | 7,566 | - | - |
| 120 | als[*] | Swiss German | 6,936 | - | - |
| 121 | vo | Volapük | 6,621 | - | - |
| 122 | so | Somali | 39 | 6,053 | 0.00 |
| 123 | bpy | Bishnupriya | 5,087 | - | - |
| 124 | new | Newari | 4,344 | - | - |
| 125 | hsb | Upper Sorbian | 4,244 | - | - |
| 126 | lmo | Lombard | 3,530 | - | - |
| 127 | an | Aragonese | 2,746 | - | - |
| 128 | ilo | Iloko | 2,328 | - | - |
| 129 | mzn | Mazanderani | 1,914 | - | - |
| 130 | lez | Lezghian | 1,806 | - | - |
| 131 | rm | Romansh | 30 | 1,769 | 0.00 |
| 132 | krc | Karachay-Balkar | 1,745 | - | - |
| 133 | min | Minangkabau | 1,429 | - | - |
| 134 | kv | Komi | 1,396 | - | - |
| 135 | wa | Walloon | 1,383 | - | - |
| 136 | jbo | Lojban | 1,349 | - | - |
| 137 | io | Ido | 1,144 | - | - |
| 138 | mrj | Western Mari | 1,056 | - | - |
| 139 | gom | Goan Konkani | 721 | - | - |
| 140 | ia | Interlingua | 613 | - | - |
| 141 | av | Avaric | 438 | - | - |
| 142 | bh | Bihari languages | 265 | - | - |
| 143 | wuu | Wu Chinese | 222 | - | - |
| 144 | nah | Nahuatl languages | 131 | - | - |
| 145 | vec | Venetian | 113 | - | - |
| 146 | bxr | Russia Buriat | 100 | - | - |
| 147 | kw | Cornish | 94 | - | - |
| 148 | mai | Maithili | 93 | - | - |
| 149 | eml[*] | Emiliano-Romagnol | 91 | - | - |
| 150 | dsb | Lower Sorbian | 59 | - | - |
| 151 | xal | Kalmyk | 51 | - | - |
| 152 | lrc | Northern Luri | 43 | - | - |
| 153 | nap | Neapolitan | 31 | - | - |
| 154 | tyv | Tuvinian | 23 | - | - |
| 155 | scn | Sicilian | 21 | - | - |
| 156 | frr | Northern Frisian | 11 | - | - |
| 157 | mwl | Mirandese | 9 | - | - |
| 158 | myv | Erzya | 4 | - | - |
| 159 | ie | Interlingue | 4 | - | - |
| 160 | pam | Pampanga | 4 | - | - |
| 161 | bar | Bavarian | 3 | - | - |
| 162 | yue | Yue Chinese | 3 | - | - |
| 163 | cbk | Chavacano | 2 | - | - |
| 164 | bcl | Central Bikol | 1 | - | - |
| 165 | vls | West Flemish | 1 | - | - |
| 166 | rue | Rusyn | 1 | - | - |
### Dataset Structure
```json
{
"text": ...,
"timestamp": ...,
"url": ...,
"source": "mc4" | "OSCAR-xxxx",
}
```
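Each record's `source` field indicates which upstream corpus it came from, which is handy for per-source analysis. A small sketch over toy records matching the schema above (the field values are made up):

```python
from collections import Counter

# Toy records matching the schema above; the values are made up.
records = [
    {"text": "...", "timestamp": "2023-01-01", "url": "https://example.com/a", "source": "mc4"},
    {"text": "...", "timestamp": "2023-01-02", "url": "https://example.com/b", "source": "OSCAR-2301"},
    {"text": "...", "timestamp": "2022-06-15", "url": "https://example.com/c", "source": "OSCAR-2201"},
]

def count_by_origin(rows):
    """Bucket records into the two upstream corpora via the `source` field."""
    return Counter("mC4" if r["source"] == "mc4" else "OSCAR" for r in rows)
```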
## Considerations for Using the Data
As CulturaX is a cleaned version of the mC4 and OSCAR datasets, both of which were extracted from CommonCrawl, the data might still contain personal and sensitive information.
This must be considered prior to using this dataset for any purpose, such as training deep learning models.
## License Information
The license terms for CulturaX strictly follow those of `mC4` and `OSCAR`. Please refer to both licenses below when using this dataset.
- [mC4 license](https://huggingface.co/datasets/allenai/c4#license)
- [OSCAR license](https://huggingface.co/datasets/oscar-corpus/OSCAR-2301#licensing-information)
## Citation
To cite CulturaX, please use:
```
@misc{nguyen2023culturax,
title={CulturaX: A Cleaned, Enormous, and Multilingual Dataset for Large Language Models in 167 Languages},
author={Thuat Nguyen and Chien Van Nguyen and Viet Dac Lai and Hieu Man and Nghia Trung Ngo and Franck Dernoncourt and Ryan A. Rossi and Thien Huu Nguyen},
year={2023},
eprint={2309.09400},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
## References
[1] Linting Xue, Noah Constant, Adam Roberts, Mihir Kale, Rami Al-Rfou, Aditya Siddhant, Aditya Barua, and Colin Raffel. 2021. mT5: A massively multilingual pre-trained text-to-text transformer. In NAACL 2021. https://huggingface.co/datasets/mc4
[2] Pedro Javier Ortiz Suárez, Benoît Sagot, and Laurent Romary. 2019. Asynchronous pipelines for processing huge corpora on medium to low resource infrastructures. In Proceedings of the Workshop on Challenges in the Management of Large Corpora (CMLC-7). https://oscar-project.org/
[3] Kenneth Heafield. 2011. KenLM: Faster and smaller language model queries. In Proceedings of the Sixth Workshop on Statistical Machine Translation. https://github.com/kpu/kenlm |
vikp/code_instructions_filtered_7k | 2023-08-29T17:15:26.000Z | [
"region:us"
] | vikp | null | null | null | 1 | 59 | ---
dataset_info:
features:
- name: input
dtype: string
- name: instruction
dtype: string
- name: output
dtype: string
- name: quality_prob
dtype: float64
- name: learning_prob
dtype: float64
splits:
- name: train
num_bytes: 3935708.9048315734
num_examples: 7526
download_size: 2442024
dataset_size: 3935708.9048315734
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "code_instructions_filtered_7k"
Filtered version of `sahil2801/code_instructions_120k` based on manual, quality, and learning value filters. |
dim/ru_instruct_gpt4 | 2023-08-31T15:07:24.000Z | [
"license:cc-by-nc-4.0",
"region:us"
] | dim | null | null | null | 0 | 59 | ---
license: cc-by-nc-4.0
dataset_info:
features:
- name: prompt
dtype: string
- name: solution
dtype: string
splits:
- name: train
num_bytes: 18294770
num_examples: 14222
download_size: 9373283
dataset_size: 18294770
---
|
dim/ultrachat_ru | 2023-08-31T16:44:16.000Z | [
"license:cc-by-nc-4.0",
"region:us"
] | dim | null | null | null | 0 | 59 | ---
license: cc-by-nc-4.0
dataset_info:
features:
- name: conversation
sequence: string
splits:
- name: train
num_bytes: 4495105
num_examples: 500
download_size: 1919370
dataset_size: 4495105
---
|
factored/saleswiz_is_relevant | 2023-09-14T20:01:32.000Z | [
"region:us"
] | factored | null | null | null | 0 | 59 | ---
dataset_info:
features:
- name: text
dtype: string
- name: label
dtype:
class_label:
names:
'0': '0'
'1': '1'
splits:
- name: train
num_bytes: 187533.28961748633
num_examples: 640
- name: validation
num_bytes: 80580.71038251366
num_examples: 275
download_size: 178216
dataset_size: 268114.0
---
# Dataset Card for "saleswiz_is_relevant"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
dim/resh_edu_short_prompts | 2023-09-01T22:24:23.000Z | [
"region:us"
] | dim | null | null | null | 0 | 59 | ---
dataset_info:
features:
- name: solution
dtype: string
- name: prompt
dtype: string
splits:
- name: train
num_bytes: 12371576
num_examples: 2106
download_size: 5361614
dataset_size: 12371576
---
# Dataset Card for "resh_edu_short_prompts"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
manu/project_gutenberg | 2023-09-07T15:33:32.000Z | [
"task_categories:text-generation",
"size_categories:10K<n<100K",
"language:fr",
"language:en",
"language:zh",
"language:pt",
"language:pl",
"language:nl",
"language:ru",
"language:sv",
"language:it",
"language:de",
"language:es",
"region:us"
] | manu | null | null | null | 1 | 59 | ---
dataset_info:
features:
- name: id
dtype: string
- name: text
dtype: string
splits:
- name: de
num_bytes: 1070196924
num_examples: 3131
- name: en
num_bytes: 25616345280
num_examples: 61340
- name: es
num_bytes: 496728508
num_examples: 1202
- name: fr
num_bytes: 2338871137
num_examples: 5493
- name: it
num_bytes: 383733486
num_examples: 1008
- name: nl
num_bytes: 504939551
num_examples: 1420
- name: pl
num_bytes: 4864460
num_examples: 34
- name: pt
num_bytes: 204058452
num_examples: 1111
- name: ru
num_bytes: 943593
num_examples: 6
- name: sv
num_bytes: 116664385
num_examples: 388
- name: zh
num_bytes: 174238359
num_examples: 437
download_size: 14399256761
dataset_size: 30911584135
task_categories:
- text-generation
language:
- fr
- en
- zh
- pt
- pl
- nl
- ru
- sv
- it
- de
- es
pretty_name: Project Gutenberg
size_categories:
- 10K<n<100K
---
# Dataset Card for "Project Gutenberg"
Project Gutenberg is a library of over 70,000 free eBooks, hosted at https://www.gutenberg.org/.
Each example corresponds to a single book and contains a header and a footer of a few lines (delimited by *** Start of *** and *** End of *** markers).
### Usage
```python
from datasets import load_dataset
ds = load_dataset("manu/project_gutenberg", split="fr", streaming=True)
print(next(iter(ds)))
```
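Since each example still carries the header and footer mentioned above, you will usually want to trim them. A minimal sketch that matches the *** Start of *** / *** End of *** markers loosely (the exact wording varies between books):

```python
def strip_gutenberg_wrapper(text):
    """Keep only the body between the '*** Start of' and '*** End of'
    marker lines; fall back to the full text if markers are absent."""
    lines = text.splitlines()
    start = end = None
    for i, line in enumerate(lines):
        low = line.lower()
        if start is None and "*** start of" in low:
            start = i + 1
        elif "*** end of" in low:
            end = i
            break
    if start is None or end is None:
        return text
    return "\n".join(lines[start:end]).strip()
```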
### License
Full license is available here:
https://www.gutenberg.org/policy/license.html
#### Summary
For nearly all uses, in nearly all parts of the world, the opening words of all of our eBooks apply: “This eBook is for the use of anyone anywhere in the United States and most other parts of the world at no cost and with almost no restrictions whatsoever. You may copy it, give it away or re-use it under the terms of the Project Gutenberg License included with this eBook or online at www.gutenberg.org. If you are not located in the United States, you’ll have to check the laws of the country where you are located before using this eBook.”
##### Using the Project Gutenberg Trademark
If you want to use the name Project Gutenberg anywhere in the eBooks you distribute, on the distribution medium, or in advertising, you have to obey these rules:
- You may only distribute verbatim copies of the eBooks. No changes are allowed to the eBook contents (though reformatting the eBook to a different file format is considered okay).
- If you charge money for the copies you distribute, you have to pay royalties to Project Gutenberg.
- You must refund your clients for defective copies or if they don’t agree with the Project Gutenberg license.

If you don’t agree with any of the above-mentioned restrictions, you may not use the Project Gutenberg trademark. You may still distribute the eBooks if you strip the Project Gutenberg license and all references to Project Gutenberg. |
KyS/SpeakerEmbedding | 2023-09-13T07:01:20.000Z | [
"region:us"
] | KyS | null | null | null | 0 | 59 | ---
dataset_info:
features:
- name: Speakers
dtype: string
- name: Audio
struct:
- name: array
sequence: float64
- name: path
dtype: string
- name: sampling_rate
dtype: int64
splits:
- name: train
num_bytes: 29234994
num_examples: 46
download_size: 7249844
dataset_size: 29234994
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "SpeakerEmbedding"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
FanChen0116/bus_few4_8x_pvi | 2023-09-23T17:27:11.000Z | [
"region:us"
] | FanChen0116 | null | null | null | 0 | 59 | ---
dataset_info:
features:
- name: id
dtype: int64
- name: tokens
sequence: string
- name: labels
sequence:
class_label:
names:
'0': O
'1': I-from_location
'2': B-from_location
'3': B-leaving_date
'4': I-leaving_date
'5': I-to_location
'6': B-to_location
- name: request_slot
sequence: string
splits:
- name: train
num_bytes: 68839
num_examples: 280
- name: validation
num_bytes: 6900
num_examples: 35
- name: test
num_bytes: 70618
num_examples: 377
download_size: 13438
dataset_size: 146357
---
# Dataset Card for "bus_few4_8x_pvi"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
SebastianMoncaleano/cammel_v2_training | 2023-09-24T01:29:08.000Z | [
"region:us"
] | SebastianMoncaleano | null | null | null | 0 | 59 | Entry not found |
shyam-incedoinc/qa-finetune-dataset | 2023-10-02T10:33:15.000Z | [
"region:us"
] | shyam-incedoinc | null | null | null | 0 | 59 | Entry not found |
bookcorpusopen | 2023-04-05T09:41:59.000Z | [
"task_categories:text-generation",
"task_categories:fill-mask",
"task_ids:language-modeling",
"task_ids:masked-language-modeling",
"annotations_creators:no-annotation",
"language_creators:found",
"multilinguality:monolingual",
"size_categories:10K<n<100K",
"source_datasets:original",
"language:en"... | null | Books are a rich source of both fine-grained information (what a character, an object or a scene looks like) and high-level semantics (what someone is thinking or feeling, and how these states evolve through a story).
This version of bookcorpus has 17868 dataset items (books). Each item contains two fields: title and text. The title is the name of the book (just the file name) while text contains unprocessed book text. The bookcorpus has been prepared by Shawn Presser and is generously hosted by The-Eye. The-Eye is a non-profit, community driven platform dedicated to the archiving and long-term preservation of any and all data including but by no means limited to... websites, books, games, software, video, audio, other digital-obscura and ideas. | @InProceedings{Zhu_2015_ICCV,
title = {Aligning Books and Movies: Towards Story-Like Visual Explanations by Watching Movies and Reading Books},
author = {Zhu, Yukun and Kiros, Ryan and Zemel, Rich and Salakhutdinov, Ruslan and Urtasun, Raquel and Torralba, Antonio and Fidler, Sanja},
booktitle = {The IEEE International Conference on Computer Vision (ICCV)},
month = {December},
year = {2015}
} | null | 22 | 58 | ---
annotations_creators:
- no-annotation
language_creators:
- found
language:
- en
license:
- unknown
multilinguality:
- monolingual
pretty_name: BookCorpusOpen
size_categories:
- 10K<n<100K
source_datasets:
- original
task_categories:
- text-generation
- fill-mask
task_ids:
- language-modeling
- masked-language-modeling
paperswithcode_id: bookcorpus
dataset_info:
features:
- name: title
dtype: string
- name: text
dtype: string
config_name: plain_text
splits:
- name: train
num_bytes: 6643435392
num_examples: 17868
download_size: 2404269430
dataset_size: 6643435392
---
# Dataset Card for BookCorpusOpen
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [https://github.com/soskek/bookcorpus/issues/27](https://github.com/soskek/bookcorpus/issues/27)
- **Repository:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Paper:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Point of Contact:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Size of downloaded dataset files:** 2.40 GB
- **Size of the generated dataset:** 6.64 GB
- **Total amount of disk used:** 9.05 GB
### Dataset Summary
Books are a rich source of both fine-grained information (what a character, an object or a scene looks like) and high-level semantics (what someone is thinking or feeling, and how these states evolve through a story).
This version of bookcorpus has 17868 dataset items (books). Each item contains two fields: title and text. The title is the name of the book (just the file name) while text contains unprocessed book text. The bookcorpus has been prepared by Shawn Presser and is generously hosted by The-Eye. The-Eye is a non-profit, community driven platform dedicated to the archiving and long-term preservation of any and all data including but by no means limited to... websites, books, games, software, video, audio, other digital-obscura and ideas.
### Supported Tasks and Leaderboards
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Languages
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Dataset Structure
### Data Instances
#### plain_text
- **Size of downloaded dataset files:** 2.40 GB
- **Size of the generated dataset:** 6.64 GB
- **Total amount of disk used:** 9.05 GB
An example of 'train' looks as follows.
```
This example was too long and was cropped:
{
"text": "\"\\n\\nzONE\\n\\n## The end and the beginning\\n\\nby\\n\\nPhilip F. Blood\\n\\nSMASHWORDS EDITION\\n\\nVersion 3.55\\n\\nPUBLISHED BY:\\n\\nPhi...",
"title": "zone-the-end-and-the-beginning.epub.txt"
}
```
### Data Fields
The data fields are the same among all splits.
#### plain_text
- `title`: a `string` feature.
- `text`: a `string` feature.
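As noted above, `title` is just the source file name (e.g. `zone-the-end-and-the-beginning.epub.txt`). A tiny sketch for turning it into a display string; the suffix handling is based on that single example and may not cover every file:

```python
def readable_title(filename):
    """Derive a human-readable title from a raw `title` file name."""
    stem = filename
    for suffix in (".txt", ".epub"):  # strip in this order: name.epub.txt
        if stem.endswith(suffix):
            stem = stem[: -len(suffix)]
    return stem.replace("-", " ").strip()
```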
### Data Splits
| name |train|
|----------|----:|
|plain_text|17868|
## Dataset Creation
### Curation Rationale
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### Who are the source language producers?
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Annotations
#### Annotation process
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### Who are the annotators?
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Personal and Sensitive Information
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Discussion of Biases
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Other Known Limitations
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Additional Information
### Dataset Curators
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Licensing Information
The books have been crawled from smashwords.com, see their [terms of service](https://www.smashwords.com/about/tos) for more information.
A data sheet for this dataset has also been created and published in [Addressing "Documentation Debt" in Machine Learning Research: A Retrospective Datasheet for BookCorpus](https://arxiv.org/abs/2105.05241)
### Citation Information
```
@InProceedings{Zhu_2015_ICCV,
title = {Aligning Books and Movies: Towards Story-Like Visual Explanations by Watching Movies and Reading Books},
author = {Zhu, Yukun and Kiros, Ryan and Zemel, Rich and Salakhutdinov, Ruslan and Urtasun, Raquel and Torralba, Antonio and Fidler, Sanja},
booktitle = {The IEEE International Conference on Computer Vision (ICCV)},
month = {December},
year = {2015}
}
```
### Contributions
Thanks to [@vblagoje](https://github.com/vblagoje) for adding this dataset. |
newsqa | 2023-06-01T14:59:49.000Z | [
"task_categories:question-answering",
"task_ids:extractive-qa",
"annotations_creators:crowdsourced",
"language_creators:crowdsourced",
"multilinguality:monolingual",
"size_categories:100K<n<1M",
"size_categories:10K<n<100K",
"source_datasets:original",
"language:en",
"license:mit",
"region:us"
] | null | NewsQA is a challenging machine comprehension dataset of over 100,000 human-generated question-answer pairs. Crowdworkers supply questions and answers based on a set of over 10,000 news articles from CNN, with answers consisting of spans of text from the corresponding articles. | @inproceedings{trischler2017newsqa,
title={NewsQA: A Machine Comprehension Dataset},
author={Trischler, Adam and Wang, Tong and Yuan, Xingdi and Harris, Justin and Sordoni, Alessandro and Bachman, Philip and Suleman, Kaheer},
booktitle={Proceedings of the 2nd Workshop on Representation Learning for NLP},
pages={191--200},
year={2017}
} | null | 6 | 58 | ---
annotations_creators:
- crowdsourced
language_creators:
- crowdsourced
language:
- en
license:
- mit
multilinguality:
- monolingual
size_categories:
- 100K<n<1M
- 10K<n<100K
source_datasets:
- original
task_categories:
- question-answering
task_ids:
- extractive-qa
paperswithcode_id: newsqa
pretty_name: NewsQA
dataset_info:
- config_name: combined-csv
features:
- name: story_id
dtype: string
- name: story_text
dtype: string
- name: question
dtype: string
- name: answer_char_ranges
dtype: string
splits:
- name: train
num_bytes: 465942194
num_examples: 119633
download_size: 0
dataset_size: 465942194
- config_name: combined-json
features:
- name: storyId
dtype: string
- name: text
dtype: string
- name: type
dtype: string
- name: questions
sequence:
- name: q
dtype: string
- name: isAnswerAbsent
dtype: int32
- name: isQuestionBad
dtype: int32
- name: consensus
struct:
- name: s
dtype: int32
- name: e
dtype: int32
- name: badQuestion
dtype: bool
- name: noAnswer
dtype: bool
- name: answers
sequence:
- name: sourcerAnswers
sequence:
- name: s
dtype: int32
- name: e
dtype: int32
- name: badQuestion
dtype: bool
- name: noAnswer
dtype: bool
- name: validated_answers
sequence:
- name: s
dtype: int32
- name: e
dtype: int32
- name: badQuestion
dtype: bool
- name: noAnswer
dtype: bool
- name: count
dtype: int32
splits:
- name: train
num_bytes: 68667276
num_examples: 12744
download_size: 0
dataset_size: 68667276
- config_name: split
features:
- name: story_id
dtype: string
- name: story_text
dtype: string
- name: question
dtype: string
- name: answer_token_ranges
dtype: string
splits:
- name: train
num_bytes: 362031288
num_examples: 92549
- name: test
num_bytes: 19763673
num_examples: 5126
- name: validation
num_bytes: 19862778
num_examples: 5166
download_size: 0
dataset_size: 401657739
config_names:
- combined-csv
- combined-json
- split
---
# Dataset Card for NewsQA
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** https://www.microsoft.com/en-us/research/project/newsqa-dataset/
- **Repository:** https://github.com/Maluuba/newsqa
- **Paper:** https://www.aclweb.org/anthology/W17-2623/
- **Leaderboard:** [Needs More Information]
- **Point of Contact:** [Needs More Information]
### Dataset Summary
NewsQA is a challenging machine comprehension dataset of over 100,000 human-generated question-answer pairs.
Crowdworkers supply questions and answers based on a set of over 10,000 news articles from CNN, with answers consisting of spans of text from the corresponding articles.
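Answers are stored as character offsets into the article text (see the `consensus` records under Data Instances). A small sketch of resolving one such record to its answer string; the toy story below is made up:

```python
def consensus_answer(story_text, consensus):
    """Resolve a NewsQA consensus record to its answer string.

    `consensus` carries character offsets `s`/`e` into the story text,
    plus flags for bad or unanswerable questions."""
    if consensus.get("badQuestion") or consensus.get("noAnswer"):
        return None
    return story_text[consensus["s"]:consensus["e"]]
```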
### Supported Tasks and Leaderboards
[Needs More Information]
### Languages
English
## Dataset Structure
### Data Instances
```
{'storyId': './cnn/stories/42d01e187213e86f5fe617fe32e716ff7fa3afc4.story',
'text': 'NEW DELHI, India (CNN) -- A high court in northern India on Friday acquitted a wealthy businessman facing the death sentence for the killing of a teen in a case dubbed "the house of horrors."\n\n\n\nMoninder Singh Pandher was sentenced to death by a lower court in February.\n\n\n\nThe teen was one of 19 victims -- children and young women -- in one of the most gruesome serial killings in India in recent years.\n\n\n\nThe Allahabad high court has acquitted Moninder Singh Pandher, his lawyer Sikandar B. Kochar told CNN.\n\n\n\nPandher and his domestic employee Surinder Koli were sentenced to death in February by a lower court for the rape and murder of the 14-year-old.\n\n\n\nThe high court upheld Koli\'s death sentence, Kochar said.\n\n\n\nThe two were arrested two years ago after body parts packed in plastic bags were found near their home in Noida, a New Delhi suburb. Their home was later dubbed a "house of horrors" by the Indian media.\n\n\n\nPandher was not named a main suspect by investigators initially, but was summoned as co-accused during the trial, Kochar said.\n\n\n\nKochar said his client was in Australia when the teen was raped and killed.\n\n\n\nPandher faces trial in the remaining 18 killings and could remain in custody, the attorney said.',
'type': 'train',
'questions': {'q': ['What was the amount of children murdered?',
'When was Pandher sentenced to death?',
'The court aquitted Moninder Singh Pandher of what crime?',
'who was acquitted',
'who was sentenced',
'What was Moninder Singh Pandher acquitted for?',
'Who was sentenced to death in February?',
'how many people died',
'How many children and young women were murdered?'],
'isAnswerAbsent': [0, 0, 0, 0, 0, 0, 0, 0, 0],
'isQuestionBad': [0, 0, 0, 0, 0, 0, 0, 0, 0],
'consensus': [{'s': 294, 'e': 297, 'badQuestion': False, 'noAnswer': False},
{'s': 261, 'e': 271, 'badQuestion': False, 'noAnswer': False},
{'s': 624, 'e': 640, 'badQuestion': False, 'noAnswer': False},
{'s': 195, 'e': 218, 'badQuestion': False, 'noAnswer': False},
{'s': 195, 'e': 218, 'badQuestion': False, 'noAnswer': False},
{'s': 129, 'e': 151, 'badQuestion': False, 'noAnswer': False},
{'s': 195, 'e': 218, 'badQuestion': False, 'noAnswer': False},
{'s': 294, 'e': 297, 'badQuestion': False, 'noAnswer': False},
{'s': 294, 'e': 297, 'badQuestion': False, 'noAnswer': False}],
'answers': [{'sourcerAnswers': [{'s': [294],
'e': [297],
'badQuestion': [False],
'noAnswer': [False]},
{'s': [0], 'e': [0], 'badQuestion': [False], 'noAnswer': [True]},
{'s': [0], 'e': [0], 'badQuestion': [False], 'noAnswer': [True]}]},
{'sourcerAnswers': [{'s': [261],
'e': [271],
'badQuestion': [False],
'noAnswer': [False]},
{'s': [258], 'e': [271], 'badQuestion': [False], 'noAnswer': [False]},
{'s': [261], 'e': [271], 'badQuestion': [False], 'noAnswer': [False]}]},
{'sourcerAnswers': [{'s': [26],
'e': [33],
'badQuestion': [False],
'noAnswer': [False]},
{'s': [0], 'e': [0], 'badQuestion': [False], 'noAnswer': [True]},
{'s': [624], 'e': [640], 'badQuestion': [False], 'noAnswer': [False]}]},
{'sourcerAnswers': [{'s': [195],
'e': [218],
'badQuestion': [False],
'noAnswer': [False]},
{'s': [195], 'e': [218], 'badQuestion': [False], 'noAnswer': [False]}]},
{'sourcerAnswers': [{'s': [0],
'e': [0],
'badQuestion': [False],
'noAnswer': [True]},
{'s': [195, 232],
'e': [218, 271],
'badQuestion': [False, False],
'noAnswer': [False, False]},
{'s': [0], 'e': [0], 'badQuestion': [False], 'noAnswer': [True]}]},
{'sourcerAnswers': [{'s': [129],
'e': [192],
'badQuestion': [False],
'noAnswer': [False]},
{'s': [129], 'e': [151], 'badQuestion': [False], 'noAnswer': [False]},
{'s': [133], 'e': [151], 'badQuestion': [False], 'noAnswer': [False]}]},
{'sourcerAnswers': [{'s': [195],
'e': [218],
'badQuestion': [False],
'noAnswer': [False]},
{'s': [195], 'e': [218], 'badQuestion': [False], 'noAnswer': [False]}]},
{'sourcerAnswers': [{'s': [294],
'e': [297],
'badQuestion': [False],
'noAnswer': [False]},
{'s': [294], 'e': [297], 'badQuestion': [False], 'noAnswer': [False]}]},
{'sourcerAnswers': [{'s': [294],
'e': [297],
'badQuestion': [False],
'noAnswer': [False]},
{'s': [294], 'e': [297], 'badQuestion': [False], 'noAnswer': [False]}]}],
'validated_answers': [{'s': [0, 294],
'e': [0, 297],
'badQuestion': [False, False],
'noAnswer': [True, False],
'count': [1, 2]},
{'s': [], 'e': [], 'badQuestion': [], 'noAnswer': [], 'count': []},
{'s': [624],
'e': [640],
'badQuestion': [False],
'noAnswer': [False],
'count': [2]},
{'s': [], 'e': [], 'badQuestion': [], 'noAnswer': [], 'count': []},
{'s': [195],
'e': [218],
'badQuestion': [False],
'noAnswer': [False],
'count': [2]},
{'s': [129],
'e': [151],
'badQuestion': [False],
'noAnswer': [False],
'count': [2]},
{'s': [], 'e': [], 'badQuestion': [], 'noAnswer': [], 'count': []},
{'s': [], 'e': [], 'badQuestion': [], 'noAnswer': [], 'count': []},
{'s': [], 'e': [], 'badQuestion': [], 'noAnswer': [], 'count': []}]}}
```
### Data Fields
Configuration: combined-csv
- 'story_id': An identifier of the story.
- 'story_text': Text of the story.
- 'question': A question about the story.
- 'answer_char_ranges': The raw data collected for character based indices to answers in story_text. E.g. 196:228|196:202,217:228|None. Answers from different crowdsourcers are separated by `|`; within those, multiple selections from the same crowdsourcer are separated by `,`. `None` means the crowdsourcer thought there was no answer to the question in the story. The start is inclusive and the end is exclusive. The end may point to whitespace after a token.
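The raw `answer_char_ranges` string can be unpacked with a few lines of Python (a sketch based on the format described above; the function name is ours):

```python
def parse_answer_char_ranges(ranges: str):
    """Parse e.g. '196:228|196:202,217:228|None' into a list of per-crowdsourcer
    span lists. None means the crowdsourcer found no answer in the story."""
    sourcers = []
    for sourcer in ranges.split('|'):
        if sourcer == 'None':
            sourcers.append(None)
            continue
        spans = []
        for span in sourcer.split(','):
            start, end = span.split(':')
            # start is inclusive, end is exclusive
            spans.append((int(start), int(end)))
        sourcers.append(spans)
    return sourcers

print(parse_answer_char_ranges('196:228|196:202,217:228|None'))
# [[(196, 228)], [(196, 202), (217, 228)], None]
```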
Configuration: combined-json
- 'storyId': An identifier of the story.
- 'text': Text of the story.
- 'type': Split type. Will be "train", "validation" or "test".
- 'questions': A list containing the following:
- 'q': A question about the story.
- 'isAnswerAbsent': Proportion of crowdsourcers that said there was no answer to the question in the story.
- 'isQuestionBad': Proportion of crowdsourcers that said the question does not make sense.
- 'consensus': The consensus answer. Use this field to pick the best continuous answer span from the text. If you want to know about a question having multiple answers in the text then you can use the more detailed "answers" and "validated_answers". The object can have start and end positions like in the example above or can be {"badQuestion": true} or {"noAnswer": true}. Note that there is only one consensus answer since it's based on the majority agreement of the crowdsourcers.
- 's': Start of the answer. The first character of the answer in "text" (inclusive).
- 'e': End of the answer. The last character of the answer in "text" (exclusive).
- 'badQuestion': The validator said that the question did not make sense.
- 'noAnswer': The crowdsourcer said that there was no answer to the question in the text.
- 'answers': The answers from various crowdsourcers.
- 'sourcerAnswers': The answer provided from one crowdsourcer.
- 's': Start of the answer. The first character of the answer in "text" (inclusive).
- 'e': End of the answer. The last character of the answer in "text" (exclusive).
- 'badQuestion': The crowdsourcer said that the question did not make sense.
- 'noAnswer': The crowdsourcer said that there was no answer to the question in the text.
- 'validated_answers': The answers from the validators.
- 's': Start of the answer. The first character of the answer in "text" (inclusive).
- 'e': End of the answer. The last character of the answer in "text" (exclusive).
- 'badQuestion': The validator said that the question did not make sense.
- 'noAnswer': The validator said that there was no answer to the question in the text.
- 'count': The number of validators that agreed with this answer.
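Because 's' is inclusive and 'e' is exclusive, the consensus answer can be sliced directly out of "text" (a minimal sketch; the helper and the sample sentence are ours):

```python
def consensus_answer(text: str, consensus: dict):
    """Return the consensus answer string, or None for bad/unanswerable questions."""
    if consensus.get('badQuestion') or consensus.get('noAnswer'):
        return None
    # 's' is inclusive and 'e' is exclusive, so a plain slice works.
    return text[consensus['s']:consensus['e']].strip()

text = "The teen was one of 19 victims in the case."
print(consensus_answer(text, {'s': 20, 'e': 22, 'badQuestion': False, 'noAnswer': False}))
# 19
```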
Configuration: split
- 'story_id': An identifier of the story
- 'story_text': text of the story
- 'question': A question about the story.
- 'answer_token_ranges': Word based indices to answers in story_text. E.g. 196:202,217:228. Multiple selections from the same answer are separated by `,`. The start is inclusive and the end is exclusive. The end may point to whitespace after a token.
### Data Splits
| name | train | validation | test |
|---------------|-----------:|-----------:|--------:|
| combined-csv | 119633 | | |
| combined-json | 12744 | | |
| split | 92549 | 5166 | 5126 |
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[Needs More Information]
#### Who are the source language producers?
[Needs More Information]
### Annotations
#### Annotation process
[Needs More Information]
#### Who are the annotators?
[Needs More Information]
### Personal and Sensitive Information
[Needs More Information]
## Considerations for Using the Data
### Social Impact of Dataset
[Needs More Information]
### Discussion of Biases
[Needs More Information]
### Other Known Limitations
[Needs More Information]
## Additional Information
### Dataset Curators
[Needs More Information]
### Licensing Information
NewsQA Code
Copyright (c) Microsoft Corporation
All rights reserved.
MIT License
Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the "Software"), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
### Citation Information
```
@inproceedings{trischler2017newsqa,
  title={NewsQA: A Machine Comprehension Dataset},
  author={Trischler, Adam and Wang, Tong and Yuan, Xingdi and Harris, Justin and Sordoni, Alessandro and Bachman, Philip and Suleman, Kaheer},
  booktitle={Proceedings of the 2nd Workshop on Representation Learning for NLP},
  pages={191--200},
  year={2017}
}
```
### Contributions
Thanks to [@rsanjaykamath](https://github.com/rsanjaykamath) for adding this dataset. |
giganticode/java-cmpx-v1 | 2022-07-01T20:32:52.000Z | [
"task_categories:text-classification",
"task_ids:multi-class-classification",
"multilinguality:monolingual",
"size_categories:unknown",
"language:java",
"license:mit",
"region:us"
] | giganticode | null | null | null | 1 | 58 | ---
language:
- java
license:
- mit
multilinguality:
- monolingual
pretty_name: java-cmpx
size_categories:
- unknown
source_datasets: []
task_categories:
- text-classification
task_ids:
- multi-class-classification
--- |
webis/args_me | 2022-09-21T12:09:09.000Z | [
"task_categories:text-retrieval",
"task_ids:document-retrieval",
"annotations_creators:machine-generated",
"language_creators:crowdsourced",
"multilinguality:monolingual",
"size_categories:100K<n<1M",
"source_datasets:original",
"language:en",
"license:cc-by-4.0",
"region:us"
] | webis | The args.me corpus (version 1.0, cleaned) comprises 382 545 arguments crawled from four debate portals in the middle of 2019. The debate portals are Debatewise, IDebate.org, Debatepedia, and Debate.org. The arguments are extracted using heuristics that are designed for each debate portal. | @dataset{yamen_ajjour_2020_4139439,
author = {Yamen Ajjour and
Henning Wachsmuth and
Johannes Kiesel and
Martin Potthast and
Matthias Hagen and
Benno Stein},
title = {args.me corpus},
month = oct,
year = 2020,
publisher = {Zenodo},
version = {1.0-cleaned},
doi = {10.5281/zenodo.4139439},
url = {https://doi.org/10.5281/zenodo.4139439}
} | null | 1 | 58 | ---
annotations_creators:
- machine-generated
language_creators:
- crowdsourced
language:
- en
license:
- cc-by-4.0
multilinguality:
- monolingual
pretty_name: Webis args.me argument corpus
size_categories:
- 100K<n<1M
source_datasets:
- original
task_categories:
- text-retrieval
task_ids:
- document-retrieval
---
# Dataset Card for the args.me corpus
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Dataset Usage](#dataset-usage)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** https://zenodo.org/record/4139439
- **Repository:** https://git.webis.de/code-research/arguana/args/args-framework
- **Paper:** [Building an Argument Search Engine for the Web](https://webis.de/downloads/publications/papers/wachsmuth_2017f.pdf)
- **Leaderboard:** https://touche.webis.de/
- **Point of Contact:** [Webis Group](https://webis.de/people.html)
### Dataset Summary
The args.me corpus (version 1.0, cleaned) comprises 382 545 arguments crawled from four debate portals in the middle of 2019. The debate portals are Debatewise, IDebate.org, Debatepedia, and Debate.org. The arguments are extracted using heuristics that are designed for each debate portal.
### Dataset Usage
```python
import datasets
args = datasets.load_dataset('webis/args_me', 'corpus', streaming=True)
args_iterator = iter(args['train'])
for arg in args_iterator:
    print(arg['conclusion'])
    print(arg['id'])
    print(arg['argument'])
    print(arg['stance'])
    break
```
### Supported Tasks and Leaderboards
Document Retrieval, Argument Retrieval for Controversial Questions
### Languages
The args.me corpus is monolingual; it only includes English (mostly en-US) documents.
## Dataset Structure
### Data Instances
#### Corpus
```
{'conclusion': 'Science is the best!',
'id': 'd6517702-2019-04-18T12:36:24Z-00000-000',
'argument': 'Science is aright I guess, but Physical Education (P.E) is better. Think about it, you could sit in a classroom for and hour learning about molecular reconfiguration, or you could play football with your mates. Why would you want to learn about molecular reconfiguration anyway? I think the argument here would be based on, healthy mind or healthy body. With science being the healthy mind and P.E being the healthy body. To work this one out all you got to do is ask Steven Hawkins. Only 500 words',
'stance': 'CON'}
```
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[Creative Commons Attribution 4.0 International (CC BY 4.0)](https://creativecommons.org/licenses/by/4.0/)
### Citation Information
```
@dataset{yamen_ajjour_2020_4139439,
author = {Yamen Ajjour and
Henning Wachsmuth and
Johannes Kiesel and
Martin Potthast and
Matthias Hagen and
Benno Stein},
title = {args.me corpus},
month = oct,
year = 2020,
publisher = {Zenodo},
version = {1.0-cleaned},
doi = {10.5281/zenodo.4139439},
url = {https://doi.org/10.5281/zenodo.4139439}
}
```
|
jakartaresearch/inglish | 2022-08-19T15:23:15.000Z | [
"task_categories:translation",
"annotations_creators:machine-generated",
"language_creators:machine-generated",
"multilinguality:translation",
"size_categories:10K<n<100K",
"source_datasets:original",
"language:id",
"language:en",
"license:cc-by-4.0",
"indonesian",
"english",
"translation",
... | jakartaresearch | This dataset is built as a playground for beginner to make a translation model for Indonesian and English. | null | null | 0 | 58 | ---
annotations_creators:
- machine-generated
language:
- id
- en
language_creators:
- machine-generated
license:
- cc-by-4.0
multilinguality:
- translation
pretty_name: 'Inglish: Indonesian English Machine Translation Dataset'
size_categories:
- 10K<n<100K
source_datasets:
- original
tags:
- indonesian
- english
- translation
task_categories:
- translation
task_ids: []
---
# Dataset Card for Inglish: Indonesian English Translation Dataset
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:**
- **Repository:**
- **Paper:**
- **Leaderboard:**
- **Point of Contact:**
### Dataset Summary
The original data comes from the MSRP dataset; the translations were generated with Google Translate.
Feel free to check the translations, and open a new discussion if you find any errors.
### Supported Tasks and Leaderboards
Machine Translation
### Languages
English - Indonesian
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
Thanks to [@andreaschandra](https://github.com/andreaschandra) for adding this dataset. |
HighCWu/fill50k | 2023-02-15T15:45:27.000Z | [
"language:en",
"license:openrail",
"region:us"
] | HighCWu | null | null | null | 0 | 58 | ---
license: openrail
dataset_info:
features:
- name: image
dtype: image
- name: guide
dtype: image
- name: text
dtype: string
splits:
- name: train
num_bytes: 454411979
num_examples: 50000
download_size: 316021131
dataset_size: 454411979
language:
- en
pretty_name: Fill50K
---
# Dataset Card for Fill50K
## Dataset Description
- **Homepage:**
- **Repository:**
- **Paper:**
- **Leaderboard:**
- **Point of Contact:**
### Dataset Summary
This dataset is converted from the fill50k example dataset of [ControlNet](https://github.com/lllyasviel/ControlNet).
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
[fill50k.zip](https://huggingface.co/lllyasviel/ControlNet/blob/main/training/fill50k.zip)
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
[More Information Needed] |
minoosh/IEMOCAP_Text_dataset | 2023-05-16T11:37:14.000Z | [
"region:us"
] | minoosh | null | null | null | 0 | 58 | ---
dataset_info:
features:
- name: TURN_NAME
dtype: string
- name: emotion
dtype:
class_label:
names:
'0': ang
'1': hap
'2': neu
'3': sad
- name: transcription
dtype: string
splits:
- name: Session1
num_bytes: 100099
num_examples: 1085
- name: Session2
num_bytes: 105421
num_examples: 1023
- name: Session3
num_bytes: 104680
num_examples: 1151
- name: Session4
num_bytes: 99515
num_examples: 1031
- name: Session5
num_bytes: 121589
num_examples: 1241
download_size: 261307
dataset_size: 531304
---
# Dataset Card for "IEMOCAP_Text_dataset"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
julien040/hacker-news-posts | 2023-06-06T17:04:37.000Z | [
"size_categories:1M<n<10M",
"source_datasets:Hacker News",
"language:en",
"license:cc-by-nc-sa-4.0",
"hacker news",
"region:us"
] | julien040 | null | null | null | 0 | 58 | ---
license: cc-by-nc-sa-4.0
language:
- en
tags:
- hacker news
pretty_name: Hacker News stories dataset
size_categories:
- 1M<n<10M
source_datasets:
- Hacker News
---
# Hacker News Stories Dataset
This is a dataset containing approximately 4 million stories from Hacker News, exported to a CSV file. The dataset includes the following fields:
- `id` (int64): The unique identifier of the story.
- `title` (string): The title of the story.
- `url` (string): The URL of the story.
- `score` (int64): The score of the story.
- `time` (int64): The time the story was posted, in Unix time.
- `comments` (int64): The number of comments on the story.
- `author` (string): The username of the person who posted the story.
## Accessing the Dataset
The dataset can be accessed through [Hugging Face Datasets](https://huggingface.co/datasets/julien040/hacker-news-posts). You can download the dataset in CSV format or use the Hugging Face Datasets library to load the dataset directly in your Python code.
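Since `time` is stored as Unix time, it converts with the standard library; a sketch on an illustrative row shaped like the fields above (the values mimic the first Hacker News story, but are not taken from the dataset itself):

```python
from datetime import datetime, timezone

# An illustrative row shaped like the dataset's fields.
row = {
    'id': 1,
    'title': 'Y Combinator',
    'url': 'http://ycombinator.com',
    'score': 57,
    'time': 1160418111,   # Unix time (seconds since the epoch)
    'comments': 15,
    'author': 'pg',
}

# Convert the Unix timestamp to a timezone-aware datetime.
posted = datetime.fromtimestamp(row['time'], tz=timezone.utc)
print(posted.isoformat())  # 2006-10-09T18:21:51+00:00
```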
## License
The dataset is made available under the [Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License](https://creativecommons.org/licenses/by-nc-sa/4.0/).
## Disclaimer
The dataset is provided as is, without warranty of any kind, express or implied. The owner of the dataset makes no representations or warranties, express or implied, regarding the dataset or its use. The owner of the dataset will not be liable for any damages arising out of or in connection with the use of the dataset.
## Updates
The dataset will be updated regularly to include new stories from Hacker News. |
d0rj/wikisum | 2023-06-16T11:24:25.000Z | [
"task_categories:summarization",
"multilinguality:monolingual",
"size_categories:10K<n<100K",
"source_datasets:original",
"language:en",
"license:unknown",
"abstractive-summarization",
"wiki",
"abstractive",
"arxiv:1801.10198",
"region:us"
] | d0rj | null | null | null | 1 | 58 | ---
dataset_info:
features:
- name: url
dtype: string
- name: title
dtype: string
- name: summary
dtype: string
- name: article
dtype: string
- name: step_headers
dtype: string
splits:
- name: train
num_bytes: 315275236
num_examples: 35775
- name: test
num_bytes: 17584216
num_examples: 2000
- name: validation
num_bytes: 17880851
num_examples: 2000
download_size: 194202865
dataset_size: 350740303
license:
- unknown
task_categories:
- summarization
language:
- en
multilinguality:
- monolingual
tags:
- abstractive-summarization
- wiki
- abstractive
pretty_name: 'WikiSum: Coherent Summarization Dataset for Efficient Human-Evaluation'
size_categories:
- 10K<n<100K
source_datasets:
- original
paperswithcode_id: wikisum
---
# wikisum
## Dataset Description
- **Homepage:** https://registry.opendata.aws/wikisum/
- **Repository:** https://github.com/tensorflow/tensor2tensor/tree/master/tensor2tensor/data_generators/wikisum
- **Paper:** [Generating Wikipedia by Summarizing Long Sequences](https://arxiv.org/abs/1801.10198)
- **Leaderboard:** [More Information Needed]
- **Point of Contact:** [nachshon](mailto:nachshon@amazon.com)
|
nampdn-ai/tiny-webtext | 2023-08-27T00:23:37.000Z | [
"task_categories:text-generation",
"size_categories:1M<n<10M",
"source_datasets:tiiuae/falcon-refinedweb",
"language:en",
"license:mit",
"arxiv:2306.01116",
"doi:10.57967/hf/1024",
"region:us"
] | nampdn-ai | null | null | null | 16 | 58 | ---
license: mit
viewer: true
task_categories:
- text-generation
language:
- en
pretty_name: Tiny WebText
size_categories:
- 1M<n<10M
source_datasets:
- tiiuae/falcon-refinedweb
---
# Tiny WebText
The Tiny WebText dataset is designed to help models learn about perception on web text while neutralizing the bias of the source text using critical thinking methods. By providing a rich and diverse set of texts, I aim to improve the ability of models to understand and analyze information in a more objective and unbiased manner.
This dataset can be used to train and evaluate natural language processing and machine learning models, with the goal of improving their perception and critical thinking skills. It is a valuable resource for researchers and developers, especially those working in the fields of machine learning and data engineering.
The dataset is built by augmenting a subset of [Falcon-RefinedWeb](https://arxiv.org/abs/2306.01116) with text generated by the [Falcon-7B-Instruct](https://huggingface.co/tiiuae/falcon-7b-instruct) model, improving the quality and diversity of the texts.
I welcome any feedback or contributions. Thank you for your interest in my work! |
pvduy/rm_hh_helpful_only | 2023-06-28T15:48:24.000Z | [
"region:us"
] | pvduy | null | null | null | 0 | 58 | ---
dataset_info:
features:
- name: prompt
dtype: string
- name: selected
dtype: string
- name: rejected
dtype: string
splits:
- name: train
num_bytes: 157748871
num_examples: 105434
- name: test
num_bytes: 8917763
num_examples: 6221
download_size: 99480494
dataset_size: 166666634
---
# Dataset Card for "rm_hh_helpful_only"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
jjzha/gnehm | 2023-09-07T12:14:33.000Z | [
"language:de",
"license:cc-by-nc-sa-4.0",
"region:us"
] | jjzha | null | null | null | 0 | 58 | ---
license: cc-by-nc-sa-4.0
language: de
---
This is the skill dataset created by:
```
@inproceedings{gnehm-etal-2022-fine,
title = "Fine-Grained Extraction and Classification of Skill Requirements in {G}erman-Speaking Job Ads",
author = {Gnehm, Ann-sophie and
B{\"u}hlmann, Eva and
Buchs, Helen and
Clematide, Simon},
booktitle = "Proceedings of the Fifth Workshop on Natural Language Processing and Computational Social Science (NLP+CSS)",
month = nov,
year = "2022",
address = "Abu Dhabi, UAE",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2022.nlpcss-1.2",
doi = "10.18653/v1/2022.nlpcss-1.2",
pages = "14--24",
}
```
There are document delimiters indicated by `idx`.
Number of samples (sentences):
- train: 19889
- dev: 2332
- test: 2557
Sources:
- Swiss Job Market Monitor (SJMM): https://www.swissubase.ch/en/
Type of tags:
- `B`/`I` tags (e.g. `B-ICT`) and `O` tags under the key `tags_skill`
Sample:
```
{
"idx": 198,
"tokens": ["-", "besitzen", "fundierte", "Anwenderkenntnisse", "in", "MS-Office"],
"tags_skill": ["O", "O", "O", "O", "O", "B-ICT"]
}
``` |
Vezora/news_seniment_gpt_alpacaformat | 2023-08-14T04:49:12.000Z | [
"license:apache-2.0",
"region:us"
] | Vezora | null | null | null | 0 | 58 | ---
license: apache-2.0
---
This dataset is an Alpaca-formatted version of "oliverwang15/news_with_gpt_instructions" (https://huggingface.co/datasets/oliverwang15/news_with_gpt_instructions):
20k examples of sentiment grading by GPT (unclear which model), used to train FinGPT v3.
For Alpaca-LoRA users:
Modules you can target with LoRA: "gate_proj", "down_proj", "up_proj", "q_proj", "v_proj", "k_proj", "o_proj"
Most LoRA models use: "q_proj", "v_proj", "k_proj", "o_proj"
Platypus, which got terrific results, used: "gate_proj", "down_proj", "up_proj"
Research on targeting specific modules still needs to be done, but if you don't want to train over a previously trained model's newly learned abilities, target different modules than the ones used in the original training.
Hyperparameters used by Platypus (13B and 70B models):

| Hyperparameter | Platypus2-13B / 70B |
|---|---|
| batch size | 16 |
| micro batch size | 1 |
| num epochs | 1 |
| learning rate | 4e-4 / 3e-4 |
| cutoff len | 4096 |
| lora rank | 16 |
| lora alpha | 16 |
| lora dropout | 0.05 |
| lora target modules | gate_proj, down_proj, up_proj |
| train on inputs | False |
| add eos token | False |
| group by length | False |
| prompt template | alpaca |
| lr scheduler | cosine |
| warmup steps | 100 |
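As a sketch, the Platypus settings above map onto a PEFT `LoraConfig` roughly like this (how the config is wired into your trainer is an assumption and depends on your training script):

```python
from peft import LoraConfig

# Mirrors the Platypus LoRA hyperparameters above; pass this to
# get_peft_model() (or your trainer of choice) with the base model.
lora_config = LoraConfig(
    r=16,                                              # lora rank
    lora_alpha=16,
    lora_dropout=0.05,
    target_modules=["gate_proj", "down_proj", "up_proj"],
    bias="none",
    task_type="CAUSAL_LM",
)
```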
I would recommend using a batch size of 4-10 and a cutoff length of ≤ 2048 to avoid VRAM issues on a single 24 GB card, together with `load_in_4bit`, NormalFloat (NF4) quantization, and bf16.
If training with oobabooga, you must edit the "training.py" file in the "oobabooga_windows\text-generation-webui\modules" folder: on line 49, change the standard modules to the modules you would like to target.
If training with Alpaca-LoRA, pass the `--lora_target_modules` argument when running the `train.py` command. To load in 4-bit, you must edit the train file, adding 4-bit loading, bf16, and NormalFloat quantization.
|
yentinglin/traditional_mandarin_instructions | 2023-10-07T08:45:00.000Z | [
"task_categories:conversational",
"task_categories:text-generation",
"task_categories:text2text-generation",
"size_categories:100K<n<1M",
"language:zh",
"license:cc-by-nc-4.0",
"arxiv:2305.13711",
"arxiv:2104.09864",
"region:us"
] | yentinglin | null | null | null | 10 | 58 | ---
license: cc-by-nc-4.0
task_categories:
- conversational
- text-generation
- text2text-generation
language:
- zh
pretty_name: Traditional Chinese Instruction-tuning Set
size_categories:
- 100K<n<1M
---
# Language Models for Taiwanese Culture
<p align="center">
✍️ <a href="https://huggingface.co/spaces/yentinglin/Taiwan-LLaMa2" target="_blank">Online Demo</a>
•
🤗 <a href="https://huggingface.co/yentinglin" target="_blank">HF Repo</a> • 🐦 <a href="https://twitter.com/yentinglin56" target="_blank">Twitter</a> • 📃 <a href="https://arxiv.org/pdf/2305.13711.pdf" target="_blank">[Paper Coming Soon]</a>
• 👨️ <a href="https://yentingl.com/" target="_blank">Yen-Ting Lin</a>
<br/><br/>
<img src="https://www.csie.ntu.edu.tw/~miulab/taiwan-llama/logo-v2.png" width="100"> <br/>
<a href="https://github.com/tatsu-lab/stanford_alpaca/blob/main/LICENSE">
<img src="https://img.shields.io/badge/Code%20License-Apache_2.0-green.svg"></a>
<a href="https://github.com/tatsu-lab/stanford_alpaca/blob/main/DATA_LICENSE">
<img src="https://img.shields.io/badge/Data%20License-CC%20By%20NC%204.0-red.svg"></a>
<br/>
</p>
## Overview
Taiwan-LLaMa is a full parameter fine-tuned model based on LLaMa 2 for Traditional Mandarin applications.
**Taiwan-LLaMa v1.0** was pretrained on over 5 billion tokens and instruction-tuned on over 490k conversations, both in Traditional Mandarin.
## Demo
A live demonstration of the model can be accessed at [Hugging Face Spaces](https://huggingface.co/spaces/yentinglin/Taiwan-LLaMa2).
## Key Features
1. **Traditional Mandarin Support**: The model is fine-tuned to understand and generate text in Traditional Mandarin, making it suitable for Taiwanese culture and related applications.
2. **Instruction-Tuned**: Further fine-tuned on conversational data to offer context-aware and instruction-following responses.
3. **Performance on Vicuna Benchmark**: Taiwan-LLaMa's relative performance on the Vicuna Benchmark is measured against models such as GPT-4 and ChatGPT, while remaining particularly optimized for Taiwanese culture.
4. **Flexible Customization**: Advanced options for controlling the model's behavior like system prompt, temperature, top-p, and top-k are available in the demo.
## Work in progress
- [ ] **Improved pretraining**: A refined pretraining process (e.g. more data from Taiwan, better training strategies) is under development, aiming to improve model performance on Taiwanese cultural content.
- [ ] **Extended max length**: Using the RoPE mechanism described in [the paper](https://arxiv.org/abs/2104.09864), the model's context length will be extended from 4k to 8k tokens.
## Taiwanese Culture Examples
| Question | ChatGPT | **Taiwan LLaMa v1.0** | **Taiwan LLaMa v0.0 (w/o pretraining)** | Llama2_Traditional_Chinese_13b_Chat |
|:-|:-|:-|:-|:-|
| 以惡地景觀著名的是高雄哪一個景點? | 以惡地地景著名的是高雄市的旗津。 | 高雄以擁有惡地景觀著名的景點是田寮月世界。✅ | 以惡地景觀著名的是高雄市的旗津島。旗津島是一個小島,... | 高雄市內有多個以惡地景觀著名的景點,其中以下面幾個最為著名: 1. 高雄捷運紅線沿途景點:高雄市政府為了推廣高雄 |
| 唐鳳是誰? | 唐鳳(Audrey Tang)是台灣的政治家、社會活動家和軟體工程師。她於2016年擔任台灣行政院數位政務委員,是台灣政府歷史上首位沒有政黨背景的政務官。唐鳳在軟體工程領域有豐富的經驗。 ✅ | 唐鳳是政治評論家、資訊社會學研究者,也是台灣第一位擔任政府數位政委的人。 ✅ | 唐鳳(Tang Feng)是一位著名的中國演員和歌手。他出生於1974年12月17日,出生地點在中國北京。唐鳳在1995年開始他的演藝生涯,並在同年發行了他的首張專輯《我不孤單》。此後,他發行了許多受歡迎的歌曲,並在多部電視劇和電影中擔任主角。唐鳳因其精湛的演技和迷人的歌聲而被譽為“亞洲魅力男王”。他的音樂獲得了多個獎項,包括“華表最佳新男演唱獎”、“華表最佳歌曲獎”和“華表最佳舞蹈獎”等。 | 唐鳳是唐朝時期的一位著名文學家,他的代表作品包括《陰騭》、《狂凡子》等。他的文學風格具有強烈的個性和創意,被譽為「唐代文學之父」。 |
| NTU 在哪? | NTU位於新加坡,具體地址是50 Nanyang Avenue, Singapore 639798。 | NTU 是國立臺灣大學的縮寫。它是臺灣的一所頂尖研究型大學,位於臺北市。 ✅| NTU 是台灣的一所大學,位於台北市。✅ | NTU 的總部位於台灣新北市板橋區,佔地面積約為1,000公頃。|
| 甚麼是22K? | "22K" 可能指的是金的純度,尤其是在珠寶和金飾方面。 | "22K"是指台灣的大學畢業生的起薪,大約是新台幣22,000元。這個詞被用來形容大學畢業生的起薪偏低,並且在就業市場上對於大學畢業生的需求不足。✅| 22K是指溫度尺度上的22,000K(千度)。1K代表一度,即溫度差距,是攝氏溫標(攝氏度)的一百萬分之一。因此,22K等於-253.15°C(攝氏溫標)或-423.67°F(華氏溫標)。 | 22K是一種簡寫,即22,000。它是一個數字,表示22,000個單位。|
| 肉圓要不要加香菜? | 這是個個人口味的問題,有些人喜歡在肉圓中加入香菜,因為香菜的清香可以增添風味;而有些人則不喜歡香菜的味道,覺得它會蓋過其他食材的味道。所以要不要加香菜,可以根據個人喜好來決定。 | 這是一個個人喜好的問題,有些人喜歡在肉圓裡加香菜,有些人不喜歡。**在台灣**,肉圓加香菜是一種常見的做法,這是一種在肉圓上加香菜的做法。然而,在其他國這是一個文化差異的問題。😊 | 肉圓加香菜可以增添一些芳香和美味的口感,如果你喜歡香菜的味道,可以試試加入肉圓中。香菜通常在製作過程中炒香,可以增添一些芳香和美味的味道。如果你想要加入香菜,可以在製作過程中炒香一些香菜,然後將其混入肉圓中。肉圓加香菜可以增添一些芳香和美味的口感,如果你喜歡香菜的味道,可以試試加入肉圓中。| 加香菜的肉圓味道更好,但是不加香菜的肉圓也很好吃。如果你喜歡香菜的味道,那麼加上香菜就好了。如果你不喜歡香菜,那麼不加也不會影響肉 |
## Model
We provide a number of model checkpoints that we trained. Please find them on Hugging Face [here](https://huggingface.co/models?search=taiwan-llama). Here are some quick links to the checkpoints that are finetuned from LLaMa 2:
| **Model** | **13B** |
|--------------------------------------------------------|-------------------------------------------------------------------------------------------------------------------------------|
| **Taiwan-LLaMa v1.0** (_better for Taiwanese Culture_) | 🤗 <a href="https://huggingface.co/yentinglin/Taiwan-LLaMa-v1.0" target="_blank">yentinglin/Taiwan-LLaMa-v1.0</a> |
| Taiwan-LLaMa v0.9 (partial instruction set) | 🤗 <a href="https://huggingface.co/yentinglin/Taiwan-LLaMa-v0.9" target="_blank">yentinglin/Taiwan-LLaMa-v0.9</a> |
| Taiwan-LLaMa v0.0 (no Traditional Mandarin pretraining) | 🤗 <a href="https://huggingface.co/yentinglin/Taiwan-LLaMa-v0.0" target="_blank">yentinglin/Taiwan-LLaMa-v0.0</a> |
## Data
Here are some quick links to the datasets that we used to train the models:
| **Dataset** | **Link** |
|---------------------------------|-------------------------------------------------------------------------------------------------------------------------------|
| **Instruction-tuning** | 🤗 <a href="https://huggingface.co/datasets/yentinglin/traditional_mandarin_instructions" target="_blank">yentinglin/traditional_mandarin_instructions</a> |
| Traditional Mandarin Pretraining | 🤗 <a href="https://huggingface.co/datasets/yentinglin/zh_TW_c4" target="_blank">yentinglin/zh_TW_c4</a> |
## Architecture
Taiwan-LLaMa is based on LLaMa 2, leveraging transformer architecture, <a href="https://github.com/Dao-AILab/flash-attention" target="_blank">flash attention 2</a>, and bfloat16.
It includes:
* Pretraining Phase: Pretrained on a vast corpus of over 5 billion tokens extracted from Common Crawl in Traditional Mandarin.
* Fine-tuning Phase: Further instruction-tuned on over 490k multi-turn conversations to enable more instruction-following and context-aware responses.
## Generic Capabilities on Vicuna Benchmark
The benchmark data is translated into Traditional Mandarin to evaluate general capabilities.
<img src="./images/zhtw_vicuna_bench_chatgptbaseline.png" width="700">
The scores are calculated with ChatGPT as the baseline, represented as 100%. The other values show the relative performance of different models compared to ChatGPT.
| Language Model | Relative Score (%) |
|-------------------------------------|--------------------|
| GPT-4 | 102.59% |
| ChatGPT | 100.00% |
| **Taiwan-LLaMa v1.0** | 76.76% |
| Claude-Instant-1.2 | 74.04% |
| Llama2_Traditional_Chinese_13b_Chat | 56.21% |
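As a sketch of how such relative scores can be derived — assuming a simple ratio of summed per-question judge scores against the ChatGPT baseline; the exact aggregation used for the table above is not specified here:

```python
def relative_score(model_scores, baseline_scores):
    """Relative performance (%) of a model against a baseline fixed at 100%."""
    return 100.0 * sum(model_scores) / sum(baseline_scores)

# Hypothetical per-question judge scores, for illustration only.
model_scores = [7.5, 8.0, 6.5]
chatgpt_scores = [9.0, 8.5, 9.5]
rel = relative_score(model_scores, chatgpt_scores)
```

By construction, scoring the baseline against itself yields exactly 100%.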
## How to deploy the model on my own machine?
We recommend hosting models with [🤗 Text Generation Inference](https://github.com/huggingface/text-generation-inference). Please see their [license](https://github.com/huggingface/text-generation-inference/blob/main/LICENSE) for details on usage and limitations.
```bash
bash run_text_generation_inference.sh "yentinglin/Taiwan-LLaMa" NUM_GPUS DIR_TO_SAVE_MODEL PORT MAX_INPUT_LEN MODEL_MAX_LEN
```
Prompt format follows vicuna-v1.1 template:
```
A chat between a curious user and an artificial intelligence assistant. The assistant gives helpful, detailed, and polite answers to the user's questions. USER: {user} ASSISTANT:
```
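For single-turn requests, the template above can be filled in with a small helper like this (a minimal sketch; multi-turn formatting would append previous USER/ASSISTANT turns before the final `ASSISTANT:`):

```python
SYSTEM = (
    "A chat between a curious user and an artificial intelligence assistant. "
    "The assistant gives helpful, detailed, and polite answers to the user's questions."
)

def build_prompt(user_message: str) -> str:
    """Format a single-turn prompt in the vicuna-v1.1 template."""
    return f"{SYSTEM} USER: {user_message} ASSISTANT:"

prompt = build_prompt("NTU 在哪?")
```

The model's generation is expected to continue directly after the trailing `ASSISTANT:` marker.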
## Setup development environment
```bash
conda create -n taiwan-llama python=3.10 -y
conda activate taiwan-llama
pip install -r requirements.txt
```
## Citations
If you use our code, data, or models in your research, please cite this repository. You can use the following BibTeX entry:
```bibtex
@inproceedings{lin-chen-2023-llm,
title = "{LLM}-Eval: Unified Multi-Dimensional Automatic Evaluation for Open-Domain Conversations with Large Language Models",
author = "Lin, Yen-Ting and Chen, Yun-Nung",
booktitle = "Proceedings of the 5th Workshop on NLP for Conversational AI (NLP4ConvAI 2023)",
month = jul,
year = "2023",
address = "Toronto, Canada",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.nlp4convai-1.5",
pages = "47--58"
}
@misc{taiwanllama,
author={Lin, Yen-Ting and Chen, Yun-Nung},
title={Taiwanese-Aligned Language Models based on Meta-Llama2},
year={2023},
url={https://github.com/adamlin120/Taiwan-LLaMa},
note={Code and models available at https://github.com/adamlin120/Taiwan-LLaMa},
}
```
## Collaborate With Us
If you are interested in contributing to the development of Traditional Mandarin language models, exploring new applications, or leveraging Taiwan-LLaMa for your specific needs, please don't hesitate to contact us. We welcome collaborations from academia, industry, and individual contributors.
## License
The code in this project is licensed under the Apache 2.0 License - see the [LICENSE](LICENSE) file for details.
The models included in this project are licensed under the LLAMA 2 Community License. See the [LLAMA2 License](https://github.com/facebookresearch/llama/blob/main/LICENSE) for full details.
## OpenAI Data Acknowledgment
The data included in this project were generated using OpenAI's models and are subject to OpenAI's Terms of Use. Please review [OpenAI's Terms of Use](https://openai.com/policies/terms-of-use) for details on usage and limitations.
## Acknowledgements
We thank [Meta LLaMA team](https://github.com/facebookresearch/llama) and [Vicuna team](https://github.com/lm-sys/FastChat) for their open-source efforts in democratizing large language models.
|
dim/gpt_roleplay_realm | 2023-08-31T15:26:55.000Z | [
"license:cc-by-nd-4.0",
"region:us"
] | dim | null | null | null | 0 | 58 | ---
license: cc-by-nd-4.0
dataset_info:
features:
- name: conversation
sequence: string
- name: name
dtype: string
- name: char_description
dtype: string
splits:
- name: train
num_bytes: 26058509
num_examples: 8700
download_size: 8069442
dataset_size: 26058509
---
|
dim/dolphin_ru_3k | 2023-08-31T20:24:23.000Z | [
"region:us"
] | dim | null | null | null | 0 | 58 | ---
dataset_info:
features:
- name: instruction
dtype: string
- name: input
dtype: string
- name: output
dtype: string
splits:
- name: train
num_bytes: 8490195.387822216
num_examples: 3000
download_size: 4148079
dataset_size: 8490195.387822216
---
# Dataset Card for "dolphin_ru_3k"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
dim/runne_prompts | 2023-09-02T16:20:49.000Z | [
"region:us"
] | dim | null | null | null | 0 | 58 | ---
dataset_info:
features:
- name: text
dtype: string
- name: parsed_entities
dtype: string
splits:
- name: train
num_bytes: 2636744
num_examples: 537
download_size: 1142735
dataset_size: 2636744
---
# Dataset Card for "runne_prompts"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
liy140/multidomain-measextract-corpus | 2023-09-12T08:09:43.000Z | [
"task_categories:token-classification",
"size_categories:n<1K",
"language:en",
"chemistry",
"biology",
"region:us"
] | liy140 | null | null | null | 0 | 58 | ---
configs:
- config_name: measeval
data_files:
- split: train
path: measeval_paragraph_level_no_spans_train.json
- split: val
path: measeval_paragraph_level_no_spans_val.json
- split: test
path: measeval_paragraph_level_no_spans_test.json
- config_name: bm
data_files:
- split: train
path: bm_paragraph_level_no_spans_train.json
- split: val
path: bm_paragraph_level_no_spans_val.json
- split: test
path: bm_paragraph_level_no_spans_test.json
- config_name: msp
data_files:
- split: train
path: msp_paragraph_level_no_spans_train.json
- split: val
path: msp_paragraph_level_no_spans_val.json
- split: test
path: msp_paragraph_level_no_spans_test.json
- config_name: all
data_files:
- split: train
path:
- measeval_paragraph_level_no_spans_train.json
- bm_paragraph_level_no_spans_train.json
- msp_paragraph_level_no_spans_train.json
- split: val
path:
- measeval_paragraph_level_no_spans_val.json
- bm_paragraph_level_no_spans_val.json
- msp_paragraph_level_no_spans_val.json
- split: test
path:
- measeval_paragraph_level_no_spans_test.json
- bm_paragraph_level_no_spans_test.json
- msp_paragraph_level_no_spans_test.json
task_categories:
- token-classification
language:
- en
tags:
- chemistry
- biology
size_categories:
- n<1K
---
# A Multi-Domain Corpus for Measurement Extraction (Seq2Seq variant)
A detailed description of corpus creation can be found [here](https://aclanthology.org/2023.bionlp-1.1/).
This dataset contains the training, validation, and test data for each of the three datasets `measeval`, `bm`, and `msp`. The `measeval` and `msp` datasets were adapted from the [MeasEval (Harper et al., 2021)](https://github.com/harperco/MeasEval) and the [Material Synthesis Procedural (Mysore et al., 2019)](https://github.com/olivettigroup/annotated-materials-syntheses) corpora, respectively.
This repository aggregates extractions to paragraph level for `msp` and `measeval`. Labels are given in JSON format in preparation for seq2seq training.
# How to load
```python
from datasets import load_dataset
# Only train, all domains
train_dataset = load_dataset("liy140/multidomain-measextract-corpus", "all", split="train")
# All measeval data
measeval_dataset = load_dataset("liy140/multidomain-measextract-corpus", "measeval", split=["train", "val", "test"])
```
# Create Seq2Seq samples
A single standard instruction is used, so that a prompt like the one below can be generated by merging the text and extraction columns:
```
### Instruction
You are an expert at extracting quantity, units and their related context from text.
Given a paragraph below identify each quantity and its related unit and related context, i.e. the measured entity and measured property if they exist.
### Paragraph
The H/H+ transition in the MC09 model occurs near 1.4Rp. If we replace the gray approximation with the full solar spectrum in this model, the H/H+ transition moves higher to 2–3Rp. This is because photons with different energies penetrate to different depths in the atmosphere, extending the heating profile in altitude around the heating peak. This is why the temperature at the 30 nbar level in the C2 model is 3800 K and not 1000 K. In order to test the effect of higher temperatures in the lower thermosphere, we extended the MC09 model to p0 = 1 μbar (with T0 = 1300 K) and again used the full solar spectrum for heating and ionization. With these conditions, the H/H+ transition moves up to 3.4Rp, in agreement with the C2 model. We conclude that the unrealistic boundary conditions and the gray approximation adopted by Murray-Clay et al. (2009) and Guo (2011) lead to an underestimated overall density of H and an overestimated ion fraction. Thus their density profiles yield a H Lyman α transit depth of the order of 2–3% i.e., not significantly higher than the visible transit depth.
### Extractions
[
{
"docId": "S0019103513005058-3154",
"measured_entity": "Soluble sulfate",
"measured_property": null,
"quantity": "1.3 \u00b1 0.5 wt.%",
"unit": "wt.%"
},
{
"docId": "S0019103513005058-3154",
"measured_entity": "soil",
"measured_property": "perchlorate (ClO4-)",
"quantity": "\u223c0.5 wt.%",
"unit": "wt.%"
},
{
"docId": "S0019103513005058-3154",
"measured_entity": "perchlorate-sensitive electrode",
"measured_property": "sensitive to nitrate",
"quantity": "1000 times",
"unit": "times"
},
{
"docId": "S0019103513005058-3154",
"measured_entity": "Viking 1 and Viking 2 landing sites",
"measured_property": "perchlorate",
"quantity": "\u2a7d1.6%",
"unit": "%"
},
{
"docId": "S0019103513005058-3154",
"measured_entity": "martian meteorite EETA79001",
"measured_property": "Native perchlorate",
"quantity": "<1 ppm by mass",
"unit": "ppm by mass"
}
]
```
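The merge described above can be sketched as follows. The `paragraph`/`extractions` argument names and the `source`/`target` output keys are assumptions for illustration; check the actual dataset schema for the real column names:

```python
import json

INSTRUCTION = (
    "### Instruction\n"
    "You are an expert at extracting quantity, units and their related context from text.\n"
    "Given a paragraph below identify each quantity and its related unit and related context, "
    "i.e. the measured entity and measured property if they exist."
)

def build_sample(paragraph: str, extractions: list) -> dict:
    """Merge a paragraph and its extraction records into a seq2seq (source, target) pair."""
    source = f"{INSTRUCTION}\n\n### Paragraph\n{paragraph}\n\n### Extractions\n"
    target = json.dumps(extractions, indent=4)
    return {"source": source, "target": target}
```

The resulting `source` string is the model input and `target` is the JSON string the model is trained to emit.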
# Citation
```
@inproceedings{li-etal-2023-multi-source,
title = "Multi-Source (Pre-)Training for Cross-Domain Measurement, Unit and Context Extraction",
author = "Li, Yueling and
Martschat, Sebastian and
Ponzetto, Simone Paolo",
booktitle = "The 22nd Workshop on Biomedical Natural Language Processing and BioNLP Shared Tasks",
month = jul,
year = "2023",
address = "Toronto, Canada",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.bionlp-1.1",
pages = "1--25",
abstract = "We present a cross-domain approach for automated measurement and context extraction based on pre-trained language models. We construct a multi-source, multi-domain corpus and train an end-to-end extraction pipeline. We then apply multi-source task-adaptive pre-training and fine-tuning to benchmark the cross-domain generalization capability of our model. Further, we conceptualize and apply a task-specific error analysis and derive insights for future work. Our results suggest that multi-source training leads to the best overall results, while single-source training yields the best results for the respective individual domain. While our setup is successful at extracting quantity values and units, more research is needed to improve the extraction of contextual entities. We make the cross-domain corpus used in this work available online.",
}
```
|
miojizzy/mhr_recognize_datasets | 2023-09-12T06:57:09.000Z | [
"region:us"
] | miojizzy | Monster Hunter Rise images and labels. | null | null | 0 | 58 | Entry not found |
msinankhan1/India_Tax_FAQs | 2023-09-14T12:12:26.000Z | [
"region:us"
] | msinankhan1 | null | null | null | 0 | 58 | Entry not found |
deven367/babylm-100M-children-stories | 2023-09-16T05:17:25.000Z | [
"region:us"
] | deven367 | null | null | null | 0 | 58 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: valid
path: data/valid-*
- split: test
path: data/test-*
dataset_info:
features:
- name: text
dtype: string
splits:
- name: train
num_bytes: 17676869
num_examples: 76758
- name: valid
num_bytes: 1425137
num_examples: 5996
- name: test
num_bytes: 1804421
num_examples: 7959
download_size: 12749002
dataset_size: 20906427
---
# Dataset Card for "babylm-100M-children-stories"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
joemaatouk/EZPass-Dataset | 2023-09-28T15:55:43.000Z | [
"region:us"
] | joemaatouk | Dataset featuring key metadata from EZ-Pass invoices | @article{2016arXiv160605250R,
author = {{Rajpurkar}, Pranav and {Zhang}, Jian and {Lopyrev},
Konstantin and {Liang}, Percy},
title = "{SQuAD: 100,000+ Questions for Machine Comprehension of Text}",
journal = {arXiv e-prints},
year = 2016,
eid = {arXiv:1606.05250},
pages = {arXiv:1606.05250},
archivePrefix = {arXiv},
eprint = {1606.05250},
} | null | 0 | 58 | Entry not found |
Nbardy/Synthetic-V5 | 2023-09-28T22:58:05.000Z | [
"region:us"
] | Nbardy | null | null | null | 0 | 58 | ---
dataset_info:
features:
- name: image
dtype: image
- name: text
dtype: string
splits:
- name: train
num_bytes: 33446850985.02
num_examples: 30748
download_size: 10273614700
dataset_size: 33446850985.02
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "Synthetic-V5"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
AustinMcMike/steve_jobs_quotes_2 | 2023-10-02T18:44:48.000Z | [
"region:us"
] | AustinMcMike | null | null | null | 0 | 58 | |
europarl_bilingual | 2022-11-03T16:31:58.000Z | [
"task_categories:translation",
"annotations_creators:found",
"language_creators:found",
"multilinguality:translation",
"size_categories:100K<n<1M",
"source_datasets:original",
"language:bg",
"language:cs",
"language:da",
"language:de",
"language:el",
"language:en",
"language:es",
"language... | null | A parallel corpus extracted from the European Parliament web site by Philipp Koehn (University of Edinburgh). The main intended use is to aid statistical machine translation research. | null | null | 8 | 57 | ---
annotations_creators:
- found
language_creators:
- found
language:
- bg
- cs
- da
- de
- el
- en
- es
- et
- fi
- fr
- hu
- it
- lt
- lv
- nl
- pl
- pt
- ro
- sk
- sl
- sv
license:
- unknown
multilinguality:
- translation
size_categories:
- 100K<n<1M
source_datasets:
- original
task_categories:
- translation
task_ids: []
paperswithcode_id: null
pretty_name: europarl-bilingual
dataset_info:
- config_name: bg-cs
features:
- name: translation
dtype:
translation:
languages:
- bg
- cs
splits:
- name: train
num_bytes: 175372131
num_examples: 402657
download_size: 77543700
dataset_size: 175372131
- config_name: bg-da
features:
- name: translation
dtype:
translation:
languages:
- bg
- da
splits:
- name: train
num_bytes: 169901335
num_examples: 393449
download_size: 161209111
dataset_size: 169901335
- config_name: bg-de
features:
- name: translation
dtype:
translation:
languages:
- bg
- de
splits:
- name: train
num_bytes: 179830695
num_examples: 393298
download_size: 173031810
dataset_size: 179830695
- config_name: bg-el
features:
- name: translation
dtype:
translation:
languages:
- bg
- el
splits:
- name: train
num_bytes: 232659899
num_examples: 377341
download_size: 164911397
dataset_size: 232659899
- config_name: bg-en
features:
- name: translation
dtype:
translation:
languages:
- bg
- en
splits:
- name: train
num_bytes: 175002243
num_examples: 408290
download_size: 175210123
dataset_size: 175002243
- config_name: bg-es
features:
- name: translation
dtype:
translation:
languages:
- bg
- es
splits:
- name: train
num_bytes: 175608108
num_examples: 388226
download_size: 167299422
dataset_size: 175608108
- config_name: bg-et
features:
- name: translation
dtype:
translation:
languages:
- bg
- et
splits:
- name: train
num_bytes: 169828337
num_examples: 400712
download_size: 74382173
dataset_size: 169828337
- config_name: bg-fi
features:
- name: translation
dtype:
translation:
languages:
- bg
- fi
splits:
- name: train
num_bytes: 173345926
num_examples: 396624
download_size: 159647184
dataset_size: 173345926
- config_name: bg-fr
features:
- name: translation
dtype:
translation:
languages:
- bg
- fr
splits:
- name: train
num_bytes: 179518097
num_examples: 393644
download_size: 173290519
dataset_size: 179518097
- config_name: bg-hu
features:
- name: translation
dtype:
translation:
languages:
- bg
- hu
splits:
- name: train
num_bytes: 173346636
num_examples: 382773
download_size: 77741287
dataset_size: 173346636
- config_name: bg-it
features:
- name: translation
dtype:
translation:
languages:
- bg
- it
splits:
- name: train
num_bytes: 178372027
num_examples: 377822
download_size: 167706004
dataset_size: 178372027
- config_name: bg-lt
features:
- name: translation
dtype:
translation:
languages:
- bg
- lt
splits:
- name: train
num_bytes: 168242178
num_examples: 392554
download_size: 74614251
dataset_size: 168242178
- config_name: bg-lv
features:
- name: translation
dtype:
translation:
languages:
- bg
- lv
splits:
- name: train
num_bytes: 173267674
num_examples: 398355
download_size: 74564662
dataset_size: 173267674
- config_name: bg-nl
features:
- name: translation
dtype:
translation:
languages:
- bg
- nl
splits:
- name: train
num_bytes: 174737553
num_examples: 388273
download_size: 170765314
dataset_size: 174737553
- config_name: bg-pl
features:
- name: translation
dtype:
translation:
languages:
- bg
- pl
splits:
- name: train
num_bytes: 175528692
num_examples: 395269
download_size: 78179477
dataset_size: 175528692
- config_name: bg-pt
features:
- name: translation
dtype:
translation:
languages:
- bg
- pt
splits:
- name: train
num_bytes: 174578955
num_examples: 388972
download_size: 170237403
dataset_size: 174578955
- config_name: bg-ro
features:
- name: translation
dtype:
translation:
languages:
- bg
- ro
splits:
- name: train
num_bytes: 175218264
num_examples: 389381
download_size: 60489220
dataset_size: 175218264
- config_name: bg-sk
features:
- name: translation
dtype:
translation:
languages:
- bg
- sk
splits:
- name: train
num_bytes: 170977227
num_examples: 393815
download_size: 77065166
dataset_size: 170977227
- config_name: bg-sl
features:
- name: translation
dtype:
translation:
languages:
- bg
- sl
splits:
- name: train
num_bytes: 159371534
num_examples: 380231
download_size: 72025259
dataset_size: 159371534
- config_name: bg-sv
features:
- name: translation
dtype:
translation:
languages:
- bg
- sv
splits:
- name: train
num_bytes: 172562375
num_examples: 398236
download_size: 160015782
dataset_size: 172562375
- config_name: cs-da
features:
- name: translation
dtype:
translation:
languages:
- cs
- da
splits:
- name: train
num_bytes: 189814103
num_examples: 618055
download_size: 174829844
dataset_size: 189814103
- config_name: cs-de
features:
- name: translation
dtype:
translation:
languages:
- cs
- de
splits:
- name: train
num_bytes: 187747987
num_examples: 568589
download_size: 186471876
dataset_size: 187747987
- config_name: cs-el
features:
- name: translation
dtype:
translation:
languages:
- cs
- el
splits:
- name: train
num_bytes: 289333860
num_examples: 599489
download_size: 178443921
dataset_size: 289333860
- config_name: cs-en
features:
- name: translation
dtype:
translation:
languages:
- cs
- en
splits:
- name: train
num_bytes: 196378085
num_examples: 647095
download_size: 188756690
dataset_size: 196378085
- config_name: cs-es
features:
- name: translation
dtype:
translation:
languages:
- cs
- es
splits:
- name: train
num_bytes: 201972536
num_examples: 619774
download_size: 180848885
dataset_size: 201972536
- config_name: cs-et
features:
- name: translation
dtype:
translation:
languages:
- cs
- et
splits:
- name: train
num_bytes: 189852839
num_examples: 636512
download_size: 87913231
dataset_size: 189852839
- config_name: cs-fi
features:
- name: translation
dtype:
translation:
languages:
- cs
- fi
splits:
- name: train
num_bytes: 193370836
num_examples: 619320
download_size: 173216683
dataset_size: 193370836
- config_name: cs-fr
features:
- name: translation
dtype:
translation:
languages:
- cs
- fr
splits:
- name: train
num_bytes: 207043213
num_examples: 628200
download_size: 186873132
dataset_size: 207043213
- config_name: cs-hu
features:
- name: translation
dtype:
translation:
languages:
- cs
- hu
splits:
- name: train
num_bytes: 201392624
num_examples: 616160
download_size: 91341961
dataset_size: 201392624
- config_name: cs-it
features:
- name: translation
dtype:
translation:
languages:
- cs
- it
splits:
- name: train
num_bytes: 203150534
num_examples: 607017
download_size: 181266237
dataset_size: 203150534
- config_name: cs-lt
features:
- name: translation
dtype:
translation:
languages:
- cs
- lt
splits:
- name: train
num_bytes: 189504979
num_examples: 624292
download_size: 88260876
dataset_size: 189504979
- config_name: cs-lv
features:
- name: translation
dtype:
translation:
languages:
- cs
- lv
splits:
- name: train
num_bytes: 193888740
num_examples: 627873
download_size: 88126869
dataset_size: 193888740
- config_name: cs-nl
features:
- name: translation
dtype:
translation:
languages:
- cs
- nl
splits:
- name: train
num_bytes: 199512564
num_examples: 618414
download_size: 184381636
dataset_size: 199512564
- config_name: cs-pl
features:
- name: translation
dtype:
translation:
languages:
- cs
- pl
splits:
- name: train
num_bytes: 197967454
num_examples: 621387
download_size: 91806300
dataset_size: 197967454
- config_name: cs-pt
features:
- name: translation
dtype:
translation:
languages:
- cs
- pt
splits:
- name: train
num_bytes: 197178140
num_examples: 609729
download_size: 183745721
dataset_size: 197178140
- config_name: cs-ro
features:
- name: translation
dtype:
translation:
languages:
- cs
- ro
splits:
- name: train
num_bytes: 127321661
num_examples: 392085
download_size: 73245197
dataset_size: 127321661
- config_name: cs-sk
features:
- name: translation
dtype:
translation:
languages:
- cs
- sk
splits:
- name: train
num_bytes: 196186957
num_examples: 636128
download_size: 90623958
dataset_size: 196186957
- config_name: cs-sl
features:
- name: translation
dtype:
translation:
languages:
- cs
- sl
splits:
- name: train
num_bytes: 179909545
num_examples: 611624
download_size: 85558670
dataset_size: 179909545
- config_name: cs-sv
features:
- name: translation
dtype:
translation:
languages:
- cs
- sv
splits:
- name: train
num_bytes: 194656792
num_examples: 631544
download_size: 173672259
dataset_size: 194656792
- config_name: da-de
features:
- name: translation
dtype:
translation:
languages:
- da
- de
splits:
- name: train
num_bytes: 624355083
num_examples: 1928414
download_size: 276778385
dataset_size: 624355083
- config_name: da-el
features:
- name: translation
dtype:
translation:
languages:
- da
- el
splits:
- name: train
num_bytes: 604008313
num_examples: 1280579
download_size: 265542591
dataset_size: 604008313
- config_name: da-en
features:
- name: translation
dtype:
translation:
languages:
- da
- en
splits:
- name: train
num_bytes: 612701093
num_examples: 1991647
download_size: 279497322
dataset_size: 612701093
- config_name: da-es
features:
- name: translation
dtype:
translation:
languages:
- da
- es
splits:
- name: train
num_bytes: 631311642
num_examples: 1943931
download_size: 271357896
dataset_size: 631311642
- config_name: da-et
features:
- name: translation
dtype:
translation:
languages:
- da
- et
splits:
- name: train
num_bytes: 182908097
num_examples: 635018
download_size: 171538628
dataset_size: 182908097
- config_name: da-fi
features:
- name: translation
dtype:
translation:
languages:
- da
- fi
splits:
- name: train
num_bytes: 599820497
num_examples: 1917260
download_size: 263430295
dataset_size: 599820497
- config_name: da-fr
features:
- name: translation
dtype:
translation:
languages:
- da
- fr
splits:
- name: train
num_bytes: 658108095
num_examples: 1992590
download_size: 277504353
dataset_size: 658108095
- config_name: da-hu
features:
- name: translation
dtype:
translation:
languages:
- da
- hu
splits:
- name: train
num_bytes: 196114245
num_examples: 617519
download_size: 174981657
dataset_size: 196114245
- config_name: da-it
features:
- name: translation
dtype:
translation:
languages:
- da
- it
splits:
- name: train
num_bytes: 630400040
num_examples: 1876703
download_size: 271654671
dataset_size: 630400040
- config_name: da-lt
features:
- name: translation
dtype:
translation:
languages:
- da
- lt
splits:
- name: train
num_bytes: 184071192
num_examples: 614923
download_size: 171931855
dataset_size: 184071192
- config_name: da-lv
features:
- name: translation
dtype:
translation:
languages:
- da
- lv
splits:
- name: train
num_bytes: 188638250
num_examples: 627809
download_size: 171781368
dataset_size: 188638250
- config_name: da-nl
features:
- name: translation
dtype:
translation:
languages:
- da
- nl
splits:
- name: train
num_bytes: 634339405
num_examples: 1987498
download_size: 275140635
dataset_size: 634339405
- config_name: da-pl
features:
- name: translation
dtype:
translation:
languages:
- da
- pl
splits:
- name: train
num_bytes: 193218656
num_examples: 642544
download_size: 175344681
dataset_size: 193218656
- config_name: da-pt
features:
- name: translation
dtype:
translation:
languages:
- da
- pt
splits:
- name: train
num_bytes: 631413013
num_examples: 1930454
download_size: 274286241
dataset_size: 631413013
- config_name: da-ro
features:
- name: translation
dtype:
translation:
languages:
- da
- ro
splits:
- name: train
num_bytes: 124974166
num_examples: 388156
download_size: 156965207
dataset_size: 124974166
- config_name: da-sk
features:
- name: translation
dtype:
translation:
languages:
- da
- sk
splits:
- name: train
num_bytes: 190277240
num_examples: 621907
download_size: 174378230
dataset_size: 190277240
- config_name: da-sl
features:
- name: translation
dtype:
translation:
languages:
- da
- sl
splits:
- name: train
num_bytes: 173968152
num_examples: 595944
download_size: 169356730
dataset_size: 173968152
- config_name: da-sv
features:
- name: translation
dtype:
translation:
languages:
- da
- sv
splits:
- name: train
num_bytes: 567189130
num_examples: 1871171
download_size: 263342660
dataset_size: 567189130
- config_name: de-el
features:
- name: translation
dtype:
translation:
languages:
- de
- el
splits:
- name: train
num_bytes: 603303137
num_examples: 1223026
download_size: 277232265
dataset_size: 603303137
- config_name: de-en
features:
- name: translation
dtype:
translation:
languages:
- de
- en
splits:
- name: train
num_bytes: 641864487
num_examples: 1961119
download_size: 291376506
dataset_size: 641864487
- config_name: de-es
features:
- name: translation
dtype:
translation:
languages:
- de
- es
splits:
- name: train
num_bytes: 651057814
num_examples: 1887879
download_size: 283096221
dataset_size: 651057814
- config_name: de-et
features:
- name: translation
dtype:
translation:
languages:
- de
- et
splits:
- name: train
num_bytes: 181554876
num_examples: 578248
download_size: 183218377
dataset_size: 181554876
- config_name: de-fi
features:
- name: translation
dtype:
translation:
languages:
- de
- fi
splits:
- name: train
num_bytes: 621960938
num_examples: 1871185
download_size: 275244245
dataset_size: 621960938
- config_name: de-fr
features:
- name: translation
dtype:
translation:
languages:
- de
- fr
splits:
- name: train
num_bytes: 680963340
num_examples: 1942666
download_size: 289325334
dataset_size: 680963340
- config_name: de-hu
features:
- name: translation
dtype:
translation:
languages:
- de
- hu
splits:
- name: train
num_bytes: 193068884
num_examples: 563571
download_size: 186625855
dataset_size: 193068884
- config_name: de-it
features:
- name: translation
dtype:
translation:
languages:
- de
- it
splits:
- name: train
num_bytes: 653857504
num_examples: 1832989
download_size: 283411719
dataset_size: 653857504
- config_name: de-lt
features:
- name: translation
dtype:
translation:
languages:
- de
- lt
splits:
- name: train
num_bytes: 182429076
num_examples: 565892
download_size: 183552115
dataset_size: 182429076
- config_name: de-lv
features:
- name: translation
dtype:
translation:
languages:
- de
- lv
splits:
- name: train
num_bytes: 186374102
num_examples: 573226
download_size: 183437158
dataset_size: 186374102
- config_name: de-nl
features:
- name: translation
dtype:
translation:
languages:
- de
- nl
splits:
- name: train
num_bytes: 655711533
num_examples: 1934111
download_size: 286849380
dataset_size: 655711533
- config_name: de-pl
features:
- name: translation
dtype:
translation:
languages:
- de
- pl
splits:
- name: train
num_bytes: 189642761
num_examples: 579166
download_size: 187004630
dataset_size: 189642761
- config_name: de-pt
features:
- name: translation
dtype:
translation:
languages:
- de
- pt
splits:
- name: train
num_bytes: 654723289
num_examples: 1884176
download_size: 286068045
dataset_size: 654723289
- config_name: de-ro
features:
- name: translation
dtype:
translation:
languages:
- de
- ro
splits:
- name: train
num_bytes: 133686126
num_examples: 385663
download_size: 168794955
dataset_size: 133686126
- config_name: de-sk
features:
- name: translation
dtype:
translation:
languages:
- de
- sk
splits:
- name: train
num_bytes: 187484752
num_examples: 569381
download_size: 186001546
dataset_size: 187484752
- config_name: de-sl
features:
- name: translation
dtype:
translation:
languages:
- de
- sl
splits:
- name: train
num_bytes: 171891826
num_examples: 546212
download_size: 180994167
dataset_size: 171891826
- config_name: de-sv
features:
- name: translation
dtype:
translation:
languages:
- de
- sv
splits:
- name: train
num_bytes: 590635137
num_examples: 1842026
download_size: 275145356
dataset_size: 590635137
- config_name: el-en
features:
- name: translation
dtype:
translation:
languages:
- el
- en
splits:
- name: train
num_bytes: 606689426
num_examples: 1292180
download_size: 279571396
dataset_size: 606689426
- config_name: el-es
features:
- name: translation
dtype:
translation:
languages:
- el
- es
splits:
- name: train
num_bytes: 621773509
num_examples: 1272383
download_size: 271592910
dataset_size: 621773509
- config_name: el-et
features:
- name: translation
dtype:
translation:
languages:
- el
- et
splits:
- name: train
num_bytes: 282330974
num_examples: 599915
download_size: 175257825
dataset_size: 282330974
- config_name: el-fi
features:
- name: translation
dtype:
translation:
languages:
- el
- fi
splits:
- name: train
num_bytes: 583209381
num_examples: 1227612
download_size: 263682672
dataset_size: 583209381
- config_name: el-fr
features:
- name: translation
dtype:
translation:
languages:
- el
- fr
splits:
- name: train
num_bytes: 637660521
num_examples: 1290796
download_size: 277664049
dataset_size: 637660521
- config_name: el-hu
features:
- name: translation
dtype:
translation:
languages:
- el
- hu
splits:
- name: train
num_bytes: 293591416
num_examples: 586250
download_size: 178679940
dataset_size: 293591416
- config_name: el-it
features:
- name: translation
dtype:
translation:
languages:
- el
- it
splits:
- name: train
num_bytes: 619754868
num_examples: 1231222
download_size: 271890467
dataset_size: 619754868
- config_name: el-lt
features:
- name: translation
dtype:
translation:
languages:
- el
- lt
splits:
- name: train
num_bytes: 281773875
num_examples: 590850
download_size: 175584581
dataset_size: 281773875
- config_name: el-lv
features:
- name: translation
dtype:
translation:
languages:
- el
- lv
splits:
- name: train
num_bytes: 287747485
num_examples: 596929
download_size: 175479598
dataset_size: 287747485
- config_name: el-nl
features:
- name: translation
dtype:
translation:
languages:
- el
- nl
splits:
- name: train
num_bytes: 619747333
num_examples: 1277297
download_size: 275234928
dataset_size: 619747333
- config_name: el-pl
features:
- name: translation
dtype:
translation:
languages:
- el
- pl
splits:
- name: train
num_bytes: 291216179
num_examples: 591069
download_size: 179121800
dataset_size: 291216179
- config_name: el-pt
features:
- name: translation
dtype:
translation:
languages:
- el
- pt
splits:
- name: train
num_bytes: 619089974
num_examples: 1261188
download_size: 274510323
dataset_size: 619089974
- config_name: el-ro
features:
- name: translation
dtype:
translation:
languages:
- el
- ro
splits:
- name: train
num_bytes: 186445257
num_examples: 372839
download_size: 160638758
dataset_size: 186445257
- config_name: el-sk
features:
- name: translation
dtype:
translation:
languages:
- el
- sk
splits:
- name: train
num_bytes: 290180513
num_examples: 600684
download_size: 178030033
dataset_size: 290180513
- config_name: el-sl
features:
- name: translation
dtype:
translation:
languages:
- el
- sl
splits:
- name: train
num_bytes: 269700597
num_examples: 579109
download_size: 172981018
dataset_size: 269700597
- config_name: el-sv
features:
- name: translation
dtype:
translation:
languages:
- el
- sv
splits:
- name: train
num_bytes: 598841855
num_examples: 1273743
download_size: 264310725
dataset_size: 598841855
- config_name: en-es
features:
- name: translation
dtype:
translation:
languages:
- en
- es
splits:
- name: train
num_bytes: 645806091
num_examples: 2009073
download_size: 285275775
dataset_size: 645806091
- config_name: en-et
features:
- name: translation
dtype:
translation:
languages:
- en
- et
splits:
- name: train
num_bytes: 190057019
num_examples: 651236
download_size: 185547113
dataset_size: 190057019
- config_name: en-fi
features:
- name: translation
dtype:
translation:
languages:
- en
- fi
splits:
- name: train
num_bytes: 612796933
num_examples: 1969624
download_size: 277526569
dataset_size: 612796933
- config_name: en-fr
features:
- name: translation
dtype:
translation:
languages:
- en
- fr
splits:
- name: train
num_bytes: 674922213
num_examples: 2051014
download_size: 291576418
dataset_size: 674922213
- config_name: en-hu
features:
- name: translation
dtype:
translation:
languages:
- en
- hu
splits:
- name: train
num_bytes: 200219937
num_examples: 625178
download_size: 189011893
dataset_size: 200219937
- config_name: en-it
features:
- name: translation
dtype:
translation:
languages:
- en
- it
splits:
- name: train
num_bytes: 649121845
num_examples: 1946253
download_size: 285912672
dataset_size: 649121845
- config_name: en-lt
features:
- name: translation
dtype:
translation:
languages:
- en
- lt
splits:
- name: train
num_bytes: 188689136
num_examples: 634284
download_size: 185983375
dataset_size: 188689136
- config_name: en-lv
features:
- name: translation
dtype:
translation:
languages:
- en
- lv
splits:
- name: train
num_bytes: 193229251
num_examples: 639318
download_size: 185755567
dataset_size: 193229251
- config_name: en-nl
features:
- name: translation
dtype:
translation:
languages:
- en
- nl
splits:
- name: train
num_bytes: 648639286
num_examples: 2027447
download_size: 289379311
dataset_size: 648639286
- config_name: en-pl
features:
- name: translation
dtype:
translation:
languages:
- en
- pl
splits:
- name: train
num_bytes: 197111400
num_examples: 631160
download_size: 189526719
dataset_size: 197111400
- config_name: en-pt
features:
- name: translation
dtype:
translation:
languages:
- en
- pt
splits:
- name: train
num_bytes: 649484557
num_examples: 2002943
download_size: 288280201
dataset_size: 649484557
- config_name: en-ro
features:
- name: translation
dtype:
translation:
languages:
- en
- ro
splits:
- name: train
num_bytes: 127546377
num_examples: 400356
download_size: 170919568
dataset_size: 127546377
- config_name: en-sk
features:
- name: translation
dtype:
translation:
languages:
- en
- sk
splits:
- name: train
num_bytes: 194301334
num_examples: 639958
download_size: 188348297
dataset_size: 194301334
- config_name: en-sl
features:
- name: translation
dtype:
translation:
languages:
- en
- sl
splits:
- name: train
num_bytes: 179662136
num_examples: 624803
download_size: 182965262
dataset_size: 179662136
- config_name: en-sv
features:
- name: translation
dtype:
translation:
languages:
- en
- sv
splits:
- name: train
num_bytes: 583167767
num_examples: 1892723
download_size: 277758290
dataset_size: 583167767
- config_name: es-et
features:
- name: translation
dtype:
translation:
languages:
- es
- et
splits:
- name: train
num_bytes: 194077194
num_examples: 618350
download_size: 177610241
dataset_size: 194077194
- config_name: es-fi
features:
- name: translation
dtype:
translation:
languages:
- es
- fi
splits:
- name: train
num_bytes: 624352744
num_examples: 1901596
download_size: 269239484
dataset_size: 624352744
- config_name: es-fr
features:
- name: translation
dtype:
translation:
languages:
- es
- fr
splits:
- name: train
num_bytes: 686124508
num_examples: 1982990
download_size: 283235952
dataset_size: 686124508
- config_name: es-hu
features:
- name: translation
dtype:
translation:
languages:
- es
- hu
splits:
- name: train
num_bytes: 207128226
num_examples: 604007
download_size: 181057656
dataset_size: 207128226
- config_name: es-it
features:
- name: translation
dtype:
translation:
languages:
- es
- it
splits:
- name: train
num_bytes: 659832078
num_examples: 1880982
download_size: 277595675
dataset_size: 659832078
- config_name: es-lt
features:
- name: translation
dtype:
translation:
languages:
- es
- lt
splits:
- name: train
num_bytes: 195424327
num_examples: 611082
download_size: 178003980
dataset_size: 195424327
- config_name: es-lv
features:
- name: translation
dtype:
translation:
languages:
- es
- lv
splits:
- name: train
num_bytes: 199870901
num_examples: 615496
download_size: 177847154
dataset_size: 199870901
- config_name: es-nl
features:
- name: translation
dtype:
translation:
languages:
- es
- nl
splits:
- name: train
num_bytes: 659669649
num_examples: 1954351
download_size: 281116315
dataset_size: 659669649
- config_name: es-pl
features:
- name: translation
dtype:
translation:
languages:
- es
- pl
splits:
- name: train
num_bytes: 203960308
num_examples: 609297
download_size: 181528675
dataset_size: 203960308
- config_name: es-pt
features:
- name: translation
dtype:
translation:
languages:
- es
- pt
splits:
- name: train
num_bytes: 660610724
num_examples: 1933321
download_size: 280106119
dataset_size: 660610724
- config_name: es-ro
features:
- name: translation
dtype:
translation:
languages:
- es
- ro
splits:
- name: train
num_bytes: 132099300
num_examples: 387653
download_size: 163044165
dataset_size: 132099300
- config_name: es-sk
features:
- name: translation
dtype:
translation:
languages:
- es
- sk
splits:
- name: train
num_bytes: 201711884
num_examples: 619027
download_size: 180405877
dataset_size: 201711884
- config_name: es-sl
features:
- name: translation
dtype:
translation:
languages:
- es
- sl
splits:
- name: train
num_bytes: 185526475
num_examples: 599168
download_size: 175277856
dataset_size: 185526475
- config_name: es-sv
features:
- name: translation
dtype:
translation:
languages:
- es
- sv
splits:
- name: train
num_bytes: 594313079
num_examples: 1826855
download_size: 269509656
dataset_size: 594313079
- config_name: et-fi
features:
- name: translation
dtype:
translation:
languages:
- et
- fi
splits:
- name: train
num_bytes: 186411056
num_examples: 620939
download_size: 169999062
dataset_size: 186411056
- config_name: et-fr
features:
- name: translation
dtype:
translation:
languages:
- et
- fr
splits:
- name: train
num_bytes: 199983753
num_examples: 630126
download_size: 183656005
dataset_size: 199983753
- config_name: et-hu
features:
- name: translation
dtype:
translation:
languages:
- et
- hu
splits:
- name: train
num_bytes: 195505472
num_examples: 628044
download_size: 88087464
dataset_size: 195505472
- config_name: et-it
features:
- name: translation
dtype:
translation:
languages:
- et
- it
splits:
- name: train
num_bytes: 195809060
num_examples: 607088
download_size: 178033859
dataset_size: 195809060
- config_name: et-lt
features:
- name: translation
dtype:
translation:
languages:
- et
- lt
splits:
- name: train
num_bytes: 181591116
num_examples: 622003
download_size: 85049307
dataset_size: 181591116
- config_name: et-lv
features:
- name: translation
dtype:
translation:
languages:
- et
- lv
splits:
- name: train
num_bytes: 186830733
num_examples: 637468
download_size: 84838432
dataset_size: 186830733
- config_name: et-nl
features:
- name: translation
dtype:
translation:
languages:
- et
- nl
splits:
- name: train
num_bytes: 192674741
num_examples: 621150
download_size: 181153226
dataset_size: 192674741
- config_name: et-pl
features:
- name: translation
dtype:
translation:
languages:
- et
- pl
splits:
- name: train
num_bytes: 191037236
num_examples: 639046
download_size: 88518099
dataset_size: 191037236
- config_name: et-pt
features:
- name: translation
dtype:
translation:
languages:
- et
- pt
splits:
- name: train
num_bytes: 191956598
num_examples: 616238
download_size: 180565606
dataset_size: 191956598
- config_name: et-ro
features:
- name: translation
dtype:
translation:
languages:
- et
- ro
splits:
- name: train
num_bytes: 122191834
num_examples: 389087
download_size: 70103283
dataset_size: 122191834
- config_name: et-sk
features:
- name: translation
dtype:
translation:
languages:
- et
- sk
splits:
- name: train
num_bytes: 188728692
num_examples: 634168
download_size: 87465164
dataset_size: 188728692
- config_name: et-sl
features:
- name: translation
dtype:
translation:
languages:
- et
- sl
splits:
- name: train
num_bytes: 172379502
num_examples: 609731
download_size: 82340544
dataset_size: 172379502
- config_name: et-sv
features:
- name: translation
dtype:
translation:
languages:
- et
- sv
splits:
- name: train
num_bytes: 189514511
num_examples: 656646
download_size: 170410673
dataset_size: 189514511
- config_name: fi-fr
features:
- name: translation
dtype:
translation:
languages:
- fi
- fr
splits:
- name: train
num_bytes: 658941046
num_examples: 1964126
download_size: 275801815
dataset_size: 658941046
- config_name: fi-hu
features:
- name: translation
dtype:
translation:
languages:
- fi
- hu
splits:
- name: train
num_bytes: 199866442
num_examples: 606348
download_size: 173436552
dataset_size: 199866442
- config_name: fi-it
features:
- name: translation
dtype:
translation:
languages:
- fi
- it
splits:
- name: train
num_bytes: 630203540
num_examples: 1845203
download_size: 269923911
dataset_size: 630203540
- config_name: fi-lt
features:
- name: translation
dtype:
translation:
languages:
- fi
- lt
splits:
- name: train
num_bytes: 187759286
num_examples: 613113
download_size: 170349480
dataset_size: 187759286
- config_name: fi-lv
features:
- name: translation
dtype:
translation:
languages:
- fi
- lv
splits:
- name: train
num_bytes: 192467707
num_examples: 616816
download_size: 170245682
dataset_size: 192467707
- config_name: fi-nl
features:
- name: translation
dtype:
translation:
languages:
- fi
- nl
splits:
- name: train
num_bytes: 629656948
num_examples: 1940808
download_size: 273354291
dataset_size: 629656948
- config_name: fi-pl
features:
- name: translation
dtype:
translation:
languages:
- fi
- pl
splits:
- name: train
num_bytes: 196692739
num_examples: 612689
download_size: 173878256
dataset_size: 196692739
- config_name: fi-pt
features:
- name: translation
dtype:
translation:
languages:
- fi
- pt
splits:
- name: train
num_bytes: 625813096
num_examples: 1885062
download_size: 272449208
dataset_size: 625813096
- config_name: fi-ro
features:
- name: translation
dtype:
translation:
languages:
- fi
- ro
splits:
- name: train
num_bytes: 128424133
num_examples: 391430
download_size: 155413895
dataset_size: 128424133
- config_name: fi-sk
features:
- name: translation
dtype:
translation:
languages:
- fi
- sk
splits:
- name: train
num_bytes: 194407846
num_examples: 623686
download_size: 172774950
dataset_size: 194407846
- config_name: fi-sl
features:
- name: translation
dtype:
translation:
languages:
- fi
- sl
splits:
- name: train
num_bytes: 177582459
num_examples: 596661
download_size: 167734483
dataset_size: 177582459
- config_name: fi-sv
features:
- name: translation
dtype:
translation:
languages:
- fi
- sv
splits:
- name: train
num_bytes: 590589773
num_examples: 1883314
download_size: 262138250
dataset_size: 590589773
- config_name: fr-hu
features:
- name: translation
dtype:
translation:
languages:
- fr
- hu
splits:
- name: train
num_bytes: 213345700
num_examples: 615791
download_size: 187084192
dataset_size: 213345700
- config_name: fr-it
features:
- name: translation
dtype:
translation:
languages:
- fr
- it
splits:
- name: train
num_bytes: 694854791
num_examples: 1943673
download_size: 283931275
dataset_size: 694854791
- config_name: fr-lt
features:
- name: translation
dtype:
translation:
languages:
- fr
- lt
splits:
- name: train
num_bytes: 200610624
num_examples: 620660
download_size: 184000557
dataset_size: 200610624
- config_name: fr-lv
features:
- name: translation
dtype:
translation:
languages:
- fr
- lv
splits:
- name: train
num_bytes: 205814878
num_examples: 626280
download_size: 183883161
dataset_size: 205814878
- config_name: fr-nl
features:
- name: translation
dtype:
translation:
languages:
- fr
- nl
splits:
- name: train
num_bytes: 693784423
num_examples: 2029551
download_size: 287389308
dataset_size: 693784423
- config_name: fr-pl
features:
- name: translation
dtype:
translation:
languages:
- fr
- pl
splits:
- name: train
num_bytes: 210001183
num_examples: 621402
download_size: 187532501
dataset_size: 210001183
- config_name: fr-pt
features:
- name: translation
dtype:
translation:
languages:
- fr
- pt
splits:
- name: train
num_bytes: 689789351
num_examples: 1980132
download_size: 286436517
dataset_size: 689789351
- config_name: fr-ro
features:
- name: translation
dtype:
translation:
languages:
- fr
- ro
splits:
- name: train
num_bytes: 133973522
num_examples: 387846
download_size: 169044065
dataset_size: 133973522
- config_name: fr-sk
features:
- name: translation
dtype:
translation:
languages:
- fr
- sk
splits:
- name: train
num_bytes: 207736993
num_examples: 631846
download_size: 186425028
dataset_size: 207736993
- config_name: fr-sl
features:
- name: translation
dtype:
translation:
languages:
- fr
- sl
splits:
- name: train
num_bytes: 190523805
num_examples: 606897
download_size: 181374508
dataset_size: 190523805
- config_name: fr-sv
features:
- name: translation
dtype:
translation:
languages:
- fr
- sv
splits:
- name: train
num_bytes: 623443554
num_examples: 1880390
download_size: 275743717
dataset_size: 623443554
- config_name: hu-it
features:
- name: translation
dtype:
translation:
languages:
- hu
- it
splits:
- name: train
num_bytes: 207768447
num_examples: 589563
download_size: 181442707
dataset_size: 207768447
- config_name: hu-lt
features:
- name: translation
dtype:
translation:
languages:
- hu
- lt
splits:
- name: train
num_bytes: 195366291
num_examples: 610298
download_size: 88456570
dataset_size: 195366291
- config_name: hu-lv
features:
- name: translation
dtype:
translation:
languages:
- hu
- lv
splits:
- name: train
num_bytes: 200475742
num_examples: 621101
download_size: 88300472
dataset_size: 200475742
- config_name: hu-nl
features:
- name: translation
dtype:
translation:
languages:
- hu
- nl
splits:
- name: train
num_bytes: 205617797
num_examples: 605806
download_size: 184560090
dataset_size: 205617797
- config_name: hu-pl
features:
- name: translation
dtype:
translation:
languages:
- hu
- pl
splits:
- name: train
num_bytes: 204095081
num_examples: 621820
download_size: 91932370
dataset_size: 204095081
- config_name: hu-pt
features:
- name: translation
dtype:
translation:
languages:
- hu
- pt
splits:
- name: train
num_bytes: 204293487
num_examples: 599639
download_size: 184009255
dataset_size: 204293487
- config_name: hu-ro
features:
- name: translation
dtype:
translation:
languages:
- hu
- ro
splits:
- name: train
num_bytes: 129428826
num_examples: 377239
download_size: 73491360
dataset_size: 129428826
- config_name: hu-sk
features:
- name: translation
dtype:
translation:
languages:
- hu
- sk
splits:
- name: train
num_bytes: 201934745
num_examples: 618247
download_size: 90886028
dataset_size: 201934745
- config_name: hu-sl
features:
- name: translation
dtype:
translation:
languages:
- hu
- sl
splits:
- name: train
num_bytes: 187295201
num_examples: 601671
download_size: 85848963
dataset_size: 187295201
- config_name: hu-sv
features:
- name: translation
dtype:
translation:
languages:
- hu
- sv
splits:
- name: train
num_bytes: 201010172
num_examples: 631872
download_size: 173806423
dataset_size: 201010172
- config_name: it-lt
features:
- name: translation
dtype:
translation:
languages:
- it
- lt
splits:
- name: train
num_bytes: 194730310
num_examples: 593003
download_size: 178347064
dataset_size: 194730310
- config_name: it-lv
features:
- name: translation
dtype:
translation:
languages:
- it
- lv
splits:
- name: train
num_bytes: 200106637
num_examples: 599394
download_size: 178242433
dataset_size: 200106637
- config_name: it-nl
features:
- name: translation
dtype:
translation:
languages:
- it
- nl
splits:
- name: train
num_bytes: 667554644
num_examples: 1919855
download_size: 281535603
dataset_size: 667554644
- config_name: it-pl
features:
- name: translation
dtype:
translation:
languages:
- it
- pl
splits:
- name: train
num_bytes: 204343831
num_examples: 594472
download_size: 181869443
dataset_size: 204343831
- config_name: it-pt
features:
- name: translation
dtype:
translation:
languages:
- it
- pt
splits:
- name: train
num_bytes: 662888825
num_examples: 1877432
download_size: 280344907
dataset_size: 662888825
- config_name: it-ro
features:
- name: translation
dtype:
translation:
languages:
- it
- ro
splits:
- name: train
num_bytes: 130259763
num_examples: 367904
download_size: 163411428
dataset_size: 130259763
- config_name: it-sk
features:
- name: translation
dtype:
translation:
languages:
- it
- sk
splits:
- name: train
num_bytes: 201935420
num_examples: 603467
download_size: 180786705
dataset_size: 201935420
- config_name: it-sl
features:
- name: translation
dtype:
translation:
languages:
- it
- sl
splits:
- name: train
num_bytes: 184859642
num_examples: 579968
download_size: 175764011
dataset_size: 184859642
- config_name: it-sv
features:
- name: translation
dtype:
translation:
languages:
- it
- sv
splits:
- name: train
num_bytes: 596242670
num_examples: 1766096
download_size: 269861070
dataset_size: 596242670
- config_name: lt-lv
features:
- name: translation
dtype:
translation:
languages:
- lt
- lv
splits:
- name: train
num_bytes: 188060955
num_examples: 621857
download_size: 85277601
dataset_size: 188060955
- config_name: lt-nl
features:
- name: translation
dtype:
translation:
languages:
- lt
- nl
splits:
- name: train
num_bytes: 193749342
num_examples: 613308
download_size: 181477191
dataset_size: 193749342
- config_name: lt-pl
features:
- name: translation
dtype:
translation:
languages:
- lt
- pl
splits:
- name: train
num_bytes: 191712803
num_examples: 617296
download_size: 88896956
dataset_size: 191712803
- config_name: lt-pt
features:
- name: translation
dtype:
translation:
languages:
- lt
- pt
splits:
- name: train
num_bytes: 191496681
num_examples: 603223
download_size: 180925582
dataset_size: 191496681
- config_name: lt-ro
features:
- name: translation
dtype:
translation:
languages:
- lt
- ro
splits:
- name: train
num_bytes: 122958316
num_examples: 384679
download_size: 70386543
dataset_size: 122958316
- config_name: lt-sk
features:
- name: translation
dtype:
translation:
languages:
- lt
- sk
splits:
- name: train
num_bytes: 189101772
num_examples: 622997
download_size: 87817035
dataset_size: 189101772
- config_name: lt-sl
features:
- name: translation
dtype:
translation:
languages:
- lt
- sl
splits:
- name: train
num_bytes: 173710681
num_examples: 602442
download_size: 82776077
dataset_size: 173710681
- config_name: lt-sv
features:
- name: translation
dtype:
translation:
languages:
- lt
- sv
splits:
- name: train
num_bytes: 188733924
num_examples: 628817
download_size: 170761964
dataset_size: 188733924
- config_name: lv-nl
features:
- name: translation
dtype:
translation:
languages:
- lv
- nl
splits:
- name: train
num_bytes: 198965150
num_examples: 618352
download_size: 181381125
dataset_size: 198965150
- config_name: lv-pl
features:
- name: translation
dtype:
translation:
languages:
- lv
- pl
splits:
- name: train
num_bytes: 198845485
num_examples: 638453
download_size: 88758761
dataset_size: 198845485
- config_name: lv-pt
features:
- name: translation
dtype:
translation:
languages:
- lv
- pt
splits:
- name: train
num_bytes: 198412113
num_examples: 615580
download_size: 180801629
dataset_size: 198412113
- config_name: lv-ro
features:
- name: translation
dtype:
translation:
languages:
- lv
- ro
splits:
- name: train
num_bytes: 127087848
num_examples: 390857
download_size: 70314589
dataset_size: 127087848
- config_name: lv-sk
features:
- name: translation
dtype:
translation:
languages:
- lv
- sk
splits:
- name: train
num_bytes: 194466502
num_examples: 629803
download_size: 87693678
dataset_size: 194466502
- config_name: lv-sl
features:
- name: translation
dtype:
translation:
languages:
- lv
- sl
splits:
- name: train
num_bytes: 178009999
num_examples: 607381
download_size: 82594307
dataset_size: 178009999
- config_name: lv-sv
features:
- name: translation
dtype:
translation:
languages:
- lv
- sv
splits:
- name: train
num_bytes: 194010201
num_examples: 643600
download_size: 170626197
dataset_size: 194010201
- config_name: nl-pl
features:
- name: translation
dtype:
translation:
languages:
- nl
- pl
splits:
- name: train
num_bytes: 202577192
num_examples: 612797
download_size: 185014758
dataset_size: 202577192
- config_name: nl-pt
features:
- name: translation
dtype:
translation:
languages:
- nl
- pt
splits:
- name: train
num_bytes: 666335238
num_examples: 1957189
download_size: 284348205
dataset_size: 666335238
- config_name: nl-ro
features:
- name: translation
dtype:
translation:
languages:
- nl
- ro
splits:
- name: train
num_bytes: 129250903
num_examples: 380736
download_size: 166521373
dataset_size: 129250903
- config_name: nl-sk
features:
- name: translation
dtype:
translation:
languages:
- nl
- sk
splits:
- name: train
num_bytes: 200169118
num_examples: 622650
download_size: 183925381
dataset_size: 200169118
- config_name: nl-sl
features:
- name: translation
dtype:
translation:
languages:
- nl
- sl
splits:
- name: train
num_bytes: 184588246
num_examples: 600023
download_size: 178917463
dataset_size: 184588246
- config_name: nl-sv
features:
- name: translation
dtype:
translation:
languages:
- nl
- sv
splits:
- name: train
num_bytes: 600924875
num_examples: 1870685
download_size: 273628695
dataset_size: 600924875
- config_name: pl-pt
features:
- name: translation
dtype:
translation:
languages:
- pl
- pt
splits:
- name: train
num_bytes: 202077773
num_examples: 608181
download_size: 184478728
dataset_size: 202077773
- config_name: pl-ro
features:
- name: translation
dtype:
translation:
languages:
- pl
- ro
splits:
- name: train
num_bytes: 130211235
num_examples: 389341
download_size: 73935732
dataset_size: 130211235
- config_name: pl-sk
features:
- name: translation
dtype:
translation:
languages:
- pl
- sk
splits:
- name: train
num_bytes: 198571926
num_examples: 624330
download_size: 91348753
dataset_size: 198571926
- config_name: pl-sl
features:
- name: translation
dtype:
translation:
languages:
- pl
- sl
splits:
- name: train
num_bytes: 182038291
num_examples: 600511
download_size: 86313727
dataset_size: 182038291
- config_name: pl-sv
features:
- name: translation
dtype:
translation:
languages:
- pl
- sv
splits:
- name: train
num_bytes: 197987693
num_examples: 657951
download_size: 174170909
dataset_size: 197987693
- config_name: pt-ro
features:
- name: translation
dtype:
translation:
languages:
- pt
- ro
splits:
- name: train
num_bytes: 128921939
num_examples: 381404
download_size: 165965899
dataset_size: 128921939
- config_name: pt-sk
features:
- name: translation
dtype:
translation:
languages:
- pt
- sk
splits:
- name: train
num_bytes: 197887183
num_examples: 611895
download_size: 183332222
dataset_size: 197887183
- config_name: pt-sl
features:
- name: translation
dtype:
translation:
languages:
- pt
- sl
splits:
- name: train
num_bytes: 182608021
num_examples: 593455
download_size: 178188570
dataset_size: 182608021
- config_name: pt-sv
features:
- name: translation
dtype:
translation:
languages:
- pt
- sv
splits:
- name: train
num_bytes: 598677198
num_examples: 1823402
download_size: 272500072
dataset_size: 598677198
- config_name: ro-sk
features:
- name: translation
dtype:
translation:
languages:
- ro
- sk
splits:
- name: train
num_bytes: 125917165
num_examples: 387839
download_size: 72817194
dataset_size: 125917165
- config_name: ro-sl
features:
- name: translation
dtype:
translation:
languages:
- ro
- sl
splits:
- name: train
num_bytes: 116060031
num_examples: 374859
download_size: 67766532
dataset_size: 116060031
- config_name: ro-sv
features:
- name: translation
dtype:
translation:
languages:
- ro
- sv
splits:
- name: train
num_bytes: 126359961
num_examples: 390133
download_size: 155757942
dataset_size: 126359961
- config_name: sk-sl
features:
- name: translation
dtype:
translation:
languages:
- sk
- sl
splits:
- name: train
num_bytes: 179514252
num_examples: 609698
download_size: 85175048
dataset_size: 179514252
- config_name: sk-sv
features:
- name: translation
dtype:
translation:
languages:
- sk
- sv
splits:
- name: train
num_bytes: 195200876
num_examples: 636353
download_size: 173202439
dataset_size: 195200876
- config_name: sl-sv
features:
- name: translation
dtype:
translation:
languages:
- sl
- sv
splits:
- name: train
num_bytes: 178446367
num_examples: 608740
download_size: 168196323
dataset_size: 178446367
---
# Dataset Card for europarl-bilingual
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [Statmt](http://www.statmt.org/europarl/)
- **Repository:** [OPUS Europarl](https://opus.nlpl.eu/Europarl.php)
- **Paper:** [Aclweb](https://www.aclweb.org/anthology/L12-1246/)
- **Leaderboard:** [Needs More Information]
- **Point of Contact:** [Needs More Information]
### Dataset Summary
A parallel corpus extracted from the European Parliament web site by Philipp Koehn (University of Edinburgh). The main intended use is to aid statistical machine translation research.
To load a language pair that is not listed in the configs, specify the two language codes explicitly. The valid pairs are listed on the homepage given in the Dataset Description: https://opus.nlpl.eu/Europarl.php
E.g.
`dataset = load_dataset("europarl_bilingual", lang1="fi", lang2="fr")`
### Supported Tasks and Leaderboards
Tasks: Machine Translation, Cross-Lingual Word Embedding (CLWE) Alignment
### Languages
- 21 languages, 211 bitexts
- total number of files: 207,775
- total number of tokens: 759.05M
- total number of sentence fragments: 30.32M
Every pair of the following languages is available:
- bg
- cs
- da
- de
- el
- en
- es
- et
- fi
- fr
- hu
- it
- lt
- lv
- nl
- pl
- pt
- ro
- sk
- sl
- sv
## Dataset Structure
### Data Instances
Here is an example from the en-fr pair:
```
{
'translation': {
'en': 'Resumption of the session',
'fr': 'Reprise de la session'
}
}
```
### Data Fields
- `translation`: a dictionary containing two strings paired with a key indicating the corresponding language.
### Data Splits
- `train`: only train split is provided. Authors did not provide a separation of examples in `train`, `dev` and `test`.
## Dataset Creation
### Curation Rationale
[Needs More Information]
### Source Data
#### Initial Data Collection and Normalization
[Needs More Information]
#### Who are the source language producers?
[Needs More Information]
### Annotations
#### Annotation process
[Needs More Information]
#### Who are the annotators?
[Needs More Information]
### Personal and Sensitive Information
[Needs More Information]
## Considerations for Using the Data
### Social Impact of Dataset
[Needs More Information]
### Discussion of Biases
[Needs More Information]
### Other Known Limitations
[Needs More Information]
## Additional Information
### Dataset Curators
[Needs More Information]
### Licensing Information
The data set comes with the same license
as the original sources.
Please, check the information about the source
that is given on
http://opus.nlpl.eu/Europarl-v8.php
### Citation Information
```
@InProceedings{TIEDEMANN12.463,
author = {Jörg Tiedemann},
title = {Parallel Data, Tools and Interfaces in OPUS},
booktitle = {Proceedings of the Eight International Conference on Language Resources and Evaluation (LREC'12)},
year = {2012},
month = {may},
date = {23-25},
address = {Istanbul, Turkey},
editor = {Nicoletta Calzolari (Conference Chair) and Khalid Choukri and Thierry Declerck and Mehmet Ugur Dogan and Bente Maegaard and Joseph Mariani and Jan Odijk and Stelios Piperidis},
publisher = {European Language Resources Association (ELRA)},
isbn = {978-2-9517408-7-7},
language = {english}
}
```
### Contributions
Thanks to [@lucadiliello](https://github.com/lucadiliello) for adding this dataset. |
Lucylulu/imdb | 2021-12-08T20:20:37.000Z | [
"region:us"
] | Lucylulu | null | null | null | 0 | 57 | Entry not found |
ccdv/patent-classification | 2022-10-22T09:25:36.000Z | [
"task_categories:text-classification",
"task_ids:multi-class-classification",
"task_ids:topic-classification",
"size_categories:10K<n<100K",
"language:en",
"long context",
"region:us"
] | ccdv | Patent Classification Dataset: a classification of Patents (9 classes).
It contains 9 unbalanced classes, 35k Patents and summaries divided into 3 splits: train (25k), val (5k) and test (5k).
Data are sampled from "BIGPATENT: A Large-Scale Dataset for Abstractive and Coherent Summarization." by Eva Sharma, Chen Li and Lu Wang
See: https://aclanthology.org/P19-1212.pdf
See: https://evasharma.github.io/bigpatent/ | null | null | 4 | 57 | ---
language: en
task_categories:
- text-classification
tags:
- long context
task_ids:
- multi-class-classification
- topic-classification
size_categories: 10K<n<100K
---
**Patent Classification: a classification of Patents and abstracts (9 classes).**
This dataset is intended for long-context classification (non-abstract documents are longer than 512 tokens). \
Data are sampled from "BIGPATENT: A Large-Scale Dataset for Abstractive and Coherent Summarization." by Eva Sharma, Chen Li and Lu Wang
* See: https://aclanthology.org/P19-1212.pdf
* See: https://evasharma.github.io/bigpatent/
It contains 9 unbalanced classes, 35k Patents and abstracts divided into 3 splits: train (25k), val (5k) and test (5k).
**Note that documents are uncased and space separated (by authors)**
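Since the 9 classes are unbalanced, a common mitigation at training time (an illustration, not part of the original card) is to weight the loss by inverse class frequency. A minimal sketch:

```python
from collections import Counter

def class_weights(labels, num_classes):
    """Inverse-frequency weights: rare classes get proportionally larger weights."""
    counts = Counter(labels)
    total = len(labels)
    return [total / (num_classes * counts.get(c, 1)) for c in range(num_classes)]

# Toy run with 3 classes standing in for the dataset's 9:
print(class_weights([0, 0, 1, 2, 2, 2], num_classes=3))
# → [1.0, 2.0, 0.6666666666666666]
```

The resulting list can be passed, for example, as the `weight` argument of `torch.nn.CrossEntropyLoss`.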
Compatible with [run_glue.py](https://github.com/huggingface/transformers/tree/master/examples/pytorch/text-classification) script:
```
export MODEL_NAME=roberta-base
export MAX_SEQ_LENGTH=512
python run_glue.py \
--model_name_or_path $MODEL_NAME \
--dataset_name ccdv/patent-classification \
--do_train \
--do_eval \
--max_seq_length $MAX_SEQ_LENGTH \
--per_device_train_batch_size 8 \
--gradient_accumulation_steps 4 \
--learning_rate 2e-5 \
--num_train_epochs 1 \
--max_eval_samples 500 \
--output_dir tmp/patent
``` |
openclimatefix/nimrod-uk-1km | 2022-06-08T14:49:03.000Z | [
"region:us"
] | openclimatefix | This dataset contains UK Nimrod rainfall radar data for 2016-2019 as used in the Skillful Precipitation Nowcasting Using Deep Generative Model of Radar paper by DeepMind. | @article{ravuris2021skillful,
author={Suman Ravuri and Karel Lenc and Matthew Willson and Dmitry Kangin and Remi Lam and Piotr Mirowski and Megan Fitzsimons and Maria Athanassiadou and Sheleem Kashem and Sam Madge and Rachel Prudden and Amol Mandhane and Aidan Clark and Andrew Brock and Karen Simonyan and Raia Hadsell and Niall Robinson and Ellen Clancy and Alberto Arribas and Shakir Mohamed},
title={Skillful Precipitation Nowcasting using Deep Generative Models of Radar},
journal={Nature},
volume={597},
pages={672--677},
year={2021}
} | null | 7 | 57 | [Needs More Information]
# Dataset Card for UK Nimrod 1km Rainfall Radar Dataset
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-instances)
- [Data Splits](#data-instances)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
## Dataset Description
- **Homepage:** https://github.com/deepmind/deepmind-research/tree/master/nowcasting
- **Repository:** https://huggingface.co/datasets/openclimatefix/nimrod-uk-1km
- **Paper:** [Skillful Precipitation Nowcasting using Deep Generative Models of Radar, Ravuri et al. 2021](https://www.nature.com/articles/s41586-021-03854-z)
- **Leaderboard:** [Needs More Information]
- **Point of Contact:** [Jacob Bieker](mailto:jacob@openclimatefix.org)
### Dataset Summary
This dataset contains UK Nimrod rainfall radar data for 2016-2019, as used in the Skillful Precipitation Nowcasting using Deep Generative Models of Radar paper by DeepMind. This dataset is an unofficial mirror of the open-sourced dataset available here: gs://dm-nowcasting/datasets/nowcasting_open_source_osgb/nimrod_osgb_1000m_yearly_splits/radar/20200718
### Supported Tasks and Leaderboards
[Needs More Information]
### Languages
[Needs More Information]
## Dataset Structure
### Data Instances
[Needs More Information]
### Data Fields
[Needs More Information]
### Data Splits
The train data is all days except the first of each month for 2016-2018. The validation is the first of every month for 2016-2018. The test data is all of 2019.
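The split rule above can be expressed as a small date predicate (an illustrative sketch; the dataset itself ships pre-split):

```python
from datetime import date

def nimrod_split(d: date) -> str:
    """Split rule as described above: all of 2019 is test; within 2016-2018
    the first of each month is validation, everything else is train."""
    if d.year == 2019:
        return "test"
    if d.year in (2016, 2017, 2018):
        return "validation" if d.day == 1 else "train"
    raise ValueError("date outside the 2016-2019 range covered by the dataset")

print(nimrod_split(date(2017, 3, 1)))   # validation
print(nimrod_split(date(2017, 3, 2)))   # train
print(nimrod_split(date(2019, 7, 15)))  # test
```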
## Dataset Creation
### Curation Rationale
This dataset was originally created for training a generative model for forecasting rainfall (precipitation nowcasting).
### Source Data
#### Initial Data Collection and Normalization
DeepMind initially collected the data from the UK Met Office and post processed it into this dataset.
#### Who are the source language producers?
[Needs More Information]
### Annotations
#### Annotation process
[Needs More Information]
#### Who are the annotators?
[Needs More Information]
### Personal and Sensitive Information
[Needs More Information]
## Considerations for Using the Data
### Social Impact of Dataset
[Needs More Information]
### Discussion of Biases
[Needs More Information]
### Other Known Limitations
[Needs More Information]
## Additional Information
### Dataset Curators
[Needs More Information]
### Licensing Information
The provided post-processed nowcasting dataset is licensed under a Creative Commons Attribution 4.0 International License and it contains public sector information licensed by the Met Office under the Open Government Licence v3.0.
### Citation Information
Cite DeepMind, and the authors of [Skillful Precipitation Nowcasting using Deep Generative Models of Radar, Ravuri et al. 2021](https://www.nature.com/articles/s41586-021-03854-z). |
codyburker/yelp_review_sampled | 2022-03-05T17:29:30.000Z | [
"region:us"
] | codyburker | null | null | null | 0 | 57 | Entry not found |
bongsoo/social_science_en_ko | 2022-10-05T00:09:30.000Z | [
"language:ko",
"license:apache-2.0",
"region:us"
] | bongsoo | null | null | null | 0 | 57 | ---
language:
- ko
license: apache-2.0
---
- 사회과학-en-ko 번역 말뭉치
|
ai4bharat/IndicXParaphrase | 2022-10-14T16:40:28.000Z | [
"region:us"
] | ai4bharat | \ | \ | null | 0 | 57 | Entry not found |
lucasmccabe/logiqa | 2023-02-08T01:51:31.000Z | [
"task_categories:question-answering",
"size_categories:1K<n<10K",
"language:en",
"region:us"
] | lucasmccabe | LogiQA is constructed from the logical comprehension problems from publically available questions of the National Civil Servants Examination of China, which are designed to test the civil servant candidates’ critical thinking and problem solving. This dataset includes the English versions only; the Chinese versions are available via the homepage/original source. | @article{liu2020logiqa,
title={Logiqa: A challenge dataset for machine reading comprehension with logical reasoning},
author={Liu, Jian and Cui, Leyang and Liu, Hanmeng and Huang, Dandan and Wang, Yile and Zhang, Yue},
journal={arXiv preprint arXiv:2007.08124},
year={2020}
} | null | 3 | 57 | ---
task_categories:
- question-answering
language:
- en
pretty_name: LogiQA
size_categories:
- 1K<n<10K
paperswithcode_id: logiqa
dataset_info:
features:
- name: context
dtype: string
- name: query
dtype: string
- name: options
sequence:
dtype: string
- name: correct_option
dtype: string
splits:
- name: train
num_examples: 7376
- name: validation
num_examples: 651
- name: test
num_examples: 651
---
# Dataset Card for LogiQA
## Dataset Description
- **Homepage:**
- **Repository:**
- **Paper:**
- **Leaderboard:**
- **Point of Contact:**
### Dataset Summary
LogiQA is constructed from the logical comprehension problems from publically available questions of the National Civil Servants Examination of China, which are designed to test the civil servant candidates’ critical thinking and problem solving. This dataset includes the English versions only; the Chinese versions are available via the homepage/original source.
## Dataset Structure
### Data Instances
An example from `train` looks as follows:
```
{'context': 'Continuous exposure to indoor fluorescent lights is beneficial to the health of hamsters with heart disease. One group of hamsters exposed to continuous exposure to fluorescent lights has an average lifespan that is 2.5% longer than another one of the same species but living in a black wall.',
'query': 'Which of the following questions was the initial motivation for conducting the above experiment?',
'options': ['Can hospital light therapy be proved to promote patient recovery?',
'Which one lives longer, the hamster living under the light or the hamster living in the dark?',
'What kind of illness does the hamster have?',
'Do some hamsters need a period of darkness?'],
'correct_option': 0}
```
### Data Fields
- `context`: a `string` feature.
- `query`: a `string` feature.
- `options`: a `list` feature containing `string` features.
- `correct_option`: a `string` feature.
### Data Splits
|train|validation|test|
|----:|---------:|---:|
| 7376| 651| 651|
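The `correct_option` value indexes into `options`; resolving it for the `train` example shown above (a small illustration using the fields from this card):

```python
example = {
    "options": [
        "Can hospital light therapy be proved to promote patient recovery?",
        "Which one lives longer, the hamster living under the light or the hamster living in the dark?",
        "What kind of illness does the hamster have?",
        "Do some hamsters need a period of darkness?",
    ],
    "correct_option": 0,
}
# The schema above types `correct_option` as a string, so cast defensively:
answer = example["options"][int(example["correct_option"])]
print(answer)  # → Can hospital light therapy be proved to promote patient recovery?
```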
## Additional Information
### Dataset Curators
The original LogiQA was produced by Jian Liu, Leyang Cui, Hanmeng Liu, Dandan Huang, Yile Wang, and Yue Zhang.
### Licensing Information
[More Information Needed]
### Citation Information
```
@article{liu2020logiqa,
title={Logiqa: A challenge dataset for machine reading comprehension with logical reasoning},
author={Liu, Jian and Cui, Leyang and Liu, Hanmeng and Huang, Dandan and Wang, Yile and Zhang, Yue},
journal={arXiv preprint arXiv:2007.08124},
year={2020}
}
```
### Contributions
[@lucasmccabe](https://github.com/lucasmccabe) added this dataset. |
MultiversexPeeps/AI_Faces | 2023-02-26T10:30:56.000Z | [
"task_categories:text-to-image",
"size_categories:1K<n<10K",
"language:en",
"license:creativeml-openrail-m",
"AI Faces",
"Photography",
"Stable Diffusion",
"region:us"
] | MultiversexPeeps | null | null | null | 2 | 57 | ---
license: creativeml-openrail-m
task_categories:
- text-to-image
language:
- en
tags:
- AI Faces
- Photography
- Stable Diffusion
pretty_name: AI Generated Faces
size_categories:
- 1K<n<10K
--- |
Shavindra/satellite-512 | 2023-03-05T14:59:41.000Z | [
"region:us"
] | Shavindra | null | null | null | 0 | 57 | ---
dataset_info:
features:
- name: image
dtype: image
- name: label
dtype: image
- name: pixel_values
dtype: image
splits:
- name: train
num_bytes: 332055030.0
num_examples: 304
download_size: 0
dataset_size: 332055030.0
---
# Dataset Card for "satellite-512"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
amaydle/npc-dialogue | 2023-03-25T09:11:29.000Z | [
"region:us"
] | amaydle | null | null | null | 4 | 57 | ---
dataset_info:
features:
- name: Name
dtype: string
- name: Biography
dtype: string
- name: Query
dtype: string
- name: Response
dtype: string
- name: Emotion
dtype: string
splits:
- name: train
num_bytes: 737058.9117493472
num_examples: 1723
- name: test
num_bytes: 82133.08825065274
num_examples: 192
download_size: 201559
dataset_size: 819192.0
---
# Dataset Card for "npc-dialogue"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
arbml/tashkeela | 2023-04-06T19:09:05.000Z | [
"region:us"
] | arbml | null | null | null | 2 | 57 | ---
dataset_info:
features:
- name: diacratized
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 1419585102
num_examples: 979982
- name: test
num_bytes: 78869542
num_examples: 54444
- name: dev
num_bytes: 78863352
num_examples: 54443
download_size: 747280703
dataset_size: 1577317996
---
# Dataset Card for "tashkeela"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
atasoglu/flickr8k-dataset | 2023-05-28T12:12:43.000Z | [
"task_categories:image-to-text",
"task_categories:text-to-image",
"size_categories:1K<n<10K",
"language:en",
"region:us"
] | atasoglu | null | null | null | 0 | 57 | ---
task_categories:
- image-to-text
- text-to-image
language:
- en
pretty_name: flickr8k
size_categories:
- 1K<n<10K
---
You must download the dataset files manually. You can visit [this](https://github.com/jbrownlee/Datasets/releases/tag/Flickr8k) page or run `download.sh` to get the files.
Afterwards, you can load the dataset by referencing the directory:
```py
import datasets
ds = datasets.load_dataset("atasoglu/flickr8k-dataset", data_dir="data")
print(ds)
```
```
DatasetDict({
train: Dataset({
features: ['image_id', 'image_path', 'captions'],
num_rows: 6000
})
test: Dataset({
features: ['image_id', 'image_path', 'captions'],
num_rows: 1000
})
validation: Dataset({
features: ['image_id', 'image_path', 'captions'],
num_rows: 1000
})
})
```
I don't own the copyright of the images. Please [visit this page](https://forms.illinois.edu/sec/1713398) for more information.
nlpai-lab/openassistant-guanaco-ko | 2023-06-01T10:44:35.000Z | [
"task_categories:text-generation",
"task_categories:question-answering",
"task_categories:summarization",
"size_categories:1K<n<10K",
"language:ko",
"license:apache-2.0",
"region:us"
] | nlpai-lab | null | null | null | 1 | 57 | ---
license: apache-2.0
task_categories:
- text-generation
- question-answering
- summarization
language:
- ko
size_categories:
- 1K<n<10K
---
### Dataset Summary
Korean translation of Guanaco via the DeepL API
Note: There are cases where multilingual data has been converted to monolingual data during batch translation to Korean using the API.
Below is Guanaco's README.
----
This dataset is a subset of the Open Assistant dataset, which you can find here: https://huggingface.co/datasets/OpenAssistant/oasst1/tree/main
This subset of the data only contains the highest-rated paths in the conversation tree, with a total of 9,846 samples.
This dataset was used to train Guanaco with QLoRA.
For further information, please see the original dataset.
License: Apache 2.0 |
cdminix/libritts-r-aligned | 2023-07-02T15:13:39.000Z | [
"task_categories:automatic-speech-recognition",
"task_categories:text-to-speech",
"annotations_creators:crowdsourced",
"language:en",
"license:cc-by-4.0",
"speech",
"audio",
"automatic-speech-recognition",
"text-to-speech",
"arxiv:1904.02882",
"arxiv:2211.16049",
"region:us"
] | cdminix | Dataset used for loading TTS spectrograms and waveform audio with alignments and a number of configurable "measures", which are extracted from the raw audio. | @article{koizumi2023libritts,
title={LibriTTS-R: A Restored Multi-Speaker Text-to-Speech Corpus},
author={Koizumi, Yuma and Zen, Heiga and Karita, Shigeki and Ding, Yifan and Yatabe, Kohei and Morioka, Nobuyuki and Bacchiani, Michiel and Zhang, Yu and Han, Wei and Bapna, Ankur},
journal={arXiv preprint arXiv:2305.18802},
year={2023}
}
@article{zen2019libritts,
title={LibriTTS: A Corpus Derived from LibriSpeech for Text-to-Speech},
author={Zen, Heiga and Dang, Viet and Clark, Rob and Zhang, Yu and Weiss, Ron J and Jia, Ye and Chen, Zhifeng and Wu, Yonghui},
journal={Interspeech},
year={2019}
}
@article{https://doi.org/10.48550/arxiv.2211.16049,
author = {Minixhofer, Christoph and Klejch, Ondřej and Bell, Peter},
title = {Evaluating and reducing the distance between synthetic and real speech distributions},
year = {2022}
} | null | 4 | 57 | ---
pretty_name: LibriTTS Corpus with Forced Alignments
annotations_creators:
- crowdsourced
language: en
tags:
- speech
- audio
- automatic-speech-recognition
- text-to-speech
license:
- cc-by-4.0
task_categories:
- automatic-speech-recognition
- text-to-speech
extra_gated_prompt: "When using this dataset to download LibriTTS, you agree to the terms on https://www.openslr.org"
---
> This dataset is identical to **[cdminix/libritts-aligned](https://huggingface.co/datasets/cdminix/libritts-aligned)** except it uses the newly released LibriTTS-R corpus. Please cite **[Y. Koizumi, et al., "LibriTTS-R: Restoration of a Large-Scale Multi-Speaker TTS Corpus", Interspeech 2023](https://google.github.io/df-conformer/librittsr/)**
*When using this dataset to download LibriTTS-R, make sure you agree to the terms on https://www.openslr.org*
# Dataset Card for LibriTTS-R with Forced Alignments (and Measures)
This dataset downloads LibriTTS-R and preprocesses it on your machine to create alignments using [montreal forced aligner](https://montreal-forced-aligner.readthedocs.io/en/latest/).
You need to run ``pip install alignments phones`` before using this dataset.
When running this the first time, it can take an hour or two, but subsequent runs will be lightning fast.
## Requirements
- ``pip install alignments phones`` **(required)**
- ``pip install speech-collator`` (optional)
*Note: version >=0.0.15 of alignments is required for this corpus*
## Example Item
```json
{
'id': '100_122655_000073_000002.wav',
'speaker': '100',
'text': 'the day after, diana and mary quitted it for distant b.',
'start': 0.0,
'end': 3.6500000953674316,
'phones': ['[SILENCE]', 'ð', 'ʌ', '[SILENCE]', 'd', 'eɪ', '[SILENCE]', 'æ', 'f', 't', 'ɜ˞', '[COMMA]', 'd', 'aɪ', 'æ', 'n', 'ʌ', '[SILENCE]', 'æ', 'n', 'd', '[SILENCE]', 'm', 'ɛ', 'ɹ', 'i', '[SILENCE]', 'k', 'w', 'ɪ', 't', 'ɪ', 'd', '[SILENCE]', 'ɪ', 't', '[SILENCE]', 'f', 'ɜ˞', '[SILENCE]', 'd', 'ɪ', 's', 't', 'ʌ', 'n', 't', '[SILENCE]', 'b', 'i', '[FULL STOP]'],
'phone_durations': [5, 2, 4, 0, 5, 13, 0, 16, 7, 5, 20, 2, 6, 9, 15, 4, 2, 0, 11, 3, 5, 0, 3, 8, 9, 8, 0, 13, 3, 5, 3, 6, 4, 0, 8, 5, 0, 9, 5, 0, 7, 5, 6, 7, 4, 5, 10, 0, 3, 35, 9],
'audio': '/dev/shm/metts/train-clean-360-alignments/100/100_122655_000073_000002.wav'
}
```
The phones are IPA phones, and the phone durations are in frames (assuming a hop length of 256, sample rate of 22050 and window length of 1024). These attributes can be changed using the ``hop_length``, ``sample_rate`` and ``window_length`` arguments to ``LibriTTSAlign``.
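Per the note above, durations are frame counts, and a frame count converts to seconds as `frames * hop_length / sample_rate`. A quick sanity check against the example item (pure arithmetic, using the defaults stated above):

```python
HOP_LENGTH = 256     # default hop length stated above
SAMPLE_RATE = 22050  # default sample rate stated above

def frames_to_seconds(frames: int) -> float:
    return frames * HOP_LENGTH / SAMPLE_RATE

# Phone durations from the example item above:
durations = [5, 2, 4, 0, 5, 13, 0, 16, 7, 5, 20, 2, 6, 9, 15, 4, 2, 0, 11, 3,
             5, 0, 3, 8, 9, 8, 0, 13, 3, 5, 3, 6, 4, 0, 8, 5, 0, 9, 5, 0,
             7, 5, 6, 7, 4, 5, 10, 0, 3, 35, 9]
print(round(frames_to_seconds(sum(durations)), 2))  # → 3.65, matching the item's `end`
```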
## Data Collator
This dataset comes with a data collator which can be used to create batches of data for training.
It can be installed using ``pip install speech-collator`` ([MiniXC/speech-collator](https://www.github.com/MiniXC/speech-collator)) and can be used as follows:
```python
import json
from datasets import load_dataset
from speech_collator import SpeechCollator
from torch.utils.data import DataLoader
dataset = load_dataset('cdminix/libritts-aligned', split="train")
speaker2idx = json.load(open("speaker2idx.json"))
phone2idx = json.load(open("phone2idx.json"))
collator = SpeechCollator(
    speaker2idx=speaker2idx,
    phone2idx=phone2idx,
)
dataloader = DataLoader(dataset, collate_fn=collator.collate_fn, batch_size=8)
```
You can either download the ``speaker2idx.json`` and ``phone2idx.json`` files from [here](https://huggingface.co/datasets/cdminix/libritts-aligned/tree/main/data) or create them yourself using the following code:
```python
import json
from datasets import load_dataset
from speech_collator import SpeechCollator, create_speaker2idx, create_phone2idx
dataset = load_dataset("cdminix/libritts-aligned", split="train")
# Create speaker2idx and phone2idx
speaker2idx = create_speaker2idx(dataset, unk_idx=0)
phone2idx = create_phone2idx(dataset, unk_idx=0)
# save to json
with open("speaker2idx.json", "w") as f:
json.dump(speaker2idx, f)
with open("phone2idx.json", "w") as f:
json.dump(phone2idx, f)
```
### Measures
When using ``speech-collator`` you can also use the ``measures`` argument to specify which measures to use. The following example extracts Pitch and Energy on the fly.
```python
import json
from torch.utils.data import DataLoader
from datasets import load_dataset
from speech_collator import SpeechCollator, create_speaker2idx, create_phone2idx
from speech_collator.measures import PitchMeasure, EnergyMeasure
dataset = load_dataset("cdminix/libritts-aligned", split="train")
speaker2idx = json.load(open("data/speaker2idx.json"))
phone2idx = json.load(open("data/phone2idx.json"))
# Create SpeechCollator
speech_collator = SpeechCollator(
speaker2idx=speaker2idx,
phone2idx=phone2idx,
measures=[PitchMeasure(), EnergyMeasure()],
return_keys=["measures"]
)
# Create DataLoader
dataloader = DataLoader(
dataset,
batch_size=8,
collate_fn=speech_collator.collate_fn,
)
```
COMING SOON: Detailed documentation on how to use the measures at [MiniXC/speech-collator](https://www.github.com/MiniXC/speech-collator).
## Splits
This dataset has the following splits:
- ``train``: All the training data, except one sample per speaker which is used for validation.
- ``dev``: The validation data, one sample per speaker.
- ``train.clean.100``: Training set derived from the original materials of the train-clean-100 subset of LibriSpeech.
- ``train.clean.360``: Training set derived from the original materials of the train-clean-360 subset of LibriSpeech.
- ``train.other.500``: Training set derived from the original materials of the train-other-500 subset of LibriSpeech.
- ``dev.clean``: Validation set derived from the original materials of the dev-clean subset of LibriSpeech.
- ``dev.other``: Validation set derived from the original materials of the dev-other subset of LibriSpeech.
- ``test.clean``: Test set derived from the original materials of the test-clean subset of LibriSpeech.
- ``test.other``: Test set derived from the original materials of the test-other subset of LibriSpeech.
## Environment Variables
There are a few environment variable which can be set.
- ``LIBRITTS_VERBOSE``: If set, will print out more information about the dataset creation process.
- ``LIBRITTS_MAX_WORKERS``: The number of workers to use when creating the alignments. Defaults to ``cpu_count()``.
- ``LIBRITTS_PATH``: The path to download LibriTTS to. Defaults to the value of ``HF_DATASETS_CACHE``.
# Citation
When using LibriTTS-R please cite the following papers:
- [LibriTTS-R: Restoration of a Large-Scale Multi-Speaker TTS Corpus](https://google.github.io/df-conformer/librittsr/)
- [LibriTTS: A Corpus Derived from LibriSpeech for Text-to-Speech](https://arxiv.org/abs/1904.02882)
- [Montreal Forced Aligner: Trainable text-speech alignment using Kaldi](https://www.researchgate.net/publication/319185277_Montreal_Forced_Aligner_Trainable_Text-Speech_Alignment_Using_Kaldi)
When using the Measures please cite the following paper (ours):
- [Evaluating and reducing the distance between synthetic and real speech distributions](https://arxiv.org/abs/2211.16049) |
rookshanks/gsm8k | 2023-06-21T22:55:22.000Z | [
"region:us"
] | rookshanks | null | null | null | 0 | 57 | ---
dataset_info:
features:
- name: question
dtype: string
- name: answer
dtype: string
splits:
- name: train
num_bytes: 3566510.564699585
num_examples: 6725
- name: test
num_bytes: 713732
num_examples: 1319
- name: validation
num_bytes: 396691.4353004148
num_examples: 748
download_size: 2306142
dataset_size: 4676933.999999999
---
# Dataset Card for "gsm8k"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
ChrisHayduk/Llama-2-SQL-and-Code-Dataset | 2023-09-29T04:18:17.000Z | [
"region:us"
] | ChrisHayduk | null | null | null | 3 | 57 | ---
dataset_info:
features:
- name: instruction
dtype: string
- name: input
dtype: string
- name: output
dtype: string
- name: table
dtype: string
splits:
- name: train
num_bytes: 46640417
num_examples: 128351
- name: eval
num_bytes: 1756894
num_examples: 1302
download_size: 18298063
dataset_size: 48397311
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: eval
path: data/eval-*
---
# Dataset Card for "Llama-2-SQL-and-Code-Dataset"
This dataset is intended to provide LLaMA 2 with improved coding and instruction-following capabilities, with a specific focus on SQL generation.
The dataset is in Alpaca Instruct format. Please be sure to provide the instruction and input in the prompt to the model, along with any prompt text you would like to place around those inputs.
In the train split, please ignore the table column. The eval split provides example tables so that the actual executable SQL performance can be compared on a number of SQL generation tasks.
To use the tables, they can be loaded as JSON objects and passed to a SQL execution tool such as sqlglot. |
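A sketch of assembling an Alpaca-style prompt from a row, as the paragraph above describes (the exact wrapper text below is an assumption; match whatever template your fine-tuning run used):

```python
# Hypothetical Alpaca-style wrapper; not taken from this card.
TEMPLATE = (
    "Below is an instruction that describes a task, paired with an input "
    "that provides further context.\n\n"
    "### Instruction:\n{instruction}\n\n"
    "### Input:\n{input}\n\n"
    "### Response:\n"
)

def build_prompt(row: dict) -> str:
    """Combine the dataset's `instruction` and `input` columns into one prompt."""
    return TEMPLATE.format(instruction=row["instruction"], input=row["input"])

row = {
    "instruction": "Write a SQL query that returns every user's name.",
    "input": "CREATE TABLE users (id INT, name TEXT)",
}
print(build_prompt(row))
```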
AtheerAlgherairy/DST_Multiwoz21_instruction_Tuning | 2023-09-13T09:35:38.000Z | [
"region:us"
] | AtheerAlgherairy | null | null | null | 0 | 57 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: validation
path: data/validation-*
- split: test
path: data/test-*
dataset_info:
features:
- name: context
dtype: string
- name: Answer
dtype: string
- name: Text
dtype: string
- name: Question
dtype: string
- name: new_text
dtype: string
- name: final_col
dtype: string
- name: prompt_only
dtype: string
- name: json_answer
dtype: string
- name: json_prompt
dtype: string
- name: chat_prompt_no_ans
dtype: string
- name: chat_prompt
dtype: string
splits:
- name: train
num_bytes: 1597274291
num_examples: 56778
- name: validation
num_bytes: 210075906
num_examples: 7374
- name: test
num_bytes: 210458457
num_examples: 7372
download_size: 212784799
dataset_size: 2017808654
---
# Dataset Card for "DST_Multiwoz21_instruction_tuning"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
dim/lurk_prompts | 2023-09-01T20:56:40.000Z | [
"region:us"
] | dim | null | null | null | 0 | 57 | ---
dataset_info:
features:
- name: prompt
dtype: string
- name: text
dtype: string
- name: word
dtype: string
splits:
- name: train
num_bytes: 92132815
num_examples: 5671
download_size: 49004405
dataset_size: 92132815
---
# Dataset Card for "lurk_prompts"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
elmadany/dummy-text | 2023-09-25T21:32:56.000Z | [
"region:us"
] | elmadany | An open-source replication of the WebText dataset from OpenAI. | Dummy text | null | 0 | 57 | Entry not found |
tyzhu/squad_title_v4_train_30_eval_10 | 2023-09-26T09:49:20.000Z | [
"region:us"
] | tyzhu | null | null | null | 0 | 57 | ---
dataset_info:
features:
- name: id
dtype: string
- name: title
dtype: string
- name: context
dtype: string
- name: question
dtype: string
- name: answers
sequence:
- name: text
dtype: string
- name: answer_start
dtype: int32
- name: context_id
dtype: string
- name: inputs
dtype: string
- name: targets
dtype: string
splits:
- name: train
num_bytes: 555104
num_examples: 368
- name: validation
num_bytes: 50807
num_examples: 50
download_size: 105632
dataset_size: 605911
---
# Dataset Card for "squad_title_v4_train_30_eval_10"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
chansurgeplus/oasst1-guanaco-damo-convai-pro | 2023-09-27T11:46:45.000Z | [
"region:us"
] | chansurgeplus | null | null | null | 0 | 57 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
dataset_info:
features:
- name: prefix
sequence: string
- name: suffix
sequence: string
- name: reward
sequence: float64
- name: sft_index
dtype: int64
splits:
- name: train
num_bytes: 87033673
num_examples: 9851
- name: test
num_bytes: 1276935
num_examples: 1032
download_size: 16272549
dataset_size: 88310608
---
# Dataset Card for "oasst1-guanaco-damo-convai-pro"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
chats-bug/email_subject_gen | 2023-10-05T11:52:14.000Z | [
"region:us"
] | chats-bug | null | null | null | 0 | 57 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
dataset_info:
features:
- name: prompt
dtype: string
- name: subject_line
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 33264969.9304227
num_examples: 59489
- name: test
num_bytes: 1751347.0695772984
num_examples: 3132
download_size: 10335744
dataset_size: 35016317.0
---
# Dataset Card for "email_subject_dataset"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
wi_locness | 2023-06-01T14:59:47.000Z | [
"task_categories:text2text-generation",
"annotations_creators:expert-generated",
"language_creators:crowdsourced",
"multilinguality:monolingual",
"multilinguality:other-language-learner",
"size_categories:1K<n<10K",
"source_datasets:original",
"language:en",
"license:other",
"grammatical-error-cor... | null | Write & Improve (Yannakoudakis et al., 2018) is an online web platform that assists non-native
English students with their writing. Specifically, students from around the world submit letters,
stories, articles and essays in response to various prompts, and the W&I system provides instant
feedback. Since W&I went live in 2014, W&I annotators have manually annotated some of these
submissions and assigned them a CEFR level. | @inproceedings{bryant-etal-2019-bea,
title = "The {BEA}-2019 Shared Task on Grammatical Error Correction",
author = "Bryant, Christopher and
Felice, Mariano and
Andersen, {\\O}istein E. and
Briscoe, Ted",
booktitle = "Proceedings of the Fourteenth Workshop on Innovative Use of NLP for Building Educational Applications",
month = aug,
year = "2019",
address = "Florence, Italy",
publisher = "Association for Computational Linguistics",
url = "https://www.aclweb.org/anthology/W19-4406",
doi = "10.18653/v1/W19-4406",
pages = "52--75",
abstract = "This paper reports on the BEA-2019 Shared Task on Grammatical Error Correction (GEC). As with the CoNLL-2014 shared task, participants are required to correct all types of errors in test data. One of the main contributions of the BEA-2019 shared task is the introduction of a new dataset, the Write{\\&}Improve+LOCNESS corpus, which represents a wider range of native and learner English levels and abilities. Another contribution is the introduction of tracks, which control the amount of annotated data available to participants. Systems are evaluated in terms of ERRANT F{\\_}0.5, which allows us to report a much wider range of performance statistics. The competition was hosted on Codalab and remains open for further submissions on the blind test set.",
} | null | 7 | 56 | ---
annotations_creators:
- expert-generated
language_creators:
- crowdsourced
language:
- en
license:
- other
multilinguality:
- monolingual
- other-language-learner
size_categories:
- 1K<n<10K
source_datasets:
- original
task_categories:
- text2text-generation
task_ids: []
paperswithcode_id: locness-corpus
pretty_name: Cambridge English Write & Improve + LOCNESS
tags:
- grammatical-error-correction
dataset_info:
- config_name: default
features:
- name: id
dtype: string
- name: userid
dtype: string
- name: cefr
dtype: string
- name: text
dtype: string
- name: edits
sequence:
- name: start
dtype: int32
- name: end
dtype: int32
- name: text
dtype: string
splits:
- name: train
num_bytes: 4375795
num_examples: 3000
- name: validation
num_bytes: 447055
num_examples: 300
download_size: 6120469
dataset_size: 4822850
- config_name: wi
features:
- name: id
dtype: string
- name: userid
dtype: string
- name: cefr
dtype: string
- name: text
dtype: string
- name: edits
sequence:
- name: start
dtype: int32
- name: end
dtype: int32
- name: text
dtype: string
splits:
- name: train
num_bytes: 4375795
num_examples: 3000
- name: validation
num_bytes: 447055
num_examples: 300
download_size: 6120469
dataset_size: 4822850
- config_name: locness
features:
- name: id
dtype: string
- name: cefr
dtype: string
- name: text
dtype: string
- name: edits
sequence:
- name: start
dtype: int32
- name: end
dtype: int32
- name: text
dtype: string
splits:
- name: validation
num_bytes: 138176
num_examples: 50
download_size: 6120469
dataset_size: 138176
config_names:
- locness
- wi
---
# Dataset Card for Cambridge English Write & Improve + LOCNESS Dataset
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** https://www.cl.cam.ac.uk/research/nl/bea2019st/#data
- **Repository:**
- **Paper:** https://www.aclweb.org/anthology/W19-4406/
- **Leaderboard:** https://competitions.codalab.org/competitions/20228#results
- **Point of Contact:**
### Dataset Summary
Write & Improve (Yannakoudakis et al., 2018) is an online web platform that assists non-native English students with their writing. Specifically, students from around the world submit letters, stories, articles and essays in response to various prompts, and the W&I system provides instant feedback. Since W&I went live in 2014, W&I annotators have manually annotated some of these submissions and assigned them a CEFR level.
The LOCNESS corpus (Granger, 1998) consists of essays written by native English students. It was originally compiled by researchers at the Centre for English Corpus Linguistics at the University of Louvain. Since native English students also sometimes make mistakes, we asked the W&I annotators to annotate a subsection of LOCNESS so researchers can test the effectiveness of their systems on the full range of English levels and abilities.
### Supported Tasks and Leaderboards
Grammatical error correction (GEC) is the task of automatically correcting grammatical errors in text; e.g. [I follows his advices -> I followed his advice]. It can be used to not only help language learners improve their writing skills, but also alert native speakers to accidental mistakes or typos.
The task for this dataset is to correct all types of errors in written text, including grammatical, lexical and orthographical errors.
The following Codalab competition contains the latest leaderboard, along with information on how to submit to the withheld W&I+LOCNESS test set: https://competitions.codalab.org/competitions/20228
### Languages
The dataset is in English.
## Dataset Structure
### Data Instances
An example from the `wi` configuration:
```
{
'id': '1-140178',
'userid': '21251',
'cefr': 'A2.i',
'text': 'My town is a medium size city with eighty thousand inhabitants. It has a high density population because its small territory. Despite of it is an industrial city, there are many shops and department stores. I recommend visiting the artificial lake in the certer of the city which is surrounded by a park. Pasteries are very common and most of them offer the special dessert from the city. There are a comercial zone along the widest street of the city where you can find all kind of establishments: banks, bars, chemists, cinemas, pet shops, restaurants, fast food restaurants, groceries, travel agencies, supermarkets and others. Most of the shops have sales and offers at least three months of the year: January, June and August. The quality of the products and services are quite good, because there are a huge competition, however I suggest you taking care about some fakes or cheats.',
'edits': {
'start': [13, 77, 104, 126, 134, 256, 306, 375, 396, 402, 476, 484, 579, 671, 774, 804, 808, 826, 838, 850, 857, 862, 868],
'end': [24, 78, 104, 133, 136, 262, 315, 379, 399, 411, 480, 498, 588, 671, 777, 807, 810, 835, 845, 856, 861, 867, 873],
'text': ['medium-sized', '-', ' of', 'Although', '', 'center', None, 'of', 'is', 'commercial', 'kinds', 'businesses', 'grocers', ' in', 'is', 'is', '', '. However,', 'recommend', 'be', 'careful', 'of', '']
}
}
```
An example from the `locness` configuration:
```
{
'id': '7-5819177',
'cefr': 'N',
'text': 'Boxing is a common, well known and well loved sport amongst most countries in the world however it is also punishing, dangerous and disliked to the extent that many people want it banned, possibly with good reason.\nBoxing is a dangerous sport, there are relatively common deaths, tragic injuries and even disease. All professional boxers are at risk from being killed in his next fight. If not killed then more likely paralysed. There have been a number of cases in the last ten years of the top few boxers having tragic losses throughout their ranks. This is just from the elite few, and theres more from those below them.\nMore deaths would occur through boxing if it were banned. The sport would go underground, there would be no safety measures like gloves, a doctor, paramedics or early stopping of the fight if someone looked unable to continue. With this going on the people taking part will be dangerous, and on the streets. Dangerous dogs who were trained to kill and maim in similar underound dog fights have already proved deadly to innocent people, the new boxers could be even more at risk.\nOnce boxing is banned and no-one grows up knowing it as acceptable there will be no interest in boxing and hopefully less all round interest in violence making towns and cities much safer places to live in, there will be less fighting outside pubs and clubs and less violent attacks with little or no reason.\nchange the rules of boxing slightly would much improve the safety risks of the sport and not detract form the entertainment. 
There are all sorts of proposals, lighter and more cushioning gloves could be worn, ban punches to the head, headguards worn or make fights shorter, as most of the serious injuries occur in the latter rounds, these would all show off the boxers skill and tallent and still be entertaining to watch.\nEven if a boxer is a success and manages not to be seriously hurt he still faces serious consequences in later life diseases that attack the brains have been known to set in as a direct result of boxing, even Muhamed Ali, who was infamous(?) both for his boxing and his quick-witted intelligence now has Alzheimer disease and can no longer do many everyday acts.\nMany other sports are more dangerous than boxing, motor sports and even mountaineering has risks that are real. Boxers chose to box, just as racing drivers drive.',
'edits': {
'start': [24, 39, 52, 87, 242, 371, 400, 528, 589, 713, 869, 992, 1058, 1169, 1209, 1219, 1255, 1308, 1386, 1412, 1513, 1569, 1661, 1731, 1744, 1781, 1792, 1901, 1951, 2038, 2131, 2149, 2247, 2286],
'end': [25, 40, 59, 95, 249, 374, 400, 538, 595, 713, 869, 1001, 1063, 1169, 1209, 1219, 1255, 1315, 1390, 1418, 1517, 1570, 1661, 1737, 1751, 1781, 1799, 1901, 1960, 2044, 2131, 2149, 2248, 2289],
'text': ['-', '-', 'in', '. However,', '. There', 'their', ',', 'among', "there's", ' and', ',', 'underground', '. The', ',', ',', ',', ',', '. There', 'for', 'Changing', 'from', ';', ',', 'later', '. These', "'", 'talent', ',', '. Diseases', '. Even', ',', "'s", ';', 'have']
}
}
```
### Data Fields
The fields of the dataset are:
- `id`: the id of the text as a string
- `cefr`: the [CEFR level](https://www.cambridgeenglish.org/exams-and-tests/cefr/) of the text as a string
- `userid`: id of the user
- `text`: the text of the submission as a string
- `edits`: the edits from W&I:
- `start`: start indexes of each edit as a list of integers
- `end`: end indexes of each edit as a list of integers
- `text`: the text content of each edit as a list of strings
- `from`: the original text of each edit as a list of strings
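For illustration, the character-offset edits can be applied to the raw text to produce a corrected version. The following is a minimal sketch (`apply_edits` is a helper defined here, not part of any official loader; it assumes an edit whose `text` value is `None` marks a detected error with no suggested correction):

```python
def apply_edits(text, edits):
    """Apply {start, end, text} edits to a string.

    Edits are applied from right to left so that earlier character
    offsets remain valid. A replacement of None marks a detected
    error without a suggested correction and is skipped.
    """
    spans = sorted(
        zip(edits["start"], edits["end"], edits["text"]),
        key=lambda e: e[0],
        reverse=True,
    )
    for start, end, replacement in spans:
        if replacement is None:
            continue
        text = text[:start] + replacement + text[end:]
    return text

# Toy example in the same shape as the `edits` field above
original = "My town is a medium size city."
edits = {"start": [13], "end": [24], "text": ["medium-sized"]}
print(apply_edits(original, edits))  # → My town is a medium-sized city.
```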
### Data Splits
| name |train|validation|
|----------|----:|---------:|
| wi | 3000| 300|
| locness | N/A| 50|
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
Write & Improve License:
```
Cambridge English Write & Improve (CEWI) Dataset Licence Agreement
1. By downloading this dataset and licence, this licence agreement is
entered into, effective this date, between you, the Licensee, and the
University of Cambridge, the Licensor.
2. Copyright of the entire licensed dataset is held by the Licensor.
No ownership or interest in the dataset is transferred to the
Licensee.
3. The Licensor hereby grants the Licensee a non-exclusive
non-transferable right to use the licensed dataset for
non-commercial research and educational purposes.
4. Non-commercial purposes exclude without limitation any use of the
licensed dataset or information derived from the dataset for or as
part of a product or service which is sold, offered for sale,
licensed, leased or rented.
5. The Licensee shall acknowledge use of the licensed dataset in all
publications of research based on it, in whole or in part, through
citation of the following publication:
Helen Yannakoudakis, Øistein E. Andersen, Ardeshir Geranpayeh,
Ted Briscoe and Diane Nicholls. 2018. Developing an automated writing
placement system for ESL learners. Applied Measurement in Education.
6. The Licensee may publish excerpts of less than 100 words from the
licensed dataset pursuant to clause 3.
7. The Licensor grants the Licensee this right to use the licensed dataset
"as is". Licensor does not make, and expressly disclaims, any express or
implied warranties, representations or endorsements of any kind
whatsoever.
8. This Agreement shall be governed by and construed in accordance with
the laws of England and the English courts shall have exclusive
jurisdiction.
```
LOCNESS License:
```
LOCNESS Dataset Licence Agreement
1. The corpus is to be used for non-commercial purposes only
2. All publications on research partly or wholly based on the corpus should give credit to the Centre for English Corpus Linguistics (CECL), Université catholique de Louvain, Belgium. A scanned copy or offprint of the publication should also be sent to <sylviane.granger@uclouvain.be>.
3. No part of the corpus is to be distributed to a third party without specific authorization from CECL. The corpus can only be used by the person agreeing to the licence terms and researchers working in close collaboration with him/her or students under his/her supervision, attached to the same institution, within the framework of the research project.
```
### Citation Information
```
@inproceedings{bryant-etal-2019-bea,
title = "The {BEA}-2019 Shared Task on Grammatical Error Correction",
author = "Bryant, Christopher and
Felice, Mariano and
Andersen, {\O}istein E. and
Briscoe, Ted",
booktitle = "Proceedings of the Fourteenth Workshop on Innovative Use of NLP for Building Educational Applications",
month = aug,
year = "2019",
address = "Florence, Italy",
publisher = "Association for Computational Linguistics",
url = "https://www.aclweb.org/anthology/W19-4406",
doi = "10.18653/v1/W19-4406",
pages = "52--75",
abstract = "This paper reports on the BEA-2019 Shared Task on Grammatical Error Correction (GEC). As with the CoNLL-2014 shared task, participants are required to correct all types of errors in test data. One of the main contributions of the BEA-2019 shared task is the introduction of a new dataset, the Write{\&}Improve+LOCNESS corpus, which represents a wider range of native and learner English levels and abilities. Another contribution is the introduction of tracks, which control the amount of annotated data available to participants. Systems are evaluated in terms of ERRANT F{\_}0.5, which allows us to report a much wider range of performance statistics. The competition was hosted on Codalab and remains open for further submissions on the blind test set.",
}
```
### Contributions
Thanks to [@aseifert](https://github.com/aseifert) for adding this dataset. |
mozilla-foundation/common_voice_3_0 | 2023-07-29T15:59:59.000Z | [
"task_categories:automatic-speech-recognition",
"annotations_creators:crowdsourced",
"language_creators:crowdsourced",
"multilinguality:multilingual",
"source_datasets:extended|common_voice",
"license:cc0-1.0",
"arxiv:1912.06670",
"region:us"
] | mozilla-foundation | null | @inproceedings{commonvoice:2020,
author = {Ardila, R. and Branson, M. and Davis, K. and Henretty, M. and Kohler, M. and Meyer, J. and Morais, R. and Saunders, L. and Tyers, F. M. and Weber, G.},
title = {Common Voice: A Massively-Multilingual Speech Corpus},
booktitle = {Proceedings of the 12th Conference on Language Resources and Evaluation (LREC 2020)},
pages = {4211--4215},
year = 2020
} | null | 0 | 56 | ---
annotations_creators:
- crowdsourced
language_creators:
- crowdsourced
license:
- cc0-1.0
multilinguality:
- multilingual
size_categories:
br:
- 10K<n<100K
ca:
- 10K<n<100K
cnh:
- 1K<n<10K
cv:
- 1K<n<10K
cy:
- 10K<n<100K
de:
- 100K<n<1M
dv:
- 1K<n<10K
en:
- 100K<n<1M
eo:
- 10K<n<100K
es:
- 10K<n<100K
et:
- 1K<n<10K
eu:
- 10K<n<100K
fa:
- 10K<n<100K
fr:
- 100K<n<1M
ga-IE:
- 1K<n<10K
it:
- 10K<n<100K
kab:
- 100K<n<1M
ky:
- 10K<n<100K
mn:
- 1K<n<10K
nl:
- 10K<n<100K
ru:
- 10K<n<100K
rw:
- 1K<n<10K
sah:
- 1K<n<10K
sl:
- 1K<n<10K
sv-SE:
- 1K<n<10K
tr:
- 1K<n<10K
tt:
- 10K<n<100K
zh-CN:
- 1K<n<10K
zh-TW:
- 10K<n<100K
source_datasets:
- extended|common_voice
paperswithcode_id: common-voice
pretty_name: Common Voice Corpus 3
language_bcp47:
- br
- ca
- cnh
- cv
- cy
- de
- dv
- en
- eo
- es
- et
- eu
- fa
- fr
- ga-IE
- it
- kab
- ky
- mn
- nl
- ru
- rw
- sah
- sl
- sv-SE
- tr
- tt
- zh-CN
- zh-TW
extra_gated_prompt: By clicking on “Access repository” below, you also agree to not
attempt to determine the identity of speakers in the Common Voice dataset.
task_categories:
- automatic-speech-recognition
---
# Dataset Card for Common Voice Corpus 3
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** https://commonvoice.mozilla.org/en/datasets
- **Repository:** https://github.com/common-voice/common-voice
- **Paper:** https://arxiv.org/abs/1912.06670
- **Leaderboard:** https://paperswithcode.com/dataset/common-voice
- **Point of Contact:** [Anton Lozhkov](mailto:anton@huggingface.co)
### Dataset Summary
The Common Voice dataset consists of a unique MP3 and corresponding text file.
Many of the 2454 recorded hours in the dataset also include demographic metadata like age, sex, and accent
that can help improve the accuracy of speech recognition engines.
The dataset currently consists of 1979 validated hours in 29 languages, but more voices and languages are always added.
Take a look at the [Languages](https://commonvoice.mozilla.org/en/languages) page to request a language or start contributing.
### Supported Tasks and Leaderboards
The results for models trained on the Common Voice datasets are available via the
[🤗 Speech Bench](https://huggingface.co/spaces/huggingface/hf-speech-bench)
### Languages
```
Basque, Breton, Catalan, Chinese (China), Chinese (Taiwan), Chuvash, Dhivehi, Dutch, English, Esperanto, Estonian, French, German, Hakha Chin, Irish, Italian, Kabyle, Kinyarwanda, Kyrgyz, Mongolian, Persian, Russian, Sakha, Slovenian, Spanish, Swedish, Tatar, Turkish, Welsh
```
## Dataset Structure
### Data Instances
A typical data point comprises the `path` to the audio file and its `sentence`.
Additional fields include `accent`, `age`, `client_id`, `up_votes`, `down_votes`, `gender`, `locale` and `segment`.
```python
{
'client_id': 'd59478fbc1ee646a28a3c652a119379939123784d99131b865a89f8b21c81f69276c48bd574b81267d9d1a77b83b43e6d475a6cfc79c232ddbca946ae9c7afc5',
'path': 'et/clips/common_voice_et_18318995.mp3',
'audio': {
'path': 'et/clips/common_voice_et_18318995.mp3',
'array': array([-0.00048828, -0.00018311, -0.00137329, ..., 0.00079346, 0.00091553, 0.00085449], dtype=float32),
'sampling_rate': 48000
},
'sentence': 'Tasub kokku saada inimestega, keda tunned juba ammust ajast saati.',
'up_votes': 2,
'down_votes': 0,
'age': 'twenties',
'gender': 'male',
'accent': '',
'locale': 'et',
'segment': ''
}
```
### Data Fields
`client_id` (`string`): An id for which client (voice) made the recording
`path` (`string`): The path to the audio file
`audio` (`dict`): A dictionary containing the path to the downloaded audio file, the decoded audio array, and the sampling rate. Note that when accessing the audio column: `dataset[0]["audio"]` the audio file is automatically decoded and resampled to `dataset.features["audio"].sampling_rate`. Decoding and resampling of a large number of audio files might take a significant amount of time. Thus it is important to first query the sample index before the `"audio"` column, *i.e.* `dataset[0]["audio"]` should **always** be preferred over `dataset["audio"][0]`.
`sentence` (`string`): The sentence the user was prompted to speak
`up_votes` (`int64`): How many upvotes the audio file has received from reviewers
`down_votes` (`int64`): How many downvotes the audio file has received from reviewers
`age` (`string`): The age of the speaker (e.g. `teens`, `twenties`, `fifties`)
`gender` (`string`): The gender of the speaker
`accent` (`string`): Accent of the speaker
`locale` (`string`): The locale of the speaker
`segment` (`string`): Usually an empty field
### Data Splits
The speech material has been subdivided into portions for dev, train, test, validated, invalidated, reported and other.
The validated data is data that has been validated by reviewers and received upvotes indicating that the data is of high quality.
The invalidated data is data that has been invalidated by reviewers
and received downvotes indicating that the data is of low quality.
The reported data is data that has been reported, for different reasons.
The other data is data that has not yet been reviewed.
The dev, test, train are all data that has been reviewed, deemed of high quality and split into dev, test and train.
## Data Preprocessing Recommended by Hugging Face
The following are data preprocessing steps advised by the Hugging Face team. They are accompanied by an example code snippet that shows how to put them into practice.
Many examples in this dataset have trailing quotation marks, e.g. _“the cat sat on the mat.“_. These trailing quotation marks do not change the actual meaning of the sentence, and it is near impossible to infer whether a sentence is a quotation or not from audio data alone. In these cases, it is advised to strip the quotation marks, leaving: _the cat sat on the mat_.
In addition, the majority of training sentences end in punctuation ( . or ? or ! ), whereas just a small proportion do not. In the dev set, **almost all** sentences end in punctuation. Thus, it is recommended to append a full-stop ( . ) to the end of the small number of training examples that do not end in punctuation.
```python
from datasets import load_dataset
ds = load_dataset("mozilla-foundation/common_voice_3_0", "en", use_auth_token=True)
def prepare_dataset(batch):
"""Function to preprocess the dataset with the .map method"""
transcription = batch["sentence"]
if transcription.startswith('"') and transcription.endswith('"'):
# we can remove trailing quotation marks as they do not affect the transcription
transcription = transcription[1:-1]
if transcription[-1] not in [".", "?", "!"]:
# append a full-stop to sentences that do not end in punctuation
transcription = transcription + "."
batch["sentence"] = transcription
return batch
ds = ds.map(prepare_dataset, desc="preprocess dataset")
```
## Dataset Creation
### Curation Rationale
[Needs More Information]
### Source Data
#### Initial Data Collection and Normalization
[Needs More Information]
#### Who are the source language producers?
[Needs More Information]
### Annotations
#### Annotation process
[Needs More Information]
#### Who are the annotators?
[Needs More Information]
### Personal and Sensitive Information
The dataset consists of people who have donated their voice online. You agree to not attempt to determine the identity of speakers in the Common Voice dataset.
## Considerations for Using the Data
### Social Impact of Dataset
The dataset consists of people who have donated their voice online. You agree to not attempt to determine the identity of speakers in the Common Voice dataset.
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
Public Domain, [CC-0](https://creativecommons.org/share-your-work/public-domain/cc0/)
### Citation Information
```
@inproceedings{commonvoice:2020,
author = {Ardila, R. and Branson, M. and Davis, K. and Henretty, M. and Kohler, M. and Meyer, J. and Morais, R. and Saunders, L. and Tyers, F. M. and Weber, G.},
title = {Common Voice: A Massively-Multilingual Speech Corpus},
booktitle = {Proceedings of the 12th Conference on Language Resources and Evaluation (LREC 2020)},
pages = {4211--4215},
year = 2020
}
```
|
ShapeNet/ShapeNetCore | 2023-09-20T15:05:48.000Z | [
"language:en",
"license:other",
"3D shapes",
"region:us"
] | ShapeNet | null | null | null | 8 | 56 | ---
language:
- en
pretty_name: ShapeNetCore
tags:
- 3D shapes
license: other
extra_gated_heading: Acknowledge license to accept the repository
extra_gated_prompt: >-
To request access to this ShapeNet repo, you will need to provide your **full name** (please provide both your first and last name), the name of your **advisor or the principal investigator (PI)** of your lab (in the PI/Advisor) fields, and the **school or company** that you are affiliated with (the **Affiliation** field).
After requesting access to this ShapeNet repo, you will be considered for access approval.
After access approval, you (the "Researcher") receive permission to use the ShapeNet database (the "Database") at Princeton University and Stanford University. In exchange for being able to join the ShapeNet community and receive such permission, Researcher hereby agrees to the following terms and conditions:
Researcher shall use the Database only for non-commercial research and educational purposes.
Princeton University and Stanford University make no representations or warranties regarding the Database, including but not limited to warranties of non-infringement or fitness for a particular purpose.
Researcher accepts full responsibility for his or her use of the Database and shall defend and indemnify Princeton University and Stanford University, including their employees, Trustees, officers and agents, against any and all claims arising from Researcher's use of the Database, including but not limited to Researcher's use of any copies of copyrighted 3D models that he or she may create from the Database.
Researcher may provide research associates and colleagues with access to the Database provided that they first agree to be bound by these terms and conditions.
Princeton University and Stanford University reserve the right to terminate Researcher's access to the Database at any time.
If Researcher is employed by a for-profit, commercial entity, Researcher's employer shall also be bound by these terms and conditions, and Researcher hereby represents that he or she is fully authorized to enter into this agreement on behalf of such employer.
The law of the State of New Jersey shall apply to all disputes under this agreement.
For access to the data, please fill in your **full name** (both first and last name), the name of your **advisor or principal investigator (PI)**, and the name of the **school or company** you are affiliated with.
Please actually fill out the fields (DO NOT put the word "Advisor" for PI/Advisor and the word "School" for "Affiliation", please specify the name of your advisor and the name of your school).
extra_gated_fields:
Name: text
PI/Advisor: text
Affiliation: text
Purpose: text
Country: text
I agree to use this dataset for non-commercial use ONLY: checkbox
---
This repository contains ShapeNetCore (v2), a subset of [ShapeNet](https://shapenet.org).
ShapeNetCore is a densely annotated subset of ShapeNet covering 55 common object categories with ~51,300 unique 3D models. Each model in ShapeNetCore is linked to an appropriate synset in [WordNet 3.0](https://wordnet.princeton.edu/).
Please see [DATA.md](DATA.md) for details about the data.
If you use ShapeNet data, you agree to abide by the [ShapeNet terms of use](https://shapenet.org/terms). You are only allowed to redistribute the data to your research associates and colleagues provided that they first agree to be bound by these terms and conditions.
If you use this data, please cite the main ShapeNet technical report.
```
@techreport{shapenet2015,
title = {{ShapeNet: An Information-Rich 3D Model Repository}},
author = {Chang, Angel X. and Funkhouser, Thomas and Guibas, Leonidas and Hanrahan, Pat and Huang, Qixing and Li, Zimo and Savarese, Silvio and Savva, Manolis and Song, Shuran and Su, Hao and Xiao, Jianxiong and Yi, Li and Yu, Fisher},
number = {arXiv:1512.03012 [cs.GR]},
institution = {Stanford University --- Princeton University --- Toyota Technological Institute at Chicago},
year = {2015}
}
```
For more information, please contact us at shapenetwebmaster@gmail.com and indicate ShapeNetCore v2 in the title of your email.
|
millawell/wikipedia_field_of_science | 2022-10-11T08:26:28.000Z | [
"license:cc-by-sa-3.0",
"region:us"
] | millawell | null | null | null | 5 | 56 | ---
license: cc-by-sa-3.0
---
|
Rosenberg/genia | 2022-10-23T12:08:03.000Z | [
"license:mit",
"region:us"
] | Rosenberg | null | null | null | 2 | 56 | ---
license: mit
---
|
Dizex/FoodBase | 2022-10-31T12:48:53.000Z | [
"region:us"
] | Dizex | null | null | null | 0 | 56 | ---
dataset_info:
features:
- name: nltk_tokens
sequence: string
- name: iob_tags
sequence: string
- name: input_ids
sequence: int32
- name: token_type_ids
sequence: int8
- name: attention_mask
sequence: int8
- name: labels
sequence: int64
splits:
- name: train
num_bytes: 2040036
num_examples: 600
- name: val
num_bytes: 662190
num_examples: 200
download_size: 353747
dataset_size: 2702226
---
# Dataset Card for "FoodBase"
Dataset for FoodBase corpus introduced in [this paper](https://academic.oup.com/database/article/doi/10.1093/database/baz121/5611291).
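The `iob_tags` can be grouped into food entity spans with a small helper. A minimal sketch, assuming the standard IOB scheme with labels such as `B-FOOD`/`I-FOOD` (the exact tag set should be checked against the corpus):

```python
def iob_to_spans(tokens, tags):
    """Collect (entity_text, label) spans from IOB-tagged tokens.

    Assumes the usual scheme: B-X starts an entity, I-X continues
    it, and O is outside any entity.
    """
    spans, current, label = [], [], None
    for token, tag in zip(tokens, tags):
        if tag.startswith("B-"):
            if current:
                spans.append((" ".join(current), label))
            current, label = [token], tag[2:]
        elif tag.startswith("I-") and current:
            current.append(token)
        else:
            if current:
                spans.append((" ".join(current), label))
            current, label = [], None
    if current:
        spans.append((" ".join(current), label))
    return spans

tokens = ["Add", "the", "olive", "oil", "and", "garlic", "."]
tags = ["O", "O", "B-FOOD", "I-FOOD", "O", "B-FOOD", "O"]
print(iob_to_spans(tokens, tags))
# → [('olive oil', 'FOOD'), ('garlic', 'FOOD')]
```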
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
DFKI-SLT/cross_re | 2023-01-19T09:19:12.000Z | [
"task_categories:text-classification",
"task_ids:multi-class-classification",
"annotations_creators:expert-generated",
"language_creators:found",
"multilinguality:monolingual",
"size_categories:10K<n<100K",
"source_datasets:extended|cross_ner",
"language:en",
"cross domain",
"ai",
"news",
"mus... | DFKI-SLT | CrossRE is a new, freely-available crossdomain benchmark for RE, which comprises six distinct text domains and includes
multilabel annotations. It includes the following domains: news, politics, natural science, music, literature and
artificial intelligence. The semantic relations are annotated on top of CrossNER (Liu et al., 2021), a cross-domain
dataset for NER which contains domain-specific entity types.
The dataset contains 17 relation labels for the six domains: PART-OF, PHYSICAL, USAGE, ROLE, SOCIAL,
GENERAL-AFFILIATION, COMPARE, TEMPORAL, ARTIFACT, ORIGIN, TOPIC, OPPOSITE, CAUSE-EFFECT, WIN-DEFEAT, TYPEOF, NAMED, and
RELATED-TO.
For details, see the paper: https://arxiv.org/abs/2210.09345 | @inproceedings{bassignana-plank-2022-crossre,
title = "Cross{RE}: A {C}ross-{D}omain {D}ataset for {R}elation {E}xtraction",
author = "Bassignana, Elisa and Plank, Barbara",
booktitle = "Findings of the Association for Computational Linguistics: EMNLP 2022",
year = "2022",
publisher = "Association for Computational Linguistics"
} | null | 0 | 56 | ---
annotations_creators:
- expert-generated
language:
- en
language_creators:
- found
license: []
multilinguality:
- monolingual
pretty_name: CrossRE is a cross-domain dataset for relation extraction
size_categories:
- 10K<n<100K
source_datasets:
- extended|cross_ner
tags:
- cross domain
- ai
- news
- music
- literature
- politics
- science
task_categories:
- text-classification
task_ids:
- multi-class-classification
dataset_info:
- config_name: ai
features:
- name: doc_key
dtype: string
- name: sentence
sequence: string
- name: ner
sequence:
- name: id-start
dtype: int32
- name: id-end
dtype: int32
- name: entity-type
dtype: string
- name: relations
sequence:
- name: id_1-start
dtype: int32
- name: id_1-end
dtype: int32
- name: id_2-start
dtype: int32
- name: id_2-end
dtype: int32
- name: relation-type
dtype: string
- name: Exp
dtype: string
- name: Un
dtype: bool
- name: SA
dtype: bool
splits:
- name: train
num_bytes: 62411
num_examples: 100
- name: validation
num_bytes: 183717
num_examples: 350
- name: test
num_bytes: 217353
num_examples: 431
download_size: 508107
dataset_size: 463481
- config_name: literature
features:
- name: doc_key
dtype: string
- name: sentence
sequence: string
- name: ner
sequence:
- name: id-start
dtype: int32
- name: id-end
dtype: int32
- name: entity-type
dtype: string
- name: relations
sequence:
- name: id_1-start
dtype: int32
- name: id_1-end
dtype: int32
- name: id_2-start
dtype: int32
- name: id_2-end
dtype: int32
- name: relation-type
dtype: string
- name: Exp
dtype: string
- name: Un
dtype: bool
- name: SA
dtype: bool
splits:
- name: train
num_bytes: 62699
num_examples: 100
- name: validation
num_bytes: 246214
num_examples: 400
- name: test
num_bytes: 264450
num_examples: 416
download_size: 635130
dataset_size: 573363
- config_name: music
features:
- name: doc_key
dtype: string
- name: sentence
sequence: string
- name: ner
sequence:
- name: id-start
dtype: int32
- name: id-end
dtype: int32
- name: entity-type
dtype: string
- name: relations
sequence:
- name: id_1-start
dtype: int32
- name: id_1-end
dtype: int32
- name: id_2-start
dtype: int32
- name: id_2-end
dtype: int32
- name: relation-type
dtype: string
- name: Exp
dtype: string
- name: Un
dtype: bool
- name: SA
dtype: bool
splits:
- name: train
num_bytes: 69846
num_examples: 100
- name: validation
num_bytes: 261497
num_examples: 350
- name: test
num_bytes: 312165
num_examples: 399
download_size: 726956
dataset_size: 643508
- config_name: news
features:
- name: doc_key
dtype: string
- name: sentence
sequence: string
- name: ner
sequence:
- name: id-start
dtype: int32
- name: id-end
dtype: int32
- name: entity-type
dtype: string
- name: relations
sequence:
- name: id_1-start
dtype: int32
- name: id_1-end
dtype: int32
- name: id_2-start
dtype: int32
- name: id_2-end
dtype: int32
- name: relation-type
dtype: string
- name: Exp
dtype: string
- name: Un
dtype: bool
- name: SA
dtype: bool
splits:
- name: train
num_bytes: 49102
num_examples: 164
- name: validation
num_bytes: 77952
num_examples: 350
- name: test
num_bytes: 96301
num_examples: 400
download_size: 239763
dataset_size: 223355
- config_name: politics
features:
- name: doc_key
dtype: string
- name: sentence
sequence: string
- name: ner
sequence:
- name: id-start
dtype: int32
- name: id-end
dtype: int32
- name: entity-type
dtype: string
- name: relations
sequence:
- name: id_1-start
dtype: int32
- name: id_1-end
dtype: int32
- name: id_2-start
dtype: int32
- name: id_2-end
dtype: int32
- name: relation-type
dtype: string
- name: Exp
dtype: string
- name: Un
dtype: bool
- name: SA
dtype: bool
splits:
- name: train
num_bytes: 76004
num_examples: 101
- name: validation
num_bytes: 277633
num_examples: 350
- name: test
num_bytes: 295294
num_examples: 400
download_size: 726427
dataset_size: 648931
- config_name: science
features:
- name: doc_key
dtype: string
- name: sentence
sequence: string
- name: ner
sequence:
- name: id-start
dtype: int32
- name: id-end
dtype: int32
- name: entity-type
dtype: string
- name: relations
sequence:
- name: id_1-start
dtype: int32
- name: id_1-end
dtype: int32
- name: id_2-start
dtype: int32
- name: id_2-end
dtype: int32
- name: relation-type
dtype: string
- name: Exp
dtype: string
- name: Un
dtype: bool
- name: SA
dtype: bool
splits:
- name: train
num_bytes: 63876
num_examples: 103
- name: validation
num_bytes: 224402
num_examples: 351
- name: test
num_bytes: 249075
num_examples: 400
download_size: 594058
dataset_size: 537353
---
# Dataset Card for CrossRE
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Repository:** [CrossRE](https://github.com/mainlp/CrossRE)
- **Paper:** [CrossRE: A Cross-Domain Dataset for Relation Extraction](https://arxiv.org/abs/2210.09345)
### Dataset Summary
CrossRE is a new, freely-available cross-domain benchmark for RE, which comprises six distinct text domains and includes
multi-label annotations. It includes the following domains: news, politics, natural science, music, literature and
artificial intelligence. The semantic relations are annotated on top of CrossNER (Liu et al., 2021), a cross-domain
dataset for NER which contains domain-specific entity types.
The dataset contains 17 relation labels for the six domains: PART-OF, PHYSICAL, USAGE, ROLE, SOCIAL,
GENERAL-AFFILIATION, COMPARE, TEMPORAL, ARTIFACT, ORIGIN, TOPIC, OPPOSITE, CAUSE-EFFECT, WIN-DEFEAT, TYPEOF, NAMED, and
RELATED-TO.
For details, see the paper: https://arxiv.org/abs/2210.09345
### Supported Tasks and Leaderboards
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Languages
The language data in CrossRE is in English (BCP-47 en).
## Dataset Structure
### Data Instances
#### news
- **Size of downloaded dataset files:** 0.24 MB
- **Size of the generated dataset:** 0.22 MB
An example of 'train' looks as follows:
```python
{
"doc_key": "news-train-1",
"sentence": ["EU", "rejects", "German", "call", "to", "boycott", "British", "lamb", "."],
"ner": [
{"id-start": 0, "id-end": 0, "entity-type": "organisation"},
{"id-start": 2, "id-end": 3, "entity-type": "misc"},
{"id-start": 6, "id-end": 7, "entity-type": "misc"}
],
"relations": [
{"id_1-start": 0, "id_1-end": 0, "id_2-start": 2, "id_2-end": 3, "relation-type": "opposite", "Exp": "rejects", "Un": False, "SA": False},
{"id_1-start": 2, "id_1-end": 3, "id_2-start": 6, "id_2-end": 7, "relation-type": "opposite", "Exp": "calls_for_boycot_of", "Un": False, "SA": False},
{"id_1-start": 2, "id_1-end": 3, "id_2-start": 6, "id_2-end": 7, "relation-type": "topic", "Exp": "", "Un": False, "SA": False}
]
}
```
#### politics
- **Size of downloaded dataset files:** 0.73 MB
- **Size of the generated dataset:** 0.65 MB
An example of 'train' looks as follows:
```python
{
"doc_key": "politics-train-1",
"sentence": ["Parties", "with", "mainly", "Eurosceptic", "views", "are", "the", "ruling", "United", "Russia", ",", "and", "opposition", "parties", "the", "Communist", "Party", "of", "the", "Russian", "Federation", "and", "Liberal", "Democratic", "Party", "of", "Russia", "."],
"ner": [
{"id-start": 8, "id-end": 9, "entity-type": "politicalparty"},
{"id-start": 15, "id-end": 20, "entity-type": "politicalparty"},
{"id-start": 22, "id-end": 26, "entity-type": "politicalparty"}
],
"relations": [
{"id_1-start": 8, "id_1-end": 9, "id_2-start": 15, "id_2-end": 20, "relation-type": "opposite", "Exp": "in_opposition", "Un": False, "SA": False},
{"id_1-start": 8, "id_1-end": 9, "id_2-start": 22, "id_2-end": 26, "relation-type": "opposite", "Exp": "in_opposition", "Un": False, "SA": False}
]
}
```
#### science
- **Size of downloaded dataset files:** 0.59 MB
- **Size of the generated dataset:** 0.54 MB
An example of 'train' looks as follows:
```python
{
"doc_key": "science-train-1",
"sentence": ["They", "may", "also", "use", "Adenosine", "triphosphate", ",", "Nitric", "oxide", ",", "and", "ROS", "for", "signaling", "in", "the", "same", "ways", "that", "animals", "do", "."],
"ner": [
{"id-start": 4, "id-end": 5, "entity-type": "chemicalcompound"},
{"id-start": 7, "id-end": 8, "entity-type": "chemicalcompound"},
{"id-start": 11, "id-end": 11, "entity-type": "chemicalcompound"}
],
"relations": []
}
```
#### music
- **Size of downloaded dataset files:** 0.73 MB
- **Size of the generated dataset:** 0.64 MB
An example of 'train' looks as follows:
```python
{
"doc_key": "music-train-1",
"sentence": ["In", "2003", ",", "the", "Stade", "de", "France", "was", "the", "primary", "site", "of", "the", "2003", "World", "Championships", "in", "Athletics", "."],
"ner": [
{"id-start": 4, "id-end": 6, "entity-type": "location"},
{"id-start": 13, "id-end": 17, "entity-type": "event"}
],
"relations": [
{"id_1-start": 13, "id_1-end": 17, "id_2-start": 4, "id_2-end": 6, "relation-type": "physical", "Exp": "", "Un": False, "SA": False}
]
}
```
#### literature
- **Size of downloaded dataset files:** 0.64 MB
- **Size of the generated dataset:** 0.57 MB
An example of 'train' looks as follows:
```python
{
"doc_key": "literature-train-1",
"sentence": ["In", "1351", ",", "during", "the", "reign", "of", "Emperor", "Toghon", "Temür", "of", "the", "Yuan", "dynasty", ",", "93rd-generation", "descendant", "Kong", "Huan", "(", "孔浣", ")", "'", "s", "2nd", "son", "Kong", "Shao", "(", "孔昭", ")", "moved", "from", "China", "to", "Korea", "during", "the", "Goryeo", ",", "and", "was", "received", "courteously", "by", "Princess", "Noguk", "(", "the", "Mongolian-born", "wife", "of", "the", "future", "king", "Gongmin", ")", "."],
"ner": [
{"id-start": 7, "id-end": 9, "entity-type": "person"},
{"id-start": 12, "id-end": 13, "entity-type": "country"},
{"id-start": 17, "id-end": 18, "entity-type": "writer"},
{"id-start": 20, "id-end": 20, "entity-type": "writer"},
{"id-start": 26, "id-end": 27, "entity-type": "writer"},
{"id-start": 29, "id-end": 29, "entity-type": "writer"},
{"id-start": 33, "id-end": 33, "entity-type": "country"},
{"id-start": 35, "id-end": 35, "entity-type": "country"},
{"id-start": 38, "id-end": 38, "entity-type": "misc"},
{"id-start": 45, "id-end": 46, "entity-type": "person"},
{"id-start": 49, "id-end": 50, "entity-type": "misc"},
{"id-start": 55, "id-end": 55, "entity-type": "person"}
],
"relations": [
{"id_1-start": 7, "id_1-end": 9, "id_2-start": 12, "id_2-end": 13, "relation-type": "role", "Exp": "", "Un": False, "SA": False},
{"id_1-start": 7, "id_1-end": 9, "id_2-start": 12, "id_2-end": 13, "relation-type": "temporal", "Exp": "", "Un": False, "SA": False},
{"id_1-start": 17, "id_1-end": 18, "id_2-start": 26, "id_2-end": 27, "relation-type": "social", "Exp": "family", "Un": False, "SA": False},
{"id_1-start": 20, "id_1-end": 20, "id_2-start": 17, "id_2-end": 18, "relation-type": "named", "Exp": "", "Un": False, "SA": False},
{"id_1-start": 26, "id_1-end": 27, "id_2-start": 33, "id_2-end": 33, "relation-type": "physical", "Exp": "", "Un": False, "SA": False},
{"id_1-start": 26, "id_1-end": 27, "id_2-start": 35, "id_2-end": 35, "relation-type": "physical", "Exp": "", "Un": False, "SA": False},
{"id_1-start": 26, "id_1-end": 27, "id_2-start": 38, "id_2-end": 38, "relation-type": "temporal", "Exp": "", "Un": False, "SA": False},
{"id_1-start": 26, "id_1-end": 27, "id_2-start": 45, "id_2-end": 46, "relation-type": "social", "Exp": "greeted_by", "Un": False, "SA": False},
{"id_1-start": 29, "id_1-end": 29, "id_2-start": 26, "id_2-end": 27, "relation-type": "named", "Exp": "", "Un": False, "SA": False},
{"id_1-start": 45, "id_1-end": 46, "id_2-start": 55, "id_2-end": 55, "relation-type": "social", "Exp": "marriage", "Un": False, "SA": False},
{"id_1-start": 49, "id_1-end": 50, "id_2-start": 45, "id_2-end": 46, "relation-type": "named", "Exp": "", "Un": False, "SA": False}
]
}
```
#### ai
- **Size of downloaded dataset files:** 0.51 MB
- **Size of the generated dataset:** 0.46 MB
An example of 'train' looks as follows:
```python
{
"doc_key": "ai-train-1",
"sentence": ["Popular", "approaches", "of", "opinion-based", "recommender", "system", "utilize", "various", "techniques", "including", "text", "mining", ",", "information", "retrieval", ",", "sentiment", "analysis", "(", "see", "also", "Multimodal", "sentiment", "analysis", ")", "and", "deep", "learning", "X.Y.", "Feng", ",", "H.", "Zhang", ",", "Y.J.", "Ren", ",", "P.H.", "Shang", ",", "Y.", "Zhu", ",", "Y.C.", "Liang", ",", "R.C.", "Guan", ",", "D.", "Xu", ",", "(", "2019", ")", ",", ",", "21", "(", "5", ")", ":", "e12957", "."],
"ner": [
{"id-start": 3, "id-end": 5, "entity-type": "product"},
{"id-start": 10, "id-end": 11, "entity-type": "field"},
{"id-start": 13, "id-end": 14, "entity-type": "task"},
{"id-start": 16, "id-end": 17, "entity-type": "task"},
{"id-start": 21, "id-end": 23, "entity-type": "task"},
{"id-start": 26, "id-end": 27, "entity-type": "field"},
{"id-start": 28, "id-end": 29, "entity-type": "researcher"},
{"id-start": 31, "id-end": 32, "entity-type": "researcher"},
{"id-start": 34, "id-end": 35, "entity-type": "researcher"},
{"id-start": 37, "id-end": 38, "entity-type": "researcher"},
{"id-start": 40, "id-end": 41, "entity-type": "researcher"},
{"id-start": 43, "id-end": 44, "entity-type": "researcher"},
{"id-start": 46, "id-end": 47, "entity-type": "researcher"},
{"id-start": 49, "id-end": 50, "entity-type": "researcher"}
],
"relations": [
{"id_1-start": 3, "id_1-end": 5, "id_2-start": 10, "id_2-end": 11, "relation-type": "part-of", "Exp": "", "Un": False, "SA": False},
{"id_1-start": 3, "id_1-end": 5, "id_2-start": 10, "id_2-end": 11, "relation-type": "usage", "Exp": "", "Un": False, "SA": False},
{"id_1-start": 3, "id_1-end": 5, "id_2-start": 13, "id_2-end": 14, "relation-type": "part-of", "Exp": "", "Un": False, "SA": False},
{"id_1-start": 3, "id_1-end": 5, "id_2-start": 13, "id_2-end": 14, "relation-type": "usage", "Exp": "", "Un": False, "SA": False},
{"id_1-start": 3, "id_1-end": 5, "id_2-start": 16, "id_2-end": 17, "relation-type": "part-of", "Exp": "", "Un": False, "SA": False},
{"id_1-start": 3, "id_1-end": 5, "id_2-start": 16, "id_2-end": 17, "relation-type": "usage", "Exp": "", "Un": False, "SA": False},
{"id_1-start": 3, "id_1-end": 5, "id_2-start": 26, "id_2-end": 27, "relation-type": "part-of", "Exp": "", "Un": False, "SA": False},
{"id_1-start": 3, "id_1-end": 5, "id_2-start": 26, "id_2-end": 27, "relation-type": "usage", "Exp": "", "Un": False, "SA": False},
{"id_1-start": 21, "id_1-end": 23, "id_2-start": 16, "id_2-end": 17, "relation-type": "part-of", "Exp": "", "Un": False, "SA": False},
{"id_1-start": 21, "id_1-end": 23, "id_2-start": 16, "id_2-end": 17, "relation-type": "type-of", "Exp": "", "Un": False, "SA": False}
]
}
```
### Data Fields
The data fields are the same among all splits.
- `doc_key`: the instance id of this sentence, a `string` feature.
- `sentence`: the list of tokens of this sentence, obtained with spaCy, a `list` of `string` features.
- `ner`: the list of named entities in this sentence, a `list` of `dict` features.
- `id-start`: the start index of the entity, an `int` feature.
- `id-end`: the end index of the entity, an `int` feature.
- `entity-type`: the type of the entity, a `string` feature.
- `relations`: the list of relations in this sentence, a `list` of `dict` features.
- `id_1-start`: the start index of the first entity, an `int` feature.
- `id_1-end`: the end index of the first entity, an `int` feature.
- `id_2-start`: the start index of the second entity, an `int` feature.
- `id_2-end`: the end index of the second entity, an `int` feature.
- `relation-type`: the type of the relation, a `string` feature.
- `Exp`: the explanation of the assigned relation type, a `string` feature.
- `Un`: whether the annotator was uncertain about the relation, a `bool` feature.
- `SA`: whether syntactic ambiguity posed a challenge for the annotator, a `bool` feature.
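As a minimal sketch (not part of the official CrossRE tooling), the span indices in `ner` and `relations` can be resolved back to token text. Note that the end indices are inclusive, as the single-token `EU` entity (`id-start` 0, `id-end` 0) in the news example shows; the `example` dict below is a trimmed copy of that instance:

```python
# Resolve CrossRE relation spans back to token text.
# Span indices are token positions into `sentence`; end indices are inclusive.

def span_text(tokens, start, end):
    """Join the tokens covered by an inclusive [start, end] span."""
    return " ".join(tokens[start : end + 1])

def decode_relations(instance):
    """Turn each annotated relation into a (head, relation-type, tail) triple."""
    tokens = instance["sentence"]
    triples = []
    for rel in instance["relations"]:
        head = span_text(tokens, rel["id_1-start"], rel["id_1-end"])
        tail = span_text(tokens, rel["id_2-start"], rel["id_2-end"])
        triples.append((head, rel["relation-type"], tail))
    return triples

# Trimmed copy of the `news-train-1` instance shown above.
example = {
    "sentence": ["EU", "rejects", "German", "call", "to", "boycott", "British", "lamb", "."],
    "relations": [
        {"id_1-start": 0, "id_1-end": 0, "id_2-start": 2, "id_2-end": 3, "relation-type": "opposite"},
        {"id_1-start": 2, "id_1-end": 3, "id_2-start": 6, "id_2-end": 7, "relation-type": "topic"},
    ],
}
print(decode_relations(example))
# [('EU', 'opposite', 'German call'), ('German call', 'topic', 'British lamb')]
```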
### Data Splits
#### Sentences
| | Train | Dev | Test | Total |
|--------------|---------|---------|---------|---------|
| news | 164 | 350 | 400 | 914 |
| politics | 101 | 350 | 400 | 851 |
| science | 103 | 351 | 400 | 854 |
| music | 100 | 350 | 399 | 849 |
| literature | 100 | 400 | 416 | 916 |
| ai | 100 | 350 | 431 | 881 |
| ------------ | ------- | ------- | ------- | ------- |
| total        | 668     | 2,151   | 2,446   | 5,265   |
#### Relations
| | Train | Dev | Test | Total |
|--------------|---------|---------|---------|---------|
| news | 175 | 300 | 396 | 871 |
| politics | 502 | 1,616 | 1,831 | 3,949 |
| science | 355 | 1,340 | 1,393 | 3,088 |
| music | 496 | 1,861 | 2,333 | 4,690 |
| literature | 397 | 1,539 | 1,591 | 3,527 |
| ai | 350 | 1,006 | 1,127 | 2,483 |
| ------------ | ------- | ------- | ------- | ------- |
| total | 2,275 | 7,662 | 8,671 | 18,608 |
## Dataset Creation
### Curation Rationale
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### Who are the source language producers?
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Annotations
#### Annotation process
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### Who are the annotators?
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Personal and Sensitive Information
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Discussion of Biases
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Other Known Limitations
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Additional Information
### Dataset Curators
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Licensing Information
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Citation Information
```
@inproceedings{bassignana-plank-2022-crossre,
title = "Cross{RE}: A {C}ross-{D}omain {D}ataset for {R}elation {E}xtraction",
author = "Bassignana, Elisa and Plank, Barbara",
booktitle = "Findings of the Association for Computational Linguistics: EMNLP 2022",
year = "2022",
publisher = "Association for Computational Linguistics"
}
```
### Contributions
Thanks to [@phucdev](https://github.com/phucdev) for adding this dataset. |
theblackcat102/codex-math-qa | 2023-03-26T01:04:18.000Z | [
"task_categories:text2text-generation",
"task_categories:text-generation",
"language:en",
"license:other",
"codex-generated",
"code",
"mathematic",
"region:us"
] | theblackcat102 | Solution by codex-davinci-002 for math_qa | \ | null | 13 | 56 | ---
license: other
task_categories:
- text2text-generation
- text-generation
language:
- en
tags:
- codex-generated
- code
- mathematic
---
# Codex Math QA
Solve math_qa using code-davinci-002 via Python programming.
[Since OpenAI decided to shut off code-davinci-002 behind Azure](https://mobile.twitter.com/Veqtor/status/1638081493099597824), this dataset aims to share the generation results of code-davinci-002, OpenAI's 176B code generation model.
### Data Splits
| name |train|validation|test|
|-------|----:|---------:|---:|
|main|25065| 4133|2985|
|rational| - | 4151|2985|
1. Basic prompts: question only
```
dataset = load_dataset("theblackcat102/codex-math-qa", "main")
```
**Example :**
```
the average weight of 8 person ' s increases by 1.5 kg when a new person comes in place of one of them weighing 75 kg. what might be the weight of the new person ?
Write a short snippet of python function which solve this problem. No need to explain the answer, let the code output the answer.
```
output
```python
def average_weight(weights):
total_weight = sum(weights)
return total_weight / len(weights)
def new_person_weight(weights):
average_weight = average_weight(weights)
new_person_weight = average_weight + 1.5
return new_person_weight
weights = [75, 80, 85, 90, 95]
print(new_person_weight(weights))
```
2. Prompts with rationale: include the `Rationale` text, which should help with planning the program
```
dataset = load_dataset("theblackcat102/codex-math-qa", "rational")
```
**Example :**
```
one copy machine can make 20 copies a minute, and a second copy machine makes 10 copies a minute. if the two copiers work together, how long would it take them to make 900 copies ?
Rationale: "total work done by both machines in a minute = 20 + 10 = 30 copies total number of copies required = 900 time = 900 / 30 = 30 mins answer d"
Write a short snippet of python function which solve this problem within 30 lines of code. Add comments in the function block for explaination.
```
output
```python
def copy_machine(copies, copy_speed):
"""
:param copies: number of copies required
:param copy_speed: speed of the copy machine
:return: time in minutes
"""
total_work = sum(copy_speed)
time = copies / total_work
return time
if __name__ == "__main__":
print(copy_machine(900, [20, 10]))
```
### Notes:
The generated results are unvalidated and are taken as-is from the code-davinci-002 outputs, so a majority of the answers are incorrect or contain syntax errors. Validation is left for future work; the aim of this dataset is to provide a source or reference for code-based math answering by code-davinci-002.
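Since many generated snippets contain syntax errors, one hypothetical way to pre-filter them (not part of this dataset's tooling) is a quick `ast.parse` check on each snippet:

```python
import ast

def is_valid_python(snippet: str) -> bool:
    """Return True if the snippet parses as Python source, False on a SyntaxError."""
    try:
        ast.parse(snippet)
        return True
    except SyntaxError:
        return False

# A well-formed snippet parses; a malformed one does not.
print(is_valid_python("def f(x):\n    return x + 1\n"))  # True
print(is_valid_python("def f(x) return x"))              # False
```

This only catches syntax errors, not runtime errors or wrong answers, so it is a coarse first-pass filter at best.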
## Dataset Creation
The dataset was sourced from [math_qa](https://huggingface.co/datasets/math_qa), with a prompt appended at the end of each question to generate a Python solution for the answer. The aim is to provide a dataset for the kind of computation offloading seen in Galactica.
The generation config for code-davinci-002 is as follows:
| name | value|
|-------|----:|
|max_tokens| 2048 |
|temperature| 0.5 |
|top_p| 0.7 |
### Citation Information
```
@inproceedings{amini-etal-2019-mathqa,
title = "{M}ath{QA}: Towards Interpretable Math Word Problem Solving with Operation-Based Formalisms",
author = "Amini, Aida and
Gabriel, Saadia and
Lin, Shanchuan and
Koncel-Kedziorski, Rik and
Choi, Yejin and
Hajishirzi, Hannaneh",
booktitle = "Proceedings of the 2019 Conference of the North {A}merican Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers)",
month = jun,
year = "2019",
address = "Minneapolis, Minnesota",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/N19-1245",
doi = "10.18653/v1/N19-1245",
pages = "2357--2367",
}
``` |
arbml/tashkeelav2 | 2023-04-09T03:59:07.000Z | [
"region:us"
] | arbml | null | null | null | 2 | 56 | ---
dataset_info:
features:
- name: diacratized
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 801916784.3611724
num_examples: 522463
- name: test
num_bytes: 89102717.63882759
num_examples: 58052
download_size: 416908597
dataset_size: 891019502.0
---
# Dataset Card for "tashkeelav2"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
juletxara/xnli_mt | 2023-07-21T10:21:37.000Z | [
"language:en",
"region:us"
] | juletxara | XNLI is a subset of a few thousand examples from MNLI which has been translated
into 14 different languages (some low-ish resource). As with MNLI, the goal is
to predict textual entailment (does sentence A imply/contradict/neither sentence
B) and is a classification task (given two sentences, predict one of three
labels). | @InProceedings{conneau2018xnli,
author = {Conneau, Alexis
and Rinott, Ruty
and Lample, Guillaume
and Williams, Adina
and Bowman, Samuel R.
and Schwenk, Holger
and Stoyanov, Veselin},
title = {XNLI: Evaluating Cross-lingual Sentence Representations},
booktitle = {Proceedings of the 2018 Conference on Empirical Methods
in Natural Language Processing},
year = {2018},
publisher = {Association for Computational Linguistics},
location = {Brussels, Belgium},
} | null | 0 | 56 | ---
language:
- en
paperswithcode_id: xnli
pretty_name: Cross-lingual Natural Language Inference
dataset_info:
- config_name: nllb-200-distilled-600M
features:
- name: premise
dtype: string
- name: hypothesis
dtype: string
- name: label
dtype:
class_label:
names:
'0': entailment
'1': neutral
'2': contradiction
splits:
- name: ar
num_bytes: 851225
num_examples: 5010
- name: bg
num_bytes: 860275
num_examples: 5010
- name: de
num_bytes: 852016
num_examples: 5010
- name: el
num_bytes: 852043
num_examples: 5010
- name: es
num_bytes: 862194
num_examples: 5010
- name: fr
num_bytes: 861464
num_examples: 5010
- name: hi
num_bytes: 839337
num_examples: 5010
- name: ru
num_bytes: 860117
num_examples: 5010
- name: sw
num_bytes: 829257
num_examples: 5010
- name: th
num_bytes: 845834
num_examples: 5010
- name: tr
num_bytes: 840611
num_examples: 5010
- name: ur
num_bytes: 829009
num_examples: 5010
- name: vi
num_bytes: 846643
num_examples: 5010
- name: zh
num_bytes: 851646
num_examples: 5010
download_size: 11040341
dataset_size: 11881671
- config_name: nllb-200-distilled-1.3B
features:
- name: premise
dtype: string
- name: hypothesis
dtype: string
- name: label
dtype:
class_label:
names:
'0': entailment
'1': neutral
'2': contradiction
splits:
- name: ar
num_bytes: 851205
num_examples: 5010
- name: bg
num_bytes: 857938
num_examples: 5010
- name: de
num_bytes: 849800
num_examples: 5010
- name: el
num_bytes: 849820
num_examples: 5010
- name: es
num_bytes: 860984
num_examples: 5010
- name: fr
num_bytes: 862545
num_examples: 5010
- name: hi
num_bytes: 848151
num_examples: 5010
- name: ru
num_bytes: 858069
num_examples: 5010
- name: sw
num_bytes: 830347
num_examples: 5010
- name: th
num_bytes: 841814
num_examples: 5010
- name: tr
num_bytes: 840738
num_examples: 5010
- name: ur
num_bytes: 828996
num_examples: 5010
- name: vi
num_bytes: 848990
num_examples: 5010
- name: zh
num_bytes: 855461
num_examples: 5010
download_size: 11043528
dataset_size: 11884858
- config_name: nllb-200-1.3B
features:
- name: premise
dtype: string
- name: hypothesis
dtype: string
- name: label
dtype:
class_label:
names:
'0': entailment
'1': neutral
'2': contradiction
splits:
- name: ar
num_bytes: 855256
num_examples: 5010
- name: bg
num_bytes: 861195
num_examples: 5010
- name: de
num_bytes: 854679
num_examples: 5010
- name: el
num_bytes: 852766
num_examples: 5010
- name: es
num_bytes: 863689
num_examples: 5010
- name: fr
num_bytes: 868360
num_examples: 5010
- name: hi
num_bytes: 846414
num_examples: 5010
- name: ru
num_bytes: 865308
num_examples: 5010
- name: sw
num_bytes: 830998
num_examples: 5010
- name: th
num_bytes: 846171
num_examples: 5010
- name: tr
num_bytes: 845907
num_examples: 5010
- name: ur
num_bytes: 838279
num_examples: 5010
- name: vi
num_bytes: 848249
num_examples: 5010
- name: zh
num_bytes: 846116
num_examples: 5010
download_size: 11082057
dataset_size: 11923387
- config_name: nllb-200-3.3B
features:
- name: premise
dtype: string
- name: hypothesis
dtype: string
- name: label
dtype:
class_label:
names:
'0': entailment
'1': neutral
'2': contradiction
splits:
- name: ar
num_bytes: 863302
num_examples: 5010
- name: bg
num_bytes: 863677
num_examples: 5010
- name: de
num_bytes: 857147
num_examples: 5010
- name: el
num_bytes: 856383
num_examples: 5010
- name: es
num_bytes: 866137
num_examples: 5010
- name: fr
num_bytes: 871853
num_examples: 5010
- name: hi
num_bytes: 857305
num_examples: 5010
- name: ru
num_bytes: 869523
num_examples: 5010
- name: sw
num_bytes: 839567
num_examples: 5010
- name: th
num_bytes: 850312
num_examples: 5010
- name: tr
num_bytes: 851657
num_examples: 5010
- name: ur
num_bytes: 832903
num_examples: 5010
- name: vi
num_bytes: 856479
num_examples: 5010
- name: zh
num_bytes: 853093
num_examples: 5010
download_size: 11148008
dataset_size: 11989338
- config_name: xglm-564M
features:
- name: premise
dtype: string
- name: hypothesis
dtype: string
- name: label
dtype:
class_label:
names:
'0': entailment
'1': neutral
'2': contradiction
splits:
- name: ar
num_bytes: 789329
num_examples: 5010
- name: bg
num_bytes: 846003
num_examples: 5010
- name: de
num_bytes: 781577
num_examples: 5010
- name: el
num_bytes: 1069000
num_examples: 5010
- name: es
num_bytes: 852488
num_examples: 5010
- name: fr
num_bytes: 860951
num_examples: 5010
- name: hi
num_bytes: 849698
num_examples: 5010
- name: ru
num_bytes: 898706
num_examples: 5010
- name: sw
num_bytes: 842743
num_examples: 5010
- name: th
num_bytes: 1098847
num_examples: 5010
- name: tr
num_bytes: 788523
num_examples: 5010
- name: ur
num_bytes: 786383
num_examples: 5010
- name: vi
num_bytes: 827304
num_examples: 5010
- name: zh
num_bytes: 1083312
num_examples: 5010
download_size: 11533534
dataset_size: 12374864
- config_name: xglm-1.7B
features:
- name: premise
dtype: string
- name: hypothesis
dtype: string
- name: label
dtype:
class_label:
names:
'0': entailment
'1': neutral
'2': contradiction
splits:
- name: ar
num_bytes: 788487
num_examples: 5010
- name: bg
num_bytes: 863627
num_examples: 5010
- name: de
num_bytes: 824591
num_examples: 5010
- name: el
num_bytes: 870729
num_examples: 5010
- name: es
num_bytes: 856025
num_examples: 5010
- name: fr
num_bytes: 877381
num_examples: 5010
- name: hi
num_bytes: 973947
num_examples: 5010
- name: ru
num_bytes: 840252
num_examples: 5010
- name: sw
num_bytes: 784472
num_examples: 5010
- name: th
num_bytes: 821323
num_examples: 5010
- name: tr
num_bytes: 747863
num_examples: 5010
- name: ur
num_bytes: 855280
num_examples: 5010
- name: vi
num_bytes: 807745
num_examples: 5010
- name: zh
num_bytes: 801384
num_examples: 5010
download_size: 10871776
dataset_size: 11713106
- config_name: xglm-2.9B
features:
- name: premise
dtype: string
- name: hypothesis
dtype: string
- name: label
dtype:
class_label:
names:
'0': entailment
'1': neutral
'2': contradiction
splits:
- name: ar
num_bytes: 791983
num_examples: 5010
- name: bg
num_bytes: 856898
num_examples: 5010
- name: de
num_bytes: 833316
num_examples: 5010
- name: el
num_bytes: 859152
num_examples: 5010
- name: es
num_bytes: 875232
num_examples: 5010
- name: fr
num_bytes: 880335
num_examples: 5010
- name: hi
num_bytes: 754460
num_examples: 5010
- name: ru
num_bytes: 839486
num_examples: 5010
- name: sw
num_bytes: 807832
num_examples: 5010
- name: th
num_bytes: 792237
num_examples: 5010
- name: tr
num_bytes: 744151
num_examples: 5010
- name: ur
num_bytes: 763715
num_examples: 5010
- name: vi
num_bytes: 825575
num_examples: 5010
- name: zh
num_bytes: 803580
num_examples: 5010
download_size: 10586622
dataset_size: 11427952
- config_name: xglm-4.5B
features:
- name: premise
dtype: string
- name: hypothesis
dtype: string
- name: label
dtype:
class_label:
names:
'0': entailment
'1': neutral
'2': contradiction
splits:
- name: ar
num_bytes: 825461
num_examples: 5010
- name: bg
num_bytes: 861124
num_examples: 5010
- name: de
num_bytes: 847007
num_examples: 5010
- name: el
num_bytes: 875762
num_examples: 5010
- name: es
num_bytes: 871840
num_examples: 5010
- name: fr
num_bytes: 882720
num_examples: 5010
- name: hi
num_bytes: 826770
num_examples: 5010
- name: ru
num_bytes: 865706
num_examples: 5010
- name: sw
num_bytes: 807688
num_examples: 5010
- name: th
num_bytes: 827077
num_examples: 5010
- name: tr
num_bytes: 836039
num_examples: 5010
- name: ur
num_bytes: 799881
num_examples: 5010
- name: vi
num_bytes: 846648
num_examples: 5010
- name: zh
num_bytes: 836279
num_examples: 5010
download_size: 10968672
dataset_size: 11810002
- config_name: xglm-7.5B
features:
- name: premise
dtype: string
- name: hypothesis
dtype: string
- name: label
dtype:
class_label:
names:
'0': entailment
'1': neutral
'2': contradiction
splits:
- name: ar
num_bytes: 818748
num_examples: 5010
- name: bg
num_bytes: 853616
num_examples: 5010
- name: de
num_bytes: 833462
num_examples: 5010
- name: el
num_bytes: 860997
num_examples: 5010
- name: es
num_bytes: 855814
num_examples: 5010
- name: fr
num_bytes: 859597
num_examples: 5010
- name: hi
num_bytes: 788540
num_examples: 5010
- name: ru
num_bytes: 846308
num_examples: 5010
- name: sw
num_bytes: 813638
num_examples: 5010
- name: th
num_bytes: 793438
num_examples: 5010
- name: tr
num_bytes: 753138
num_examples: 5010
- name: ur
num_bytes: 811513
num_examples: 5010
- name: vi
num_bytes: 829040
num_examples: 5010
- name: zh
num_bytes: 823480
num_examples: 5010
download_size: 10699999
dataset_size: 11541329
- config_name: bloom-560m
features:
- name: premise
dtype: string
- name: hypothesis
dtype: string
- name: label
dtype:
class_label:
names:
'0': entailment
'1': neutral
'2': contradiction
splits:
- name: ar
num_bytes: 793192
num_examples: 5010
- name: bg
num_bytes: 1293032
num_examples: 5026
- name: de
num_bytes: 853267
num_examples: 5011
- name: el
num_bytes: 853650
num_examples: 5028
- name: es
num_bytes: 790401
num_examples: 5019
- name: fr
num_bytes: 785706
num_examples: 5022
- name: hi
num_bytes: 815413
num_examples: 5020
- name: ru
num_bytes: 1119100
num_examples: 5035
- name: sw
num_bytes: 1283629
num_examples: 5010
- name: th
num_bytes: 1927388
num_examples: 5010
- name: tr
num_bytes: 1136397
num_examples: 5010
- name: ur
num_bytes: 806534
num_examples: 5050
- name: vi
num_bytes: 810195
num_examples: 5033
- name: zh
num_bytes: 895087
num_examples: 5013
download_size: 13312268
dataset_size: 14162991
- config_name: bloom-1b1
features:
- name: premise
dtype: string
- name: hypothesis
dtype: string
- name: label
dtype:
class_label:
names:
'0': entailment
'1': neutral
'2': contradiction
splits:
- name: ar
num_bytes: 772035
num_examples: 5010
- name: bg
num_bytes: 838287
num_examples: 5010
- name: de
num_bytes: 816688
num_examples: 5010
- name: el
num_bytes: 757902
num_examples: 5010
- name: es
num_bytes: 811192
num_examples: 5010
- name: fr
num_bytes: 823552
num_examples: 5010
- name: hi
num_bytes: 755051
num_examples: 5010
- name: ru
num_bytes: 802154
num_examples: 5010
- name: sw
num_bytes: 769220
num_examples: 5010
- name: th
num_bytes: 855265
num_examples: 5010
- name: tr
num_bytes: 1009235
num_examples: 5010
- name: ur
num_bytes: 784984
num_examples: 5010
- name: vi
num_bytes: 798443
num_examples: 5010
- name: zh
num_bytes: 795561
num_examples: 5010
download_size: 10548239
dataset_size: 11389569
- config_name: bloom-1b7
features:
- name: premise
dtype: string
- name: hypothesis
dtype: string
- name: label
dtype:
class_label:
names:
'0': entailment
'1': neutral
'2': contradiction
splits:
- name: ar
num_bytes: 817013
num_examples: 5010
- name: bg
num_bytes: 803575
num_examples: 5010
- name: de
num_bytes: 811977
num_examples: 5010
- name: el
num_bytes: 768757
num_examples: 5010
- name: es
num_bytes: 834218
num_examples: 5010
- name: fr
num_bytes: 844544
num_examples: 5010
- name: hi
num_bytes: 780516
num_examples: 5010
- name: ru
num_bytes: 856927
num_examples: 5010
- name: sw
num_bytes: 745814
num_examples: 5010
- name: th
num_bytes: 930774
num_examples: 5010
- name: tr
num_bytes: 871417
num_examples: 5010
- name: ur
num_bytes: 751069
num_examples: 5010
- name: vi
num_bytes: 814194
num_examples: 5010
- name: zh
num_bytes: 790631
num_examples: 5010
download_size: 10580096
dataset_size: 11421426
- config_name: bloom-3b
features:
- name: premise
dtype: string
- name: hypothesis
dtype: string
- name: label
dtype:
class_label:
names:
'0': entailment
'1': neutral
'2': contradiction
splits:
- name: ar
num_bytes: 819238
num_examples: 5010
- name: bg
num_bytes: 822686
num_examples: 5010
- name: de
num_bytes: 850318
num_examples: 5010
- name: el
num_bytes: 809037
num_examples: 5010
- name: es
num_bytes: 850349
num_examples: 5010
- name: fr
num_bytes: 855581
num_examples: 5010
- name: hi
num_bytes: 797905
num_examples: 5010
- name: ru
num_bytes: 861096
num_examples: 5010
- name: sw
num_bytes: 767209
num_examples: 5010
- name: th
num_bytes: 820321
num_examples: 5010
- name: tr
num_bytes: 881668
num_examples: 5010
- name: ur
num_bytes: 810843
num_examples: 5010
- name: vi
num_bytes: 828926
num_examples: 5010
- name: zh
num_bytes: 793476
num_examples: 5010
download_size: 10727323
dataset_size: 11568653
- config_name: bloom-7b1
features:
- name: premise
dtype: string
- name: hypothesis
dtype: string
- name: label
dtype:
class_label:
names:
'0': entailment
'1': neutral
'2': contradiction
splits:
- name: ar
num_bytes: 834767
num_examples: 5010
- name: bg
num_bytes: 848921
num_examples: 5010
- name: de
num_bytes: 827646
num_examples: 5010
- name: el
num_bytes: 886001
num_examples: 5010
- name: es
num_bytes: 859775
num_examples: 5010
- name: fr
num_bytes: 863548
num_examples: 5010
- name: hi
num_bytes: 814484
num_examples: 5010
- name: ru
num_bytes: 860392
num_examples: 5010
- name: sw
num_bytes: 811380
num_examples: 5010
- name: th
num_bytes: 775738
num_examples: 5010
- name: tr
num_bytes: 747961
num_examples: 5010
- name: ur
num_bytes: 836727
num_examples: 5010
- name: vi
num_bytes: 836042
num_examples: 5010
- name: zh
num_bytes: 814866
num_examples: 5010
download_size: 10776918
dataset_size: 11618248
- config_name: llama-7B
features:
- name: premise
dtype: string
- name: hypothesis
dtype: string
- name: label
dtype:
class_label:
names:
'0': entailment
'1': neutral
'2': contradiction
splits:
- name: ar
num_bytes: 792437
num_examples: 5010
- name: bg
num_bytes: 855365
num_examples: 5010
- name: de
num_bytes: 844453
num_examples: 5010
- name: el
num_bytes: 864748
num_examples: 5010
- name: es
num_bytes: 871358
num_examples: 5010
- name: fr
num_bytes: 882671
num_examples: 5010
- name: hi
num_bytes: 791631
num_examples: 5010
- name: ru
num_bytes: 853745
num_examples: 5010
- name: sw
num_bytes: 753655
num_examples: 5010
- name: th
num_bytes: 787365
num_examples: 5010
- name: tr
num_bytes: 814193
num_examples: 5010
- name: ur
num_bytes: 811987
num_examples: 5010
- name: vi
num_bytes: 807334
num_examples: 5010
- name: zh
num_bytes: 841441
num_examples: 5010
download_size: 10731053
dataset_size: 11572383
- config_name: llama-13B
features:
- name: premise
dtype: string
- name: hypothesis
dtype: string
- name: label
dtype:
class_label:
names:
'0': entailment
'1': neutral
'2': contradiction
splits:
- name: ar
num_bytes: 833799
num_examples: 5010
- name: bg
num_bytes: 850755
num_examples: 5010
- name: de
num_bytes: 842498
num_examples: 5010
- name: el
num_bytes: 853859
num_examples: 5010
- name: es
num_bytes: 865884
num_examples: 5010
- name: fr
num_bytes: 872326
num_examples: 5010
- name: hi
num_bytes: 803350
num_examples: 5010
- name: ru
num_bytes: 850066
num_examples: 5010
- name: sw
num_bytes: 785595
num_examples: 5010
- name: th
num_bytes: 794461
num_examples: 5010
- name: tr
num_bytes: 789769
num_examples: 5010
- name: ur
num_bytes: 813459
num_examples: 5010
- name: vi
num_bytes: 783219
num_examples: 5010
- name: zh
num_bytes: 828885
num_examples: 5010
download_size: 10726595
dataset_size: 11567925
- config_name: RedPajama-INCITE-Base-3B-v1
features:
- name: premise
dtype: string
- name: hypothesis
dtype: string
- name: label
dtype:
class_label:
names:
'0': entailment
'1': neutral
'2': contradiction
splits:
- name: ar
num_bytes: 815395
num_examples: 5010
- name: bg
num_bytes: 870568
num_examples: 5010
- name: de
num_bytes: 830593
num_examples: 5010
- name: el
num_bytes: 887938
num_examples: 5010
- name: es
num_bytes: 866523
num_examples: 5010
- name: fr
num_bytes: 880668
num_examples: 5010
- name: hi
num_bytes: 871126
num_examples: 5010
- name: ru
num_bytes: 875379
num_examples: 5010
- name: sw
num_bytes: 775459
num_examples: 5010
- name: th
num_bytes: 829562
num_examples: 5010
- name: tr
num_bytes: 813161
num_examples: 5010
- name: ur
num_bytes: 812296
num_examples: 5010
- name: vi
num_bytes: 824340
num_examples: 5010
- name: zh
num_bytes: 892427
num_examples: 5010
download_size: 11004105
dataset_size: 11845435
- config_name: RedPajama-INCITE-7B-Base
features:
- name: premise
dtype: string
- name: hypothesis
dtype: string
- name: label
dtype:
class_label:
names:
'0': entailment
'1': neutral
'2': contradiction
splits:
- name: ar
num_bytes: 789074
num_examples: 5010
- name: bg
num_bytes: 870916
num_examples: 5010
- name: de
num_bytes: 845436
num_examples: 5010
- name: el
num_bytes: 850780
num_examples: 5010
- name: es
num_bytes: 875677
num_examples: 5010
- name: fr
num_bytes: 880989
num_examples: 5010
- name: hi
num_bytes: 751526
num_examples: 5010
- name: ru
num_bytes: 881090
num_examples: 5010
- name: sw
num_bytes: 746100
num_examples: 5010
- name: th
num_bytes: 685496
num_examples: 5010
- name: tr
num_bytes: 770359
num_examples: 5010
- name: ur
num_bytes: 708810
num_examples: 5010
- name: vi
num_bytes: 735197
num_examples: 5010
- name: zh
num_bytes: 848461
num_examples: 5010
download_size: 10398581
dataset_size: 11239911
- config_name: llama-30B
features:
- name: premise
dtype: string
- name: hypothesis
dtype: string
- name: label
dtype:
class_label:
names:
'0': entailment
'1': neutral
'2': contradiction
splits:
- name: ar
num_bytes: 860301
num_examples: 5010
- name: bg
num_bytes: 863946
num_examples: 5010
- name: de
num_bytes: 858009
num_examples: 5010
- name: el
num_bytes: 874347
num_examples: 5010
- name: es
num_bytes: 875007
num_examples: 5010
- name: fr
num_bytes: 884764
num_examples: 5010
- name: hi
num_bytes: 846950
num_examples: 5010
- name: ru
num_bytes: 869708
num_examples: 5010
- name: sw
num_bytes: 857197
num_examples: 5010
- name: th
num_bytes: 847402
num_examples: 5010
- name: tr
num_bytes: 825879
num_examples: 5010
- name: ur
num_bytes: 860074
num_examples: 5010
- name: vi
num_bytes: 862456
num_examples: 5010
- name: zh
num_bytes: 849263
num_examples: 5010
download_size: 11193973
dataset_size: 12035303
- config_name: open_llama_3b
features:
- name: premise
dtype: string
- name: hypothesis
dtype: string
- name: label
dtype:
class_label:
names:
'0': entailment
'1': neutral
'2': contradiction
splits:
- name: ar
num_bytes: 705142
num_examples: 5010
- name: bg
num_bytes: 875604
num_examples: 5010
- name: de
num_bytes: 851525
num_examples: 5010
- name: el
num_bytes: 739635
num_examples: 5010
- name: es
num_bytes: 866291
num_examples: 5010
- name: fr
num_bytes: 880556
num_examples: 5010
- name: hi
num_bytes: 392659
num_examples: 5010
- name: ru
num_bytes: 876933
num_examples: 5010
- name: sw
num_bytes: 738299
num_examples: 5010
- name: th
num_bytes: 1273724
num_examples: 5010
- name: tr
num_bytes: 769184
num_examples: 5010
- name: ur
num_bytes: 739162
num_examples: 5010
- name: vi
num_bytes: 701661
num_examples: 5010
- name: zh
num_bytes: 878129
num_examples: 5010
download_size: 10447174
dataset_size: 11288504
- config_name: open_llama_7b
features:
- name: premise
dtype: string
- name: hypothesis
dtype: string
- name: label
dtype:
class_label:
names:
'0': entailment
'1': neutral
'2': contradiction
splits:
- name: ar
num_bytes: 765568
num_examples: 5010
- name: bg
num_bytes: 860978
num_examples: 5010
- name: de
num_bytes: 839878
num_examples: 5010
- name: el
num_bytes: 790038
num_examples: 5010
- name: es
num_bytes: 862624
num_examples: 5010
- name: fr
num_bytes: 871243
num_examples: 5010
- name: hi
num_bytes: 328421
num_examples: 5010
- name: ru
num_bytes: 867424
num_examples: 5010
- name: sw
num_bytes: 784318
num_examples: 5010
- name: th
num_bytes: 1133537
num_examples: 5010
- name: tr
num_bytes: 770420
num_examples: 5010
- name: ur
num_bytes: 739842
num_examples: 5010
- name: vi
num_bytes: 767095
num_examples: 5010
- name: zh
num_bytes: 840369
num_examples: 5010
download_size: 10380425
dataset_size: 11221755
- config_name: open_llama_13b
features:
- name: premise
dtype: string
- name: hypothesis
dtype: string
- name: label
dtype:
class_label:
names:
'0': entailment
'1': neutral
'2': contradiction
splits:
- name: ar
num_bytes: 855506
num_examples: 5010
- name: bg
num_bytes: 860868
num_examples: 5010
- name: de
num_bytes: 845896
num_examples: 5010
- name: el
num_bytes: 789495
num_examples: 5010
- name: es
num_bytes: 874595
num_examples: 5010
- name: fr
num_bytes: 883531
num_examples: 5010
- name: hi
num_bytes: 349430
num_examples: 5010
- name: ru
num_bytes: 860441
num_examples: 5010
- name: sw
num_bytes: 819611
num_examples: 5010
- name: th
num_bytes: 1249012
num_examples: 5010
- name: tr
num_bytes: 813974
num_examples: 5010
- name: ur
num_bytes: 775914
num_examples: 5010
- name: vi
num_bytes: 826589
num_examples: 5010
- name: zh
num_bytes: 828483
num_examples: 5010
download_size: 10792015
dataset_size: 11633345
- config_name: xgen-7b-4k-base
features:
- name: premise
dtype: string
- name: hypothesis
dtype: string
- name: label
dtype:
class_label:
names:
'0': entailment
'1': neutral
'2': contradiction
splits:
- name: ar
num_bytes: 815916
num_examples: 5010
- name: bg
num_bytes: 866698
num_examples: 5010
- name: de
num_bytes: 845296
num_examples: 5010
- name: el
num_bytes: 873279
num_examples: 5010
- name: es
num_bytes: 867614
num_examples: 5010
- name: fr
num_bytes: 878177
num_examples: 5010
- name: hi
num_bytes: 795679
num_examples: 5010
- name: ru
num_bytes: 870241
num_examples: 5010
- name: sw
num_bytes: 815925
num_examples: 5010
- name: th
num_bytes: 680865
num_examples: 5010
- name: tr
num_bytes: 808508
num_examples: 5010
- name: ur
num_bytes: 755658
num_examples: 5010
- name: vi
num_bytes: 798616
num_examples: 5010
- name: zh
num_bytes: 839810
num_examples: 5010
download_size: 10670952
dataset_size: 11512282
- config_name: xgen-7b-8k-base
features:
- name: premise
dtype: string
- name: hypothesis
dtype: string
- name: label
dtype:
class_label:
names:
'0': entailment
'1': neutral
'2': contradiction
splits:
- name: ar
num_bytes: 822039
num_examples: 5010
- name: bg
num_bytes: 866105
num_examples: 5010
- name: de
num_bytes: 834487
num_examples: 5010
- name: el
num_bytes: 871714
num_examples: 5010
- name: es
num_bytes: 863765
num_examples: 5010
- name: fr
num_bytes: 874570
num_examples: 5010
- name: hi
num_bytes: 811916
num_examples: 5010
- name: ru
num_bytes: 863980
num_examples: 5010
- name: sw
num_bytes: 801837
num_examples: 5010
- name: th
num_bytes: 773394
num_examples: 5010
- name: tr
num_bytes: 812359
num_examples: 5010
- name: ur
num_bytes: 762615
num_examples: 5010
- name: vi
num_bytes: 845558
num_examples: 5010
- name: zh
num_bytes: 840984
num_examples: 5010
download_size: 10803993
dataset_size: 11645323
- config_name: xgen-7b-8k-inst
features:
- name: premise
dtype: string
- name: hypothesis
dtype: string
- name: label
dtype:
class_label:
names:
'0': entailment
'1': neutral
'2': contradiction
splits:
- name: ar
num_bytes: 852293
num_examples: 5010
- name: bg
num_bytes: 877290
num_examples: 5010
- name: de
num_bytes: 843890
num_examples: 5010
- name: el
num_bytes: 900388
num_examples: 5010
- name: es
num_bytes: 871938
num_examples: 5010
- name: fr
num_bytes: 883776
num_examples: 5010
- name: hi
num_bytes: 819611
num_examples: 5010
- name: ru
num_bytes: 871868
num_examples: 5010
- name: sw
num_bytes: 903297
num_examples: 5010
- name: th
num_bytes: 781456
num_examples: 5010
- name: tr
num_bytes: 888386
num_examples: 5010
- name: ur
num_bytes: 835512
num_examples: 5010
- name: vi
num_bytes: 881933
num_examples: 5010
- name: zh
num_bytes: 886819
num_examples: 5010
download_size: 11257127
dataset_size: 12098457
- config_name: open_llama_7b_v2
features:
- name: premise
dtype: string
- name: hypothesis
dtype: string
- name: label
dtype:
class_label:
names:
'0': entailment
'1': neutral
'2': contradiction
splits:
- name: ar
num_bytes: 799618
num_examples: 5010
- name: bg
num_bytes: 864517
num_examples: 5010
- name: de
num_bytes: 844605
num_examples: 5010
- name: el
num_bytes: 867881
num_examples: 5010
- name: es
num_bytes: 872871
num_examples: 5010
- name: fr
num_bytes: 883623
num_examples: 5010
- name: hi
num_bytes: 821085
num_examples: 5010
- name: ru
num_bytes: 875313
num_examples: 5010
- name: sw
num_bytes: 810855
num_examples: 5010
- name: th
num_bytes: 756931
num_examples: 5010
- name: tr
num_bytes: 832938
num_examples: 5010
- name: ur
num_bytes: 776355
num_examples: 5010
- name: vi
num_bytes: 841205
num_examples: 5010
- name: zh
num_bytes: 836994
num_examples: 5010
download_size: 10843461
dataset_size: 11684791
- config_name: polylm-1.7b
features:
- name: premise
dtype: string
- name: hypothesis
dtype: string
- name: label
dtype:
class_label:
names:
'0': entailment
'1': neutral
'2': contradiction
splits:
- name: ar
num_bytes: 840312
num_examples: 5010
- name: bg
num_bytes: 766907
num_examples: 5010
- name: de
num_bytes: 846775
num_examples: 5010
- name: el
num_bytes: 985392
num_examples: 5010
- name: es
num_bytes: 850661
num_examples: 5010
- name: fr
num_bytes: 872488
num_examples: 5010
- name: hi
num_bytes: 947295
num_examples: 5010
- name: ru
num_bytes: 823812
num_examples: 5010
- name: sw
num_bytes: 639344
num_examples: 5010
- name: th
num_bytes: 873714
num_examples: 5010
- name: tr
num_bytes: 882916
num_examples: 5010
- name: ur
num_bytes: 707398
num_examples: 5010
- name: vi
num_bytes: 837592
num_examples: 5010
- name: zh
num_bytes: 811983
num_examples: 5010
download_size: 10845259
dataset_size: 11686589
- config_name: polylm-13b
features:
- name: premise
dtype: string
- name: hypothesis
dtype: string
- name: label
dtype:
class_label:
names:
'0': entailment
'1': neutral
'2': contradiction
splits:
- name: ar
num_bytes: 856622
num_examples: 5010
- name: bg
num_bytes: 872936
num_examples: 5010
- name: de
num_bytes: 853814
num_examples: 5010
- name: el
num_bytes: 792171
num_examples: 5010
- name: es
num_bytes: 867823
num_examples: 5010
- name: fr
num_bytes: 876800
num_examples: 5010
- name: hi
num_bytes: 825863
num_examples: 5010
- name: ru
num_bytes: 876390
num_examples: 5010
- name: sw
num_bytes: 659651
num_examples: 5010
- name: th
num_bytes: 848574
num_examples: 5010
- name: tr
num_bytes: 801914
num_examples: 5010
- name: ur
num_bytes: 750495
num_examples: 5010
- name: vi
num_bytes: 847699
num_examples: 5010
- name: zh
num_bytes: 823542
num_examples: 5010
download_size: 10712964
dataset_size: 11554294
- config_name: polylm-multialpaca-13b
features:
- name: premise
dtype: string
- name: hypothesis
dtype: string
- name: label
dtype:
class_label:
names:
'0': entailment
'1': neutral
'2': contradiction
splits:
- name: ar
num_bytes: 832229
num_examples: 5010
- name: bg
num_bytes: 873130
num_examples: 5010
- name: de
num_bytes: 846302
num_examples: 5010
- name: el
num_bytes: 846617
num_examples: 5010
- name: es
num_bytes: 861183
num_examples: 5010
- name: fr
num_bytes: 863929
num_examples: 5010
- name: hi
num_bytes: 938018
num_examples: 5010
- name: ru
num_bytes: 866081
num_examples: 5010
- name: sw
num_bytes: 802054
num_examples: 5010
- name: th
num_bytes: 836126
num_examples: 5010
- name: tr
num_bytes: 799768
num_examples: 5010
- name: ur
num_bytes: 909124
num_examples: 5010
- name: vi
num_bytes: 842588
num_examples: 5010
- name: zh
num_bytes: 823529
num_examples: 5010
download_size: 11099348
dataset_size: 11940678
- config_name: open_llama_3b_v2
features:
- name: premise
dtype: string
- name: hypothesis
dtype: string
- name: label
dtype:
class_label:
names:
'0': entailment
'1': neutral
'2': contradiction
splits:
- name: ar
num_bytes: 692849
num_examples: 5010
- name: bg
num_bytes: 852675
num_examples: 5010
- name: de
num_bytes: 835619
num_examples: 5010
- name: el
num_bytes: 834201
num_examples: 5010
- name: es
num_bytes: 873160
num_examples: 5010
- name: fr
num_bytes: 881098
num_examples: 5010
- name: hi
num_bytes: 726395
num_examples: 5010
- name: ru
num_bytes: 853657
num_examples: 5010
- name: sw
num_bytes: 690930
num_examples: 5010
- name: th
num_bytes: 724712
num_examples: 5010
- name: tr
num_bytes: 755625
num_examples: 5010
- name: ur
num_bytes: 753648
num_examples: 5010
- name: vi
num_bytes: 795981
num_examples: 5010
- name: zh
num_bytes: 844200
num_examples: 5010
download_size: 10273420
dataset_size: 11114750
- config_name: Llama-2-7b-hf
features:
- name: premise
dtype: string
- name: hypothesis
dtype: string
- name: label
dtype:
class_label:
names:
'0': entailment
'1': neutral
'2': contradiction
splits:
- name: ar
num_bytes: 833964
num_examples: 5010
- name: bg
num_bytes: 867408
num_examples: 5010
- name: de
num_bytes: 852305
num_examples: 5010
- name: el
num_bytes: 859363
num_examples: 5010
- name: es
num_bytes: 880162
num_examples: 5010
- name: fr
num_bytes: 886400
num_examples: 5010
- name: hi
num_bytes: 802665
num_examples: 5010
- name: ru
num_bytes: 868568
num_examples: 5010
- name: sw
num_bytes: 775118
num_examples: 5010
- name: th
num_bytes: 774722
num_examples: 5010
- name: tr
num_bytes: 810268
num_examples: 5010
- name: ur
num_bytes: 786428
num_examples: 5010
- name: vi
num_bytes: 841904
num_examples: 5010
- name: zh
num_bytes: 837126
num_examples: 5010
download_size: 10835071
dataset_size: 11676401
- config_name: Llama-2-13b-hf
features:
- name: premise
dtype: string
- name: hypothesis
dtype: string
- name: label
dtype:
class_label:
names:
'0': entailment
'1': neutral
'2': contradiction
splits:
- name: ar
num_bytes: 838926
num_examples: 5010
- name: bg
num_bytes: 864619
num_examples: 5010
- name: de
num_bytes: 847106
num_examples: 5010
- name: el
num_bytes: 858400
num_examples: 5010
- name: es
num_bytes: 873274
num_examples: 5010
- name: fr
num_bytes: 878414
num_examples: 5010
- name: hi
num_bytes: 819446
num_examples: 5010
- name: ru
num_bytes: 864307
num_examples: 5010
- name: sw
num_bytes: 821998
num_examples: 5010
- name: th
num_bytes: 812673
num_examples: 5010
- name: tr
num_bytes: 812102
num_examples: 5010
- name: ur
num_bytes: 831111
num_examples: 5010
- name: vi
num_bytes: 838971
num_examples: 5010
- name: zh
num_bytes: 835539
num_examples: 5010
download_size: 10955556
dataset_size: 11796886
- config_name: Llama-2-7b-chat-hf
features:
- name: premise
dtype: string
- name: hypothesis
dtype: string
- name: label
dtype:
class_label:
names:
'0': entailment
'1': neutral
'2': contradiction
splits:
- name: ar
num_bytes: 948578
num_examples: 5010
- name: bg
num_bytes: 776309
num_examples: 5010
- name: de
num_bytes: 725534
num_examples: 5010
- name: el
num_bytes: 956805
num_examples: 5010
- name: es
num_bytes: 631915
num_examples: 5010
- name: fr
num_bytes: 534372
num_examples: 5010
- name: hi
num_bytes: 960220
num_examples: 5010
- name: ru
num_bytes: 535448
num_examples: 5010
- name: sw
num_bytes: 1001740
num_examples: 5010
- name: th
num_bytes: 995206
num_examples: 5010
- name: tr
num_bytes: 865992
num_examples: 5010
- name: ur
num_bytes: 864017
num_examples: 5010
- name: vi
num_bytes: 246890
num_examples: 5010
- name: zh
num_bytes: 538232
num_examples: 5010
download_size: 9739928
dataset_size: 10581258
- config_name: Llama-2-13b-chat-hf
features:
- name: premise
dtype: string
- name: hypothesis
dtype: string
- name: label
dtype:
class_label:
names:
'0': entailment
'1': neutral
'2': contradiction
splits:
- name: ar
num_bytes: 932439
num_examples: 5010
- name: bg
num_bytes: 877857
num_examples: 5010
- name: de
num_bytes: 859893
num_examples: 5010
- name: el
num_bytes: 910487
num_examples: 5010
- name: es
num_bytes: 872553
num_examples: 5010
- name: fr
num_bytes: 879291
num_examples: 5010
- name: hi
num_bytes: 987002
num_examples: 5010
- name: ru
num_bytes: 887918
num_examples: 5010
- name: sw
num_bytes: 1021074
num_examples: 5010
- name: th
num_bytes: 1054387
num_examples: 5010
- name: tr
num_bytes: 900761
num_examples: 5010
- name: ur
num_bytes: 1099374
num_examples: 5010
- name: vi
num_bytes: 884472
num_examples: 5010
- name: zh
num_bytes: 882394
num_examples: 5010
download_size: 12208572
dataset_size: 13049902
---
# Dataset Card for "xnli"
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [https://www.nyu.edu/projects/bowman/xnli/](https://www.nyu.edu/projects/bowman/xnli/)
- **Repository:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Paper:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Point of Contact:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Size of downloaded dataset files:** 7.74 GB
- **Size of the generated dataset:** 3.23 GB
- **Total amount of disk used:** 10.97 GB
### Dataset Summary
XNLI is a subset of a few thousand examples from MNLI which has been translated
into 14 different languages (some of them low-resource). As with MNLI, the goal
is to predict textual entailment (does sentence A imply, contradict, or neither
imply nor contradict sentence B), framed as a classification task: given two
sentences, predict one of three labels.
### Supported Tasks and Leaderboards
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Languages
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Dataset Structure
### Data Instances
#### all_languages
- **Size of downloaded dataset files:** 483.96 MB
- **Size of the generated dataset:** 1.61 GB
- **Total amount of disk used:** 2.09 GB
An example of 'train' looks as follows.
```
This example was too long and was cropped:
{
"hypothesis": "{\"language\": [\"ar\", \"bg\", \"de\", \"el\", \"en\", \"es\", \"fr\", \"hi\", \"ru\", \"sw\", \"th\", \"tr\", \"ur\", \"vi\", \"zh\"], \"translation\": [\"احد اع...",
"label": 0,
"premise": "{\"ar\": \"واحدة من رقابنا ستقوم بتنفيذ تعليماتك كلها بكل دقة\", \"bg\": \"един от нашите номера ще ви даде инструкции .\", \"de\": \"Eine ..."
}
```
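In the raw serialized form shown above, `premise` is a JSON string mapping language codes directly to translations, while `hypothesis` uses parallel `language`/`translation` lists. A minimal sketch of unpacking one such record into per-language sentence pairs (the record below is made up for illustration; when loading through the `datasets` library these fields are typically already decoded for you):

```python
import json

# Hypothetical record in the style of the 'all_languages' config shown above.
record = {
    "premise": json.dumps({
        "en": "One of our number will carry out your instructions.",
        "de": "Eine von uns wird Ihre Anweisungen ausführen.",
    }),
    "hypothesis": json.dumps({
        "language": ["en", "de"],
        "translation": [
            "A member of my team will execute your orders.",
            "Ein Mitglied meines Teams wird Ihre Befehle ausführen.",
        ],
    }),
    "label": 0,
}

premise = json.loads(record["premise"])  # {lang_code: text}
hyp = json.loads(record["hypothesis"])   # parallel language/translation lists
hypothesis = dict(zip(hyp["language"], hyp["translation"]))

# Per-language (premise, hypothesis) pairs, keyed by language code.
pairs = {lang: (premise[lang], hypothesis[lang]) for lang in premise}
```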
#### ar
- **Size of downloaded dataset files:** 483.96 MB
- **Size of the generated dataset:** 109.32 MB
- **Total amount of disk used:** 593.29 MB
An example of 'validation' looks as follows.
```
{
"hypothesis": "اتصل بأمه حالما أوصلته حافلة المدرسية.",
"label": 1,
"premise": "وقال، ماما، لقد عدت للمنزل."
}
```
#### bg
- **Size of downloaded dataset files:** 483.96 MB
- **Size of the generated dataset:** 128.32 MB
- **Total amount of disk used:** 612.28 MB
An example of 'train' looks as follows.
```
This example was too long and was cropped:
{
"hypothesis": "\"губиш нещата на следното ниво , ако хората си припомнят .\"...",
"label": 0,
"premise": "\"по време на сезона и предполагам , че на твоето ниво ще ги загубиш на следващото ниво , ако те решат да си припомнят отбора на ..."
}
```
#### de
- **Size of downloaded dataset files:** 483.96 MB
- **Size of the generated dataset:** 86.17 MB
- **Total amount of disk used:** 570.14 MB
An example of 'train' looks as follows.
```
This example was too long and was cropped:
{
"hypothesis": "Man verliert die Dinge auf die folgende Ebene , wenn sich die Leute erinnern .",
"label": 0,
"premise": "\"Du weißt , während der Saison und ich schätze , auf deiner Ebene verlierst du sie auf die nächste Ebene , wenn sie sich entschl..."
}
```
#### el
- **Size of downloaded dataset files:** 483.96 MB
- **Size of the generated dataset:** 142.30 MB
- **Total amount of disk used:** 626.26 MB
An example of 'validation' looks as follows.
```
This example was too long and was cropped:
{
"hypothesis": "\"Τηλεφώνησε στη μαμά του μόλις το σχολικό λεωφορείο τον άφησε.\"...",
"label": 1,
"premise": "Και είπε, Μαμά, έφτασα στο σπίτι."
}
```
### Data Fields
The data fields are the same among all splits.
#### all_languages
- `premise`: a multilingual `string` variable, with possible languages including `ar`, `bg`, `de`, `el`, `en`.
- `hypothesis`: a multilingual `string` variable, with possible languages including `ar`, `bg`, `de`, `el`, `en`.
- `label`: a classification label, with possible values including `entailment` (0), `neutral` (1), `contradiction` (2).
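The integer labels above map to class names in a fixed order. A small helper for decoding and encoding them (the mapping itself comes from the `class_label` feature definition in this card):

```python
# Label ids used across all configs, per the ClassLabel feature above.
LABEL_NAMES = ["entailment", "neutral", "contradiction"]

def label_to_name(label_id: int) -> str:
    """Decode an integer NLI label into its class name."""
    return LABEL_NAMES[label_id]

def name_to_label(name: str) -> int:
    """Inverse mapping, useful when scoring string predictions."""
    return LABEL_NAMES.index(name)
```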
#### ar
- `premise`: a `string` feature.
- `hypothesis`: a `string` feature.
- `label`: a classification label, with possible values including `entailment` (0), `neutral` (1), `contradiction` (2).
#### bg
- `premise`: a `string` feature.
- `hypothesis`: a `string` feature.
- `label`: a classification label, with possible values including `entailment` (0), `neutral` (1), `contradiction` (2).
#### de
- `premise`: a `string` feature.
- `hypothesis`: a `string` feature.
- `label`: a classification label, with possible values including `entailment` (0), `neutral` (1), `contradiction` (2).
#### el
- `premise`: a `string` feature.
- `hypothesis`: a `string` feature.
- `label`: a classification label, with possible values including `entailment` (0), `neutral` (1), `contradiction` (2).
### Data Splits
| name |train |validation|test|
|-------------|-----:|---------:|---:|
|all_languages|392702| 2490|5010|
|ar |392702| 2490|5010|
|bg |392702| 2490|5010|
|de |392702| 2490|5010|
|el |392702| 2490|5010|
## Dataset Creation
### Curation Rationale
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### Who are the source language producers?
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Annotations
#### Annotation process
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### Who are the annotators?
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Personal and Sensitive Information
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Discussion of Biases
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Other Known Limitations
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Additional Information
### Dataset Curators
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Licensing Information
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Citation Information
```
@InProceedings{conneau2018xnli,
author = {Conneau, Alexis
and Rinott, Ruty
and Lample, Guillaume
and Williams, Adina
and Bowman, Samuel R.
and Schwenk, Holger
and Stoyanov, Veselin},
title = {XNLI: Evaluating Cross-lingual Sentence Representations},
booktitle = {Proceedings of the 2018 Conference on Empirical Methods
in Natural Language Processing},
year = {2018},
publisher = {Association for Computational Linguistics},
location = {Brussels, Belgium},
}
```
### Contributions
Thanks to [@lewtun](https://github.com/lewtun), [@mariamabarham](https://github.com/mariamabarham), [@thomwolf](https://github.com/thomwolf), [@lhoestq](https://github.com/lhoestq), [@patrickvonplaten](https://github.com/patrickvonplaten) for adding this dataset. |
ccmusic-database/music_genre | 2023-10-10T15:53:38.000Z | [
"task_categories:audio-classification",
"size_categories:1K<n<10K",
"language:zh",
"language:en",
"license:mit",
"music",
"art",
"region:us"
] | ccmusic-database | This database contains about 1700 musical pieces (.mp3 format)
with lengths of 270-300s that are divided into 17 genres in total. | @dataset{zhaorui_liu_2021_5676893,
author = {Zhaorui Liu, Monan Zhou, Shenyang Xu and Zijin Li},
title = {{Music Data Sharing Platform for Computational Musicology Research (CCMUSIC DATASET)}},
month = nov,
year = 2021,
publisher = {Zenodo},
version = {1.1},
doi = {10.5281/zenodo.5676893},
url = {https://doi.org/10.5281/zenodo.5676893}
} | null | 5 | 56 | ---
license: mit
task_categories:
- audio-classification
language:
- zh
- en
tags:
- music
- art
pretty_name: Music Genre Database
size_categories:
- 1K<n<10K
---
# Dataset Card for Music Genre Dataset
## Dataset Description
- **Homepage:** <https://ccmusic-database.github.io>
- **Repository:** <https://huggingface.co/datasets/ccmusic-database/music_genre>
- **Paper:** <https://doi.org/10.5281/zenodo.5676893>
- **Leaderboard:** <https://ccmusic-database.github.io/team.html>
- **Point of Contact:** N/A
### Dataset Summary
This database contains about 1700 musical pieces (.mp3 format) with lengths of 270-300s that are divided into 17 genres in total.
### Supported Tasks and Leaderboards
Audio classification
### Languages
Multilingual
## Usage
For the classification task, use only one of the `fst_level_label`, `sec_level_label`, and `thr_level_label` columns at a time; the three must not be mixed.
```python
from datasets import load_dataset
dataset = load_dataset("ccmusic-database/music_genre", split="test")
for item in dataset:
print(item)
```
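As a concrete illustration of the single-label rule above, here is a minimal offline sketch that keeps exactly one label column on a loaded example (a plain dict stands in for a dataset row; the field values are illustrative assumptions):

```python
# The three mutually exclusive label columns named in the usage note above.
LABEL_COLUMNS = ("fst_level_label", "sec_level_label", "thr_level_label")

def keep_one_label(example: dict, keep: str) -> dict:
    """Drop every label column except `keep`, leaving other fields intact."""
    if keep not in LABEL_COLUMNS:
        raise ValueError(f"unknown label column: {keep}")
    return {
        key: value
        for key, value in example.items()
        if key not in LABEL_COLUMNS or key == keep
    }
```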
## Dataset Structure
### Data Instances
.zip(.png) .csv
### Data Fields
```
1_Classic
3_Symphony
4_Opera
5_Solo
6_Chamber
2_Non_classic
7_Pop
12_Pop_vocal_ballad
13_Adult_contemporary
14_Teen_pop
8_Dance_and_house
15_Contemporary_dance_pop
16_Dance_pop
9_Indie
17_Classic_indie_pop
18_Chamber_cabaret_and_art_pop
10_Soul_or_r_and_b
11_Rock
19_Adult_alternative_rock
20_Uplifting_anthemic_rock
21_Soft_rock
22_Acoustic_pop
```
### Data Splits
Train, validation, test
## Dataset Creation
### Curation Rationale
Promoting the development of AI in the music industry
### Source Data
#### Initial Data Collection and Normalization
Zhaorui Liu, Monan Zhou
#### Who are the source language producers?
Composers of the songs in dataset
### Annotations
#### Annotation process
Students collected about 1700 musical pieces (.mp3 format) with lengths of 270-300s divided into 17 genres in total.
#### Who are the annotators?
Students from CCMUSIC
### Personal and Sensitive Information
Due to copyright issues with the original music, only mel spectrograms are provided in the dataset.
## Considerations for Using the Data
### Social Impact of Dataset
Promoting the development of AI in the music industry
### Discussion of Biases
Most songs are in English
### Other Known Limitations
Samples are not sufficiently balanced
## Additional Information
### Dataset Curators
Zijin Li
### Evaluation
Coming soon...
### Licensing Information
```
MIT License
Copyright (c) CCMUSIC
Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.
```
### Citation Information
```
@dataset{zhaorui_liu_2021_5676893,
author = {Zhaorui Liu, Monan Zhou, Shenyang Xu and Zijin Li},
title = {CCMUSIC DATABASE: Music Data Sharing Platform for Computational Musicology Research},
month = {nov},
year = {2021},
publisher = {Zenodo},
version = {1.1},
doi = {10.5281/zenodo.5676893},
url = {https://doi.org/10.5281/zenodo.5676893}
}
```
### Contributions
Provides a dataset for music genre classification |
Docugami/dfm-csl-small-benchmark | 2023-10-04T08:44:17.000Z | [
"task_categories:text2text-generation",
"task_categories:text-generation",
"size_categories:1K<n<10K",
"source_datasets:original",
"language:en",
"license:mit",
"docugami",
"dfm-csl",
"xml-knowledge-graphs",
"region:us"
] | Docugami | null | null | null | 4 | 56 | ---
license: mit
language:
- en
size_categories:
- 1K<n<10K
source_datasets:
- original
task_categories:
- text2text-generation
- text-generation
dataset_info:
features:
- name: Text
dtype: string
- name: Small Chunk
dtype: string
- name: Ground Truth
dtype: string
- name: docugami/dfm-cs-small
dtype: string
splits:
- name: eval
num_bytes: 240040
num_examples: 1099
- name: train
num_bytes: 20906
num_examples: 100
download_size: 143986
dataset_size: 260946
tags:
- docugami
- dfm-csl
- xml-knowledge-graphs
pretty_name: Contextual Semantic Labels (Small)
---
# Contextual Semantic Labels (Small) Benchmark Dataset
Please see [https://github.com/docugami/DFM-benchmarks](https://github.com/docugami/DFM-benchmarks) for more details, eval code, and current scores for different models.
# Using Dataset
Please refer to the standard Hugging Face documentation to use this dataset: [https://huggingface.co/docs/datasets/index](https://huggingface.co/docs/datasets/index)
The [explore.ipynb](./explore.ipynb) notebook has some reference code. |
clarin-knext/scifact-pl | 2023-06-07T10:07:12.000Z | [
"language:pl",
"arxiv:2305.19840",
"region:us"
] | clarin-knext | null | null | null | 0 | 56 | ---
language:
- pl
pretty_name: BEIR-PL benchmark Scifact-PL
---
Part of **BEIR-PL: Zero Shot Information Retrieval Benchmark for the Polish Language**.
Link to arxiv: https://arxiv.org/pdf/2305.19840.pdf
Contact: konrad.wojtasik@pwr.edu.pl
|
clarin-knext/nq-pl | 2023-09-07T12:25:03.000Z | [
"language:pl",
"license:cc-by-4.0",
"arxiv:2305.19840",
"region:us"
] | clarin-knext | null | null | null | 0 | 56 | ---
license: cc-by-4.0
language:
- pl
---
Part of **BEIR-PL: Zero Shot Information Retrieval Benchmark for the Polish Language**.
Link to arxiv: https://arxiv.org/pdf/2305.19840.pdf
Contact: konrad.wojtasik@pwr.edu.pl |
clarin-knext/nfcorpus-pl | 2023-06-07T08:11:26.000Z | [
"language:pl",
"arxiv:2305.19840",
"region:us"
] | clarin-knext | null | null | null | 0 | 56 | ---
language:
- pl
---
Part of **BEIR-PL: Zero Shot Information Retrieval Benchmark for the Polish Language**.
Link to arxiv: https://arxiv.org/pdf/2305.19840.pdf
Contact: konrad.wojtasik@pwr.edu.pl |
ashnrk/cifar10_lt_r10_text | 2023-06-14T06:10:37.000Z | [
"region:us"
] | ashnrk | null | null | null | 0 | 56 | ---
dataset_info:
features:
- name: img
dtype: image
- name: label
dtype:
class_label:
names:
'0': airplane
'1': automobile
'2': bird
'3': cat
'4': deer
'5': dog
'6': frog
'7': horse
'8': ship
'9': truck
- name: text_label
dtype: string
splits:
- name: train
num_bytes: 9133039.5
num_examples: 4084
download_size: 9126904
dataset_size: 9133039.5
---
# Dataset Card for "cifar10_lt_r10_text"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
haitengzhao/molecule_property_instruction | 2023-07-13T10:30:29.000Z | [
"task_categories:question-answering",
"language:en",
"license:afl-3.0",
"chemistry",
"biology",
"region:us"
] | haitengzhao | null | null | null | 3 | 56 | ---
dataset_info:
features:
- name: graph
dtype: string
- name: text
sequence: string
- name: label
dtype: string
- name: dataset_name
dtype: string
- name: task_index
dtype: string
- name: molecule_index
dtype: string
- name: split
dtype: string
splits:
- name: esol
num_bytes: 542831
num_examples: 1128
- name: lipo
num_bytes: 1519836
num_examples: 4200
- name: freesolv
num_bytes: 527615
num_examples: 642
- name: bace
num_bytes: 5103112
num_examples: 1513
- name: hiv
num_bytes: 215094514
num_examples: 41127
- name: muv
num_bytes: 594798639
num_examples: 249886
- name: tox21
num_bytes: 121153396
num_examples: 77946
- name: toxcast
num_bytes: 1543462519
num_examples: 1490412
- name: bbbp
num_bytes: 2521597
num_examples: 2039
- name: cyp450
num_bytes: 30602477
num_examples: 53178
- name: chembl_zero_shot
num_bytes: 89499667
num_examples: 180229
- name: chembl_pretraining
num_bytes: 12246285194
num_examples: 23874346
- name: pcba
num_bytes: 21761726609
num_examples: 34017170
download_size: 2163300521
dataset_size: 36612838006
license: afl-3.0
task_categories:
- question-answering
language:
- en
tags:
- chemistry
- biology
pretty_name: p
---
# Dataset Card for "molecule_property_instruction"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
WelfCrozzo/kupalinka-lm | 2023-09-03T12:53:22.000Z | [
"region:us"
] | WelfCrozzo | null | null | null | 0 | 56 | ---
dataset_info:
features:
- name: x
dtype: string
- name: lang
dtype: string
splits:
- name: train
num_bytes: 948184826
num_examples: 685819
- name: validation
num_bytes: 105052717
num_examples: 75850
download_size: 573680264
dataset_size: 1053237543
---
# Dataset Card for "kupalinka-lm"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
C-MTEB/IFlyTek-classification | 2023-07-28T13:30:24.000Z | [
"region:us"
] | C-MTEB | null | null | null | 1 | 56 | ---
configs:
- config_name: default
data_files:
- split: test
path: data/test-*
- split: train
path: data/train-*
- split: validation
path: data/validation-*
dataset_info:
features:
- name: text
dtype: string
- name: label
dtype:
class_label:
names:
'0': '0'
'1': '1'
'2': '2'
'3': '3'
'4': '4'
'5': '5'
'6': '6'
'7': '7'
'8': '8'
'9': '9'
'10': '10'
'11': '11'
'12': '12'
'13': '13'
'14': '14'
'15': '15'
'16': '16'
'17': '17'
'18': '18'
'19': '19'
'20': '20'
'21': '21'
'22': '22'
'23': '23'
'24': '24'
'25': '25'
'26': '26'
'27': '27'
'28': '28'
'29': '29'
'30': '30'
'31': '31'
'32': '32'
'33': '33'
'34': '34'
'35': '35'
'36': '36'
'37': '37'
'38': '38'
'39': '39'
'40': '40'
'41': '41'
'42': '42'
'43': '43'
'44': '44'
'45': '45'
'46': '46'
'47': '47'
'48': '48'
'49': '49'
'50': '50'
'51': '51'
'52': '52'
'53': '53'
'54': '54'
'55': '55'
'56': '56'
'57': '57'
'58': '58'
'59': '59'
'60': '60'
'61': '61'
'62': '62'
'63': '63'
'64': '64'
'65': '65'
'66': '66'
'67': '67'
'68': '68'
'69': '69'
'70': '70'
'71': '71'
'72': '72'
'73': '73'
'74': '74'
'75': '75'
'76': '76'
'77': '77'
'78': '78'
'79': '79'
'80': '80'
'81': '81'
'82': '82'
'83': '83'
'84': '84'
'85': '85'
'86': '86'
'87': '87'
'88': '88'
'89': '89'
'90': '90'
'91': '91'
'92': '92'
'93': '93'
'94': '94'
'95': '95'
'96': '96'
'97': '97'
'98': '98'
'99': '99'
'100': '100'
'101': '101'
'102': '102'
'103': '103'
'104': '104'
'105': '105'
'106': '106'
'107': '107'
'108': '108'
'109': '109'
'110': '110'
'111': '111'
'112': '112'
'113': '113'
'114': '114'
'115': '115'
'116': '116'
'117': '117'
'118': '118'
- name: idx
dtype: int32
splits:
- name: test
num_bytes: 2105684
num_examples: 2600
- name: train
num_bytes: 10028605
num_examples: 12133
- name: validation
num_bytes: 2157119
num_examples: 2599
download_size: 9777643
dataset_size: 14291408
---
# Dataset Card for "IFlyTek-classification"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
HuggingFaceM4/LLaVAR-Instruct-16K | 2023-07-28T15:49:07.000Z | [
"region:us"
] | HuggingFaceM4 | null | null | null | 3 | 56 | ---
dataset_info:
features:
- name: image
dtype: image
- name: user_texts
sequence: string
- name: bot_texts
sequence: string
splits:
- name: train
num_bytes: 433689449.5
num_examples: 15500
download_size: 487607994
dataset_size: 433689449.5
---
# Dataset Card for "LLaVAR-Instruct-16K"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
natmin322/3k_vietnamese_voice_augmented | 2023-08-12T09:14:22.000Z | [
"region:us"
] | natmin322 | null | null | null | 0 | 56 | ---
dataset_info:
features:
- name: audio
dtype: audio
- name: sentence
dtype: string
splits:
- name: train
num_bytes: 886300388.18
num_examples: 3005
download_size: 896990533
dataset_size: 886300388.18
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "3k_vietnamese_voice_augmented"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
TrevorJS/mtg-rules-qa | 2023-08-31T03:42:52.000Z | [
"task_categories:question-answering",
"language:en",
"region:us"
] | TrevorJS | null | null | null | 0 | 56 | ---
task_categories:
- question-answering
language:
- en
pretty_name: 'Magic: The Gathering Rules Q&A'
--- |
mirfan899/punjabi-ner | 2023-09-11T06:26:02.000Z | [
"license:other",
"region:us"
] | mirfan899 | null | null | null | 0 | 56 | ---
license: other
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: validation
path: data/validation-*
- split: test
path: data/test-*
dataset_info:
features:
- name: id
dtype: string
- name: tokens
sequence: string
- name: ner_tags
sequence:
class_label:
names:
'0': O
'1': I-PER
'2': I-ORG
'3': I-LOC
'4': B-PER
'5': B-ORG
'6': B-LOC
splits:
- name: train
num_bytes: 4548073
num_examples: 12903
- name: validation
num_bytes: 1941857
num_examples: 5530
- name: test
num_bytes: 1941857
num_examples: 5530
download_size: 1914615
dataset_size: 8431787
---
|
prakhargupta94/recipe_llama | 2023-09-09T19:28:16.000Z | [
"region:us"
] | prakhargupta94 | null | null | null | 0 | 56 | Entry not found |
FanChen0116/bus_few4_16x_pvi | 2023-09-27T08:41:25.000Z | [
"region:us"
] | FanChen0116 | null | null | null | 0 | 56 | ---
dataset_info:
features:
- name: id
dtype: int64
- name: tokens
sequence: string
- name: labels
sequence:
class_label:
names:
'0': O
'1': I-from_location
'2': B-from_location
'3': B-leaving_date
'4': I-leaving_date
'5': I-to_location
'6': B-to_location
- name: request_slot
sequence: string
splits:
- name: train
num_bytes: 138287
num_examples: 560
- name: validation
num_bytes: 6900
num_examples: 35
- name: test
num_bytes: 70618
num_examples: 377
download_size: 20540
dataset_size: 215805
---
# Dataset Card for "bus_few4_16x_pvi"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
tyzhu/squad_no_id_train_10_eval_10 | 2023-09-19T15:55:06.000Z | [
"region:us"
] | tyzhu | null | null | null | 0 | 56 | ---
dataset_info:
features:
- name: id
dtype: string
- name: title
dtype: string
- name: context
dtype: string
- name: question
dtype: string
- name: answers
sequence:
- name: text
dtype: string
- name: answer_start
dtype: int32
- name: context_id
dtype: string
- name: inputs
dtype: string
- name: targets
dtype: string
splits:
- name: train
num_bytes: 237881
num_examples: 150
- name: validation
num_bytes: 58313
num_examples: 48
download_size: 72461
dataset_size: 296194
---
# Dataset Card for "squad_no_id_train_10_eval_10"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
zxvix/pubmed_subset_c4_20p | 2023-09-20T03:56:15.000Z | [
"region:us"
] | zxvix | null | null | null | 0 | 56 | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
dataset_info:
features:
- name: text
dtype: string
splits:
- name: train
num_bytes: 2372378215.5730233
num_examples: 1250378
- name: test
num_bytes: 1024229
num_examples: 1000
download_size: 909276640
dataset_size: 2373402444.5730233
---
# Dataset Card for "pubmed_subset_c4_20p"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
changjacHp/lol_champions_abilities | 2023-09-21T06:25:51.000Z | [
"region:us"
] | changjacHp | null | null | null | 0 | 56 | Entry not found |
LDJnr/LessWrong-Amplify-Instruct | 2023-09-26T02:34:28.000Z | [
"task_categories:conversational",
"task_categories:question-answering",
"task_categories:text-generation",
"size_categories:n<1K",
"language:en",
"license:apache-2.0",
"Physics",
"Biology",
"Math",
"Chemistry",
"Culture",
"Logic",
"region:us"
] | LDJnr | null | null | null | 9 | 56 | ---
license: apache-2.0
task_categories:
- conversational
- question-answering
- text-generation
language:
- en
tags:
- Physics
- Biology
- Math
- Chemistry
- Culture
- Logic
pretty_name: LessWrong-Amplify-Instruct
size_categories:
- n<1K
---
## This is the Official LessWrong-Amplify-Instruct dataset. Over 500 multi-turn examples, and many more coming soon!
- This leverages the Amplify-Instruct method to extend thousands of scraped LessWrong posts into advanced, in-depth multi-turn conversations.
- Comprised of over 500 highly filtered multi-turn conversations between GPT-4 and real humans.
- Average context length per conversation is over 2,000 tokens. (will measure this more accurately soon)
- Synthetically created using a newly developed pipeline that leverages GPT-4 to dynamically role play and inquire as the human and assistant.
- Each conversation is optimized to amplify the raw knowledge retrieval of the model and delve deep into obscure and advanced topics.
## Purpose?
- This dataset is not intended to be trained on by itself; however, given its size and quality, it can work wonderfully as a supplementary addition to virtually any multi-turn-compatible dataset. I encourage this use; all I ask is that proper credit is given!
## Quality filtering and cleaning.
- Extensive cleaning was done to filter out instances of overt AI moralizing or related behaviour, such as "As an AI language model" and "September 2021"
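The filtering code itself is not published; the sketch below illustrates the kind of phrase-based filter the cleaning step describes, using the two example phrases from above (the blocklist and logic are illustrative assumptions, not the actual pipeline):

```python
# Illustrative blocklist built from the example phrases mentioned above;
# the real filter used for this dataset is not published.
BLOCKLIST = ("as an ai language model", "september 2021")

def passes_filter(text: str) -> bool:
    """Return True when none of the blocked phrases occur (case-insensitive)."""
    lowered = text.lower()
    return not any(phrase in lowered for phrase in BLOCKLIST)
```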
## Credits
During the curation process, some steps were relatively arduous, particularly when it came to executing on the best experiments and concepts for filtering examples out.
Luckily, folks over at NousResearch helped expedite this process with little to no sacrifice in quality; a big thank you to J-Supha specifically for these significant contributions.
## Future Plans & How you can help!
This is a relatively early build amid grander plans for what I intend to work on in the future!
In the near future, we plan to leverage the help of domain-specific expert volunteers to eliminate any mathematically/verifiably incorrect answers from training curations of different types of datasets.
If you have at least a bachelor's degree in mathematics, physics, biology, or chemistry and would like to volunteer even just 30 minutes of your expertise time, please contact LDJ on Discord! |