| column | dtype | min | max |
| --- | --- | --- | --- |
| id | int64 | 599M | 3.26B |
| number | int64 | 1 | 7.7k |
| title | string (lengths) | 1 | 290 |
| body | string (lengths) | 0 | 228k |
| state | string (2 classes) | | |
| html_url | string (lengths) | 46 | 51 |
| created_at | timestamp[s] | 2020-04-14 10:18:02 | 2025-07-23 08:04:53 |
| updated_at | timestamp[s] | 2020-04-27 16:04:17 | 2025-07-23 18:53:44 |
| closed_at | timestamp[s] | 2020-04-14 12:01:40 | 2025-07-23 16:44:42 |
| user | dict | | |
| labels | list (lengths) | 0 | 4 |
| is_pull_request | bool (2 classes) | | |
| comments | list (lengths) | 0 | 0 |
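A minimal sketch of working with rows of this shape in plain Python. The sample rows below are hand-copied from a few of the records in this dump (the real dataset has ~7.7k rows and all the columns listed in the schema); it shows how the `is_pull_request` flag separates pull requests from plain issues.

```python
# Hypothetical sample rows, hand-copied from the records below; the real dump
# has ~7.7k rows and many more columns (see the schema above).
rows = [
    {"number": 1134, "title": "adding xquad-r dataset", "state": "closed",
     "is_pull_request": True, "labels": []},
    {"number": 1115, "title": "Incorrect URL for MRQA SQuAD train subset",
     "state": "closed", "is_pull_request": False, "labels": []},
    {"number": 1103, "title": "Add support to download kaggle datasets",
     "state": "closed", "is_pull_request": False,
     "labels": [{"name": "enhancement", "color": "a2eeef"}]},
]

# The `is_pull_request` bool column separates pull requests from plain issues.
pulls = [r for r in rows if r["is_pull_request"]]
issues = [r for r in rows if not r["is_pull_request"]]
print(len(pulls), len(issues))  # prints: 1 2
```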
757,317,651
1,134
adding xquad-r dataset
closed
https://github.com/huggingface/datasets/pull/1134
2020-12-04T18:39:13
2020-12-05T16:50:47
2020-12-05T16:50:47
{ "login": "manandey", "id": 6687858, "type": "User" }
[]
true
[]
757,307,660
1,133
Adding XQUAD-R Dataset
closed
https://github.com/huggingface/datasets/pull/1133
2020-12-04T18:22:29
2020-12-04T18:28:54
2020-12-04T18:28:49
{ "login": "manandey", "id": 6687858, "type": "User" }
[]
true
[]
757,301,368
1,132
Add Urdu Sentiment Corpus (USC).
Added Urdu Sentiment Corpus. More details about the dataset over <a href="https://github.com/MuhammadYaseenKhan/Urdu-Sentiment-Corpus">here</a>.
closed
https://github.com/huggingface/datasets/pull/1132
2020-12-04T18:12:24
2020-12-04T20:52:48
2020-12-04T20:52:48
{ "login": "chaitnayabasava", "id": 44389205, "type": "User" }
[]
true
[]
757,278,341
1,131
Adding XQUAD-R Dataset
closed
https://github.com/huggingface/datasets/pull/1131
2020-12-04T17:35:43
2020-12-04T18:27:22
2020-12-04T18:27:22
{ "login": "manandey", "id": 6687858, "type": "User" }
[]
true
[]
757,265,075
1,130
adding discovery
closed
https://github.com/huggingface/datasets/pull/1130
2020-12-04T17:16:54
2020-12-14T13:03:14
2020-12-14T13:03:14
{ "login": "sileod", "id": 9168444, "type": "User" }
[]
true
[]
757,255,492
1,129
Adding initial version of cord-19 dataset
Initial version only reading the metadata in CSV.

### Checklist:
- [x] Create the dataset script /datasets/my_dataset/my_dataset.py using the template
- [x] Fill the _DESCRIPTION and _CITATION variables
- [x] Implement _infos(), _split_generators() and _generate_examples()
- [x] Make sure that the BUILDER_CONFIGS class attribute is filled with the different configurations of the dataset and that the BUILDER_CONFIG_CLASS is specified if there is a custom config class.
- [x] Generate the metadata file dataset_infos.json for all configurations
- [x] Generate the dummy data dummy_data.zip files to have the dataset script tested and that they don't weigh too much (<50KB)
- [x] Add the dataset card README.md using the template and at least fill the tags
- [x] Both tests for the real data and the dummy data pass.

### TODO:
- [x] add more metadata
- [x] add full text
- [x] add pre-computed document embedding
closed
https://github.com/huggingface/datasets/pull/1129
2020-12-04T17:03:17
2021-02-09T10:22:35
2021-02-09T10:18:06
{ "login": "ggdupont", "id": 5583410, "type": "User" }
[]
true
[]
757,245,404
1,128
Add xquad-r dataset
closed
https://github.com/huggingface/datasets/pull/1128
2020-12-04T16:48:53
2020-12-04T18:14:30
2020-12-04T18:14:26
{ "login": "manandey", "id": 6687858, "type": "User" }
[]
true
[]
757,229,684
1,127
Add wikiqaar dataset
Arabic Wiki Question Answering Corpus.
closed
https://github.com/huggingface/datasets/pull/1127
2020-12-04T16:26:18
2020-12-07T16:39:41
2020-12-07T16:39:41
{ "login": "zaidalyafeai", "id": 15667714, "type": "User" }
[]
true
[]
757,197,735
1,126
Adding babi dataset
Adding the English version of bAbI. Samples are taken from ParlAI for consistency with the main users at the moment. Supersedes #945 (problem with the rebase) and addresses the issues mentioned in the review (the dummy data are smaller now and the code comments are fixed).
closed
https://github.com/huggingface/datasets/pull/1126
2020-12-04T15:42:34
2021-03-30T09:44:04
2021-03-30T09:44:04
{ "login": "thomwolf", "id": 7353373, "type": "User" }
[]
true
[]
757,194,531
1,125
Add Urdu fake news dataset.
Added Urdu fake news dataset. More information about the dataset can be found <a href="https://github.com/MaazAmjad/Datasets-for-Urdu-news">here</a>.
closed
https://github.com/huggingface/datasets/pull/1125
2020-12-04T15:38:17
2020-12-07T03:21:05
2020-12-07T03:21:05
{ "login": "chaitnayabasava", "id": 44389205, "type": "User" }
[]
true
[]
757,186,983
1,124
Add Xitsonga Ner
Clean Xitsonga Ner PR
closed
https://github.com/huggingface/datasets/pull/1124
2020-12-04T15:27:44
2020-12-06T18:31:35
2020-12-06T18:31:35
{ "login": "yvonnegitau", "id": 7923902, "type": "User" }
[]
true
[]
757,181,014
1,123
adding cdt dataset
closed
https://github.com/huggingface/datasets/pull/1123
2020-12-04T15:19:36
2020-12-04T17:05:56
2020-12-04T17:05:56
{ "login": "abecadel", "id": 1654113, "type": "User" }
[]
true
[]
757,176,172
1,122
Add Urdu fake news.
Added Urdu fake news dataset. More information about the dataset can be found <a href="https://github.com/MaazAmjad/Datasets-for-Urdu-news">here</a>.
closed
https://github.com/huggingface/datasets/pull/1122
2020-12-04T15:13:10
2020-12-04T15:20:07
2020-12-04T15:20:07
{ "login": "chaitnayabasava", "id": 44389205, "type": "User" }
[]
true
[]
757,169,944
1,121
adding cdt dataset
closed
https://github.com/huggingface/datasets/pull/1121
2020-12-04T15:04:33
2020-12-04T15:16:49
2020-12-04T15:16:49
{ "login": "abecadel", "id": 1654113, "type": "User" }
[]
true
[]
757,166,342
1,120
Add conda environment activation
Added activation of Conda environment before installing.
closed
https://github.com/huggingface/datasets/pull/1120
2020-12-04T14:59:43
2020-12-04T18:34:48
2020-12-04T16:40:57
{ "login": "parmarsuraj99", "id": 9317265, "type": "User" }
[]
true
[]
757,156,781
1,119
Add Google Great Code Dataset
closed
https://github.com/huggingface/datasets/pull/1119
2020-12-04T14:46:28
2020-12-06T17:33:14
2020-12-06T17:33:13
{ "login": "abhishekkrthakur", "id": 1183441, "type": "User" }
[]
true
[]
757,142,350
1,118
Add Tashkeela dataset
Arabic Vocalized Words Dataset.
closed
https://github.com/huggingface/datasets/pull/1118
2020-12-04T14:26:18
2020-12-04T15:47:01
2020-12-04T15:46:51
{ "login": "zaidalyafeai", "id": 15667714, "type": "User" }
[]
true
[]
757,133,789
1,117
Fix incorrect MRQA train+SQuAD URL
Fix issue #1115
closed
https://github.com/huggingface/datasets/pull/1117
2020-12-04T14:14:26
2020-12-06T17:14:11
2020-12-06T17:14:10
{ "login": "yuxiang-wu", "id": 6259768, "type": "User" }
[]
true
[]
757,133,502
1,116
add dbpedia_14 dataset
This dataset corresponds to the DBpedia dataset requested in https://github.com/huggingface/datasets/issues/353.
closed
https://github.com/huggingface/datasets/pull/1116
2020-12-04T14:13:59
2020-12-07T10:06:54
2020-12-05T15:36:23
{ "login": "hfawaz", "id": 29229602, "type": "User" }
[]
true
[]
757,127,527
1,115
Incorrect URL for MRQA SQuAD train subset
https://github.com/huggingface/datasets/blob/4ef4c8f8b7a60e35c6fa21115fca9faae91c9f74/datasets/mrqa/mrqa.py#L53 The URL for `train+SQuAD` subset of MRQA points to the dev set instead of train set. It should be `https://s3.us-east-2.amazonaws.com/mrqa/release/v2/train/SQuAD.jsonl.gz`.
closed
https://github.com/huggingface/datasets/issues/1115
2020-12-04T14:05:24
2020-12-06T17:14:22
2020-12-06T17:14:22
{ "login": "yuxiang-wu", "id": 6259768, "type": "User" }
[]
false
[]
757,123,638
1,114
Add sesotho ner corpus
Clean Sesotho PR
closed
https://github.com/huggingface/datasets/pull/1114
2020-12-04T13:59:41
2020-12-04T15:02:07
2020-12-04T15:02:07
{ "login": "yvonnegitau", "id": 7923902, "type": "User" }
[]
true
[]
757,115,557
1,113
add qed
adding QED: Dataset for Explanations in Question Answering https://github.com/google-research-datasets/QED https://arxiv.org/abs/2009.06354
closed
https://github.com/huggingface/datasets/pull/1113
2020-12-04T13:47:57
2020-12-05T15:46:21
2020-12-05T15:41:57
{ "login": "patil-suraj", "id": 27137566, "type": "User" }
[]
true
[]
757,108,151
1,112
Initial version of cord-19 dataset from AllenAI with only the abstract
Initial version only reading the metadata in CSV.

### Checklist:
- [x] Create the dataset script /datasets/my_dataset/my_dataset.py using the template
- [x] Fill the _DESCRIPTION and _CITATION variables
- [x] Implement _infos(), _split_generators() and _generate_examples()
- [x] Make sure that the BUILDER_CONFIGS class attribute is filled with the different configurations of the dataset and that the BUILDER_CONFIG_CLASS is specified if there is a custom config class.
- [x] Generate the metadata file dataset_infos.json for all configurations
- [x] Generate the dummy data dummy_data.zip files to have the dataset script tested and that they don't weigh too much (<50KB)
- [x] Add the dataset card README.md using the template and at least fill the tags
- [ ] Both tests for the real data and the dummy data pass.

### TODO:
- [ ] add more metadata
- [ ] add full text
- [ ] add pre-computed document embedding
closed
https://github.com/huggingface/datasets/pull/1112
2020-12-04T13:36:39
2020-12-04T16:16:40
2020-12-04T16:16:24
{ "login": "ggdupont", "id": 5583410, "type": "User" }
[]
true
[]
757,083,266
1,111
Add Siswati Ner corpus
Clean Siswati PR
closed
https://github.com/huggingface/datasets/pull/1111
2020-12-04T12:57:31
2020-12-04T14:43:01
2020-12-04T14:43:00
{ "login": "yvonnegitau", "id": 7923902, "type": "User" }
[]
true
[]
757,082,677
1,110
Using a feature named "_type" fails with certain operations
A column named `_type` leads to a `TypeError: unhashable type: 'dict'` for certain operations:

```python
from datasets import Dataset, concatenate_datasets

ds = Dataset.from_dict({"_type": ["whatever"]}).map()
concatenate_datasets([ds])
# or simply
Dataset(ds._data)
```

Context: We are using datasets to persist data coming from elasticsearch to feed to our pipeline, and elasticsearch has a `_type` field, hence the strange name of the column. Not sure if you wish to support this specific column name, but if you do I would be happy to try a fix and provide a PR. I already had a look into it and I think the culprit is the `datasets.features.generate_from_dict` function. It uses the hard-coded `_type` string to figure out if it reached the end of the nested feature object from a serialized dict. Best wishes and keep up the awesome work!
closed
https://github.com/huggingface/datasets/issues/1110
2020-12-04T12:56:33
2022-01-14T18:07:00
2022-01-14T18:07:00
{ "login": "dcfidalgo", "id": 15979778, "type": "User" }
[]
false
[]
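The sentinel collision described in the issue above can be illustrated with a toy deserializer. This is a hypothetical sketch of the pattern, not the actual `datasets.features.generate_from_dict` implementation:

```python
# Hypothetical sketch of the failure mode described in the issue above: a
# recursive deserializer that uses a hard-coded "_type" key as its
# "serialized leaf" sentinel. NOT the real implementation of
# datasets.features.generate_from_dict, just the pattern it describes.
def from_dict(spec):
    if isinstance(spec, dict):
        if "_type" in spec:  # sentinel: assume this dict is a serialized leaf
            return spec["_type"]
        return {k: from_dict(v) for k, v in spec.items()}
    return spec

# A genuine serialized leaf deserializes as intended:
print(from_dict({"col": {"_type": "Value"}}))    # {'col': 'Value'}
# But a schema whose column is literally named "_type" trips the sentinel,
# so its sub-schema is returned raw instead of being treated as a column:
print(from_dict({"_type": {"_type": "Value"}}))  # {'_type': 'Value'}
```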
757,055,702
1,109
add woz_dialogue
Adding the Wizard-of-Oz task-oriented dialogue dataset. https://github.com/nmrksic/neural-belief-tracker/tree/master/data/woz https://arxiv.org/abs/1604.04562
closed
https://github.com/huggingface/datasets/pull/1109
2020-12-04T12:13:07
2020-12-05T15:41:23
2020-12-05T15:40:18
{ "login": "patil-suraj", "id": 27137566, "type": "User" }
[]
true
[]
757,054,732
1,108
Add Sepedi NER corpus
Finally a clean PR for Sepedi
closed
https://github.com/huggingface/datasets/pull/1108
2020-12-04T12:11:24
2020-12-04T14:39:00
2020-12-04T14:39:00
{ "login": "yvonnegitau", "id": 7923902, "type": "User" }
[]
true
[]
757,031,179
1,107
Add arsentd_lev dataset
Add The Arabic Sentiment Twitter Dataset for Levantine dialect (ArSenTD-LEV).

Paper: [ArSentD-LEV: A Multi-Topic Corpus for Target-based Sentiment Analysis in Arabic Levantine Tweets](https://arxiv.org/abs/1906.01830)

Homepage: http://oma-project.com/
closed
https://github.com/huggingface/datasets/pull/1107
2020-12-04T11:31:04
2020-12-05T15:38:09
2020-12-05T15:38:09
{ "login": "moussaKam", "id": 28675016, "type": "User" }
[]
true
[]
757,027,158
1,106
Add Urdu fake news
Added Urdu fake news dataset. More information about the dataset can be found <a href="https://github.com/MaazAmjad/Datasets-for-Urdu-news">here</a>.
closed
https://github.com/huggingface/datasets/pull/1106
2020-12-04T11:24:14
2020-12-04T14:21:12
2020-12-04T14:21:12
{ "login": "chaitnayabasava", "id": 44389205, "type": "User" }
[]
true
[]
757,024,162
1,105
add xquad_r dataset
closed
https://github.com/huggingface/datasets/pull/1105
2020-12-04T11:19:35
2020-12-04T16:37:00
2020-12-04T16:37:00
{ "login": "manandey", "id": 6687858, "type": "User" }
[]
true
[]
757,020,934
1,104
add TLC
Added TLC dataset
closed
https://github.com/huggingface/datasets/pull/1104
2020-12-04T11:14:58
2020-12-04T14:29:23
2020-12-04T14:29:23
{ "login": "chameleonTK", "id": 6429850, "type": "User" }
[]
true
[]
757,016,820
1,103
Add support to download kaggle datasets
We can use an API key.
closed
https://github.com/huggingface/datasets/issues/1103
2020-12-04T11:08:37
2023-07-20T15:22:24
2023-07-20T15:22:23
{ "login": "abhishekkrthakur", "id": 1183441, "type": "User" }
[ { "name": "enhancement", "color": "a2eeef" } ]
false
[]
757,016,515
1,102
Add retries to download manager
closed
https://github.com/huggingface/datasets/issues/1102
2020-12-04T11:08:11
2020-12-22T15:34:06
2020-12-22T15:34:06
{ "login": "abhishekkrthakur", "id": 1183441, "type": "User" }
[ { "name": "enhancement", "color": "a2eeef" } ]
false
[]
757,009,226
1,101
Add Wikicorpus dataset
Add dataset.
closed
https://github.com/huggingface/datasets/pull/1101
2020-12-04T10:57:26
2020-12-09T18:13:10
2020-12-09T18:13:09
{ "login": "albertvillanova", "id": 8515462, "type": "User" }
[]
true
[]
756,998,433
1,100
Urdu fake news
Added the Bend the Truth Urdu fake news dataset. More information <a href="https://github.com/MaazAmjad/Datasets-for-Urdu-news">here</a>.
closed
https://github.com/huggingface/datasets/pull/1100
2020-12-04T10:41:20
2020-12-04T11:19:00
2020-12-04T11:19:00
{ "login": "chaitnayabasava", "id": 44389205, "type": "User" }
[]
true
[]
756,993,540
1,099
Add tamilmixsentiment data
closed
https://github.com/huggingface/datasets/pull/1099
2020-12-04T10:34:07
2020-12-06T06:32:22
2020-12-05T16:48:33
{ "login": "jamespaultg", "id": 7421838, "type": "User" }
[]
true
[]
756,975,414
1,098
Add ToTTo Dataset
Adds a brand new table-to-text dataset: https://github.com/google-research-datasets/ToTTo
closed
https://github.com/huggingface/datasets/pull/1098
2020-12-04T10:07:25
2020-12-04T13:38:20
2020-12-04T13:38:19
{ "login": "abhishekkrthakur", "id": 1183441, "type": "User" }
[]
true
[]
756,955,729
1,097
Add MSRA NER labels
Fixes #940
closed
https://github.com/huggingface/datasets/pull/1097
2020-12-04T09:38:16
2020-12-04T13:31:59
2020-12-04T13:31:58
{ "login": "JetRunner", "id": 22514219, "type": "User" }
[]
true
[]
756,952,461
1,096
FIX matinf link in ADD_NEW_DATASET.md
closed
https://github.com/huggingface/datasets/pull/1096
2020-12-04T09:33:25
2020-12-04T14:25:35
2020-12-04T14:25:35
{ "login": "moussaKam", "id": 28675016, "type": "User" }
[]
true
[]
756,934,964
1,095
Add TupleInf Open IE Dataset
For more information: https://allenai.org/data/tuple-ie
closed
https://github.com/huggingface/datasets/pull/1095
2020-12-04T09:08:07
2020-12-04T15:40:54
2020-12-04T15:40:54
{ "login": "mattbui", "id": 46804938, "type": "User" }
[]
true
[]
756,927,060
1,094
add urdu fake news dataset
Added Urdu fake news dataset. The dataset can be found <a href="https://github.com/MaazAmjad/Datasets-for-Urdu-news">here</a>.
closed
https://github.com/huggingface/datasets/pull/1094
2020-12-04T08:57:38
2020-12-04T09:20:56
2020-12-04T09:20:56
{ "login": "chaitnayabasava", "id": 44389205, "type": "User" }
[]
true
[]
756,916,565
1,093
Add NCBI Disease Corpus dataset
closed
https://github.com/huggingface/datasets/pull/1093
2020-12-04T08:42:32
2020-12-04T11:15:12
2020-12-04T11:15:12
{ "login": "edugp", "id": 17855740, "type": "User" }
[]
true
[]
756,913,134
1,092
Add Coached Conversation Preference Dataset
Adding [Coached Conversation Preference Dataset](https://research.google/tools/datasets/coached-conversational-preference-elicitation/)
closed
https://github.com/huggingface/datasets/pull/1092
2020-12-04T08:36:49
2020-12-20T13:34:00
2020-12-04T13:49:50
{ "login": "vineeths96", "id": 50873201, "type": "User" }
[]
true
[]
756,841,254
1,091
Add Google wellformed query dataset
This pull request adds the Google wellformed_query dataset. Link to the dataset: https://github.com/google-research-datasets/query-wellformedness
closed
https://github.com/huggingface/datasets/pull/1091
2020-12-04T06:25:54
2020-12-06T17:43:03
2020-12-06T17:43:02
{ "login": "thevasudevgupta", "id": 53136577, "type": "User" }
[]
true
[]
756,825,941
1,090
add thaisum
ThaiSum, a large-scale corpus for Thai text summarization obtained from several online news websites, namely Thairath, ThaiPBS, Prachathai, and The Standard. This dataset consists of over 350,000 article and summary pairs written by journalists. We evaluate the performance of various existing summarization models on the ThaiSum dataset and analyse the characteristics of the dataset to present its difficulties.
closed
https://github.com/huggingface/datasets/pull/1090
2020-12-04T05:54:48
2020-12-04T11:16:06
2020-12-04T11:16:06
{ "login": "cstorm125", "id": 15519308, "type": "User" }
[]
true
[]
756,823,690
1,089
add sharc_modified
Adding modified ShARC dataset https://github.com/nikhilweee/neural-conv-qa
closed
https://github.com/huggingface/datasets/pull/1089
2020-12-04T05:49:49
2020-12-04T10:41:30
2020-12-04T10:31:44
{ "login": "patil-suraj", "id": 27137566, "type": "User" }
[]
true
[]
756,822,017
1,088
add xquad_r dataset
closed
https://github.com/huggingface/datasets/pull/1088
2020-12-04T05:45:55
2020-12-04T10:58:13
2020-12-04T10:47:01
{ "login": "manandey", "id": 6687858, "type": "User" }
[]
true
[]
756,794,430
1,087
Add Big Patent dataset
* More info on the dataset: https://evasharma.github.io/bigpatent/
* There's another raw version of the dataset available from tfds. However, they're quite large so I don't have the resources to fully test all the configs for that version yet. We'll try to add it later.
closed
https://github.com/huggingface/datasets/pull/1087
2020-12-04T04:37:30
2020-12-06T17:21:00
2020-12-06T17:20:59
{ "login": "mattbui", "id": 46804938, "type": "User" }
[]
true
[]
756,720,643
1,086
adding cdt dataset
- **Name:** *Cyberbullying Detection Task*
- **Description:** *The Cyberbullying Detection task was part of the 2019 edition of the PolEval competition. The goal is to predict if a given Twitter message contains cyberbullying (harmful) content.*
- **Data:** *https://github.com/ptaszynski/cyberbullying-Polish*
- **Motivation:** *The KLEJ benchmark (Kompleksowa Lista Ewaluacji Językowych) is a set of nine evaluation tasks for Polish language understanding.*
closed
https://github.com/huggingface/datasets/pull/1086
2020-12-04T01:28:11
2020-12-04T15:04:02
2020-12-04T15:04:02
{ "login": "abecadel", "id": 1654113, "type": "User" }
[]
true
[]
756,704,563
1,085
add mutual friends conversational dataset
Mutual friends dataset. WIP.

TODO:
- scenario_kbs (bug with pyarrow conversion)
- download from codalab (checksums bug)
closed
https://github.com/huggingface/datasets/pull/1085
2020-12-04T00:48:21
2020-12-16T15:58:31
2020-12-16T15:58:30
{ "login": "VictorSanh", "id": 16107619, "type": "User" }
[]
true
[]
756,688,727
1,084
adding cdsc dataset
- **Name**: *cdsc (domains: cdsc-e & cdsc-r)*
- **Description**: *Polish CDSCorpus consists of 10K Polish sentence pairs which are human-annotated for semantic relatedness and entailment. The dataset may be used for the evaluation of compositional distributional semantics models of Polish. The dataset was presented at ACL 2017. Please refer to Wróblewska and Krasnowska-Kieraś (2017) for a detailed description of the resource.*
- **Data**: *http://2019.poleval.pl/index.php/tasks/*
- **Motivation**: *The KLEJ benchmark (Kompleksowa Lista Ewaluacji Językowych) is a set of nine evaluation tasks for Polish language understanding.*
closed
https://github.com/huggingface/datasets/pull/1084
2020-12-04T00:10:05
2020-12-04T10:41:26
2020-12-04T10:41:26
{ "login": "abecadel", "id": 1654113, "type": "User" }
[]
true
[]
756,687,101
1,083
Add the multilingual Exams dataset
https://github.com/mhardalov/exams-qa

The `multilingual` configs have all languages mixed together. The `crosslingual` configs mix the languages for test but separate them for train and dev, so I've made one config per language for the train/dev data and one config with the joint test set.
closed
https://github.com/huggingface/datasets/pull/1083
2020-12-04T00:06:04
2020-12-04T17:12:00
2020-12-04T17:12:00
{ "login": "yjernite", "id": 10469459, "type": "User" }
[]
true
[]
756,676,218
1,082
Myanmar news dataset
Add a news topic classification dataset in the Myanmar / Burmese language. This data was collected in 2017 by Aye Hninn Khine and published on GitHub with a GPL license: https://github.com/ayehninnkhine/MyanmarNewsClassificationSystem
closed
https://github.com/huggingface/datasets/pull/1082
2020-12-03T23:39:00
2020-12-04T10:13:38
2020-12-04T10:13:38
{ "login": "mapmeld", "id": 643918, "type": "User" }
[]
true
[]
756,672,527
1,081
Add Knowledge-Enhanced Language Model Pre-training (KELM)
Adds the KELM dataset.

- Webpage/repo: https://github.com/google-research-datasets/KELM-corpus
- Paper: https://arxiv.org/pdf/2010.12688.pdf
closed
https://github.com/huggingface/datasets/pull/1081
2020-12-03T23:30:09
2020-12-04T16:36:28
2020-12-04T16:36:28
{ "login": "joeddav", "id": 9353833, "type": "User" }
[]
true
[]
756,663,464
1,080
Add WikiANN NER dataset
This PR adds the full set of 176 languages from the balanced train/dev/test splits of WikiANN / PAN-X from: https://github.com/afshinrahimi/mmner

Until now, only 40 of these languages were available in `datasets` as part of the XTREME benchmark. Courtesy of the dataset author, we can now download this dataset from a Dropbox URL without needing a manual download anymore 🥳, so at some point it would be worth updating the PAN-X subset of XTREME as well 😄

Link to gist with some snippets for producing dummy data: https://gist.github.com/lewtun/5b93294ab6dbcf59d1493dbe2cfd6bb9

P.S. @yjernite I think I was confused about needing to generate a set of YAML tags per config, so ended up just adding a single one in the README.
closed
https://github.com/huggingface/datasets/pull/1080
2020-12-03T23:09:24
2020-12-06T17:18:55
2020-12-06T17:18:55
{ "login": "lewtun", "id": 26859204, "type": "User" }
[]
true
[]
756,652,427
1,079
nkjp-ner
- **Name:** *nkjp-ner*
- **Description:** *The NKJP-NER is based on a human-annotated part of NKJP. We extracted sentences with named entities of exactly one type. The task is to predict the type of the named entity.*
- **Data:** *https://klejbenchmark.com/tasks/*
- **Motivation:** *The KLEJ benchmark (Kompleksowa Lista Ewaluacji Językowych) is a set of nine evaluation tasks for Polish language understanding.*
closed
https://github.com/huggingface/datasets/pull/1079
2020-12-03T22:47:26
2020-12-04T09:42:06
2020-12-04T09:42:06
{ "login": "abecadel", "id": 1654113, "type": "User" }
[]
true
[]
756,633,215
1,078
add AJGT dataset
Arabic Jordanian General Tweets.
closed
https://github.com/huggingface/datasets/pull/1078
2020-12-03T22:16:31
2020-12-04T09:55:15
2020-12-04T09:55:15
{ "login": "zaidalyafeai", "id": 15667714, "type": "User" }
[]
true
[]
756,617,964
1,077
Added glucose dataset
This PR adds the [Glucose](https://github.com/ElementalCognition/glucose) dataset.
closed
https://github.com/huggingface/datasets/pull/1077
2020-12-03T21:49:01
2020-12-04T09:55:53
2020-12-04T09:55:52
{ "login": "TevenLeScao", "id": 26709476, "type": "User" }
[]
true
[]
756,584,328
1,076
quac quac / coin coin
Add QuAC (Question Answering in Context). I linearized most of the dictionaries to lists. Referenced the authors' datasheet for the dataset card. 🦆🦆🦆 Coin coin
closed
https://github.com/huggingface/datasets/pull/1076
2020-12-03T20:55:29
2020-12-04T16:36:39
2020-12-04T09:15:20
{ "login": "VictorSanh", "id": 16107619, "type": "User" }
[]
true
[]
756,501,235
1,075
adding cleaned verion of E2E NLG
Found at: https://github.com/tuetschek/e2e-cleaning
closed
https://github.com/huggingface/datasets/pull/1075
2020-12-03T19:21:07
2020-12-03T19:43:56
2020-12-03T19:43:56
{ "login": "yjernite", "id": 10469459, "type": "User" }
[]
true
[]
756,483,172
1,074
Swedish MT STS-B
Added a Swedish machine translated version of the well known STS-B Corpus
closed
https://github.com/huggingface/datasets/pull/1074
2020-12-03T19:06:25
2020-12-04T20:22:27
2020-12-03T20:44:28
{ "login": "timpal0l", "id": 6556710, "type": "User" }
[]
true
[]
756,468,034
1,073
Add DialogRE dataset
Adding the [DialogRE](https://github.com/nlpdata/dialogre) dataset, version 2. All tests passed successfully.
closed
https://github.com/huggingface/datasets/pull/1073
2020-12-03T18:56:40
2020-12-20T13:34:48
2020-12-04T13:41:51
{ "login": "vineeths96", "id": 50873201, "type": "User" }
[]
true
[]
756,454,511
1,072
actually uses the previously declared VERSION on the configs in the template
closed
https://github.com/huggingface/datasets/pull/1072
2020-12-03T18:44:27
2020-12-03T19:35:46
2020-12-03T19:35:46
{ "login": "yjernite", "id": 10469459, "type": "User" }
[]
true
[]
756,447,296
1,071
add xlrd to test package requirements
Adds `xlrd` package to the test requirements to handle scripts that use `pandas` to load excel files
closed
https://github.com/huggingface/datasets/pull/1071
2020-12-03T18:32:47
2020-12-03T18:47:16
2020-12-03T18:47:16
{ "login": "yjernite", "id": 10469459, "type": "User" }
[]
true
[]
756,442,481
1,070
add conv_ai
Adding ConvAI dataset https://github.com/DeepPavlov/convai/tree/master/2017
closed
https://github.com/huggingface/datasets/pull/1070
2020-12-03T18:25:20
2020-12-04T07:58:35
2020-12-04T06:44:34
{ "login": "patil-suraj", "id": 27137566, "type": "User" }
[]
true
[]
756,425,737
1,069
Test
closed
https://github.com/huggingface/datasets/pull/1069
2020-12-03T18:01:45
2020-12-04T04:24:18
2020-12-04T04:24:11
{ "login": "manandey", "id": 6687858, "type": "User" }
[]
true
[]
756,417,337
1,068
Add Pubmed (citation + abstract) dataset (2020).
closed
https://github.com/huggingface/datasets/pull/1068
2020-12-03T17:54:10
2020-12-23T09:52:07
2020-12-23T09:52:07
{ "login": "Narsil", "id": 204321, "type": "User" }
[]
true
[]
756,414,212
1,067
add xquad-r dataset
closed
https://github.com/huggingface/datasets/pull/1067
2020-12-03T17:50:01
2020-12-03T17:53:21
2020-12-03T17:53:15
{ "login": "manandey", "id": 6687858, "type": "User" }
[]
true
[]
756,391,957
1,066
Add ChrEn
Adding the Cherokee English machine translation dataset of https://github.com/ZhangShiyue/ChrEn
closed
https://github.com/huggingface/datasets/pull/1066
2020-12-03T17:17:48
2020-12-03T21:49:39
2020-12-03T21:49:39
{ "login": "yjernite", "id": 10469459, "type": "User" }
[]
true
[]
756,383,414
1,065
add xquad-r dataset
closed
https://github.com/huggingface/datasets/pull/1065
2020-12-03T17:06:23
2020-12-03T17:42:21
2020-12-03T17:42:03
{ "login": "manandey", "id": 6687858, "type": "User" }
[]
true
[]
756,382,186
1,064
Not support links with 302 redirect
I have an issue adding this download link: https://github.com/jitkapat/thailitcorpus/releases/download/v.2.0/tlc_v.2.0.tar.gz

It might be because it is not a direct link (it returns 302 and redirects to AWS, which returns 403 for HEAD requests).

```python
r.head("https://github.com/jitkapat/thailitcorpus/releases/download/v.2.0/tlc_v.2.0.tar.gz", allow_redirects=True)
# <Response [403]>
```
closed
https://github.com/huggingface/datasets/issues/1064
2020-12-03T17:04:43
2021-01-14T02:51:25
2021-01-14T02:51:25
{ "login": "chameleonTK", "id": 6429850, "type": "User" }
[ { "name": "bug", "color": "d73a4a" }, { "name": "enhancement", "color": "a2eeef" } ]
false
[]
756,376,374
1,063
Add the Ud treebank
This PR adds the 183 datasets in 104 languages of the UD Treebank.
closed
https://github.com/huggingface/datasets/pull/1063
2020-12-03T16:56:41
2020-12-04T16:11:54
2020-12-04T15:51:46
{ "login": "jplu", "id": 959590, "type": "User" }
[]
true
[]
756,373,187
1,062
Add KorNLU dataset
Added Korean NLU datasets. The link to the dataset can be found [here](https://github.com/kakaobrain/KorNLUDatasets) and the paper can be found [here](https://arxiv.org/abs/2004.03289)

**Note**: The MNLI tsv file is broken, so this code currently excludes the file. Please suggest an alternative if any @lhoestq

- [x] Followed the instructions in CONTRIBUTING.md
- [x] Ran the tests successfully
- [x] Created the dummy data
closed
https://github.com/huggingface/datasets/pull/1062
2020-12-03T16:52:39
2020-12-04T11:05:19
2020-12-04T11:05:19
{ "login": "sumanthd17", "id": 28291870, "type": "User" }
[]
true
[]
756,362,661
1,061
add labr dataset
Arabic Book Reviews dataset.
closed
https://github.com/huggingface/datasets/pull/1061
2020-12-03T16:38:57
2020-12-03T18:25:44
2020-12-03T18:25:44
{ "login": "zaidalyafeai", "id": 15667714, "type": "User" }
[]
true
[]
756,349,001
1,060
Fix squad V2 metric script
The current squad v2 metric doesn't work with the squad (v1 or v2) datasets. The script is copied from `squad_evaluate` in transformers, which requires the labels (with multiple answers) to be like this:

```
references = [{'id': 'a', 'answers': [
    {'text': 'Denver Broncos', 'answer_start': 177},
    {'text': 'Denver Broncos', 'answer_start': 177}
]}]
```

while the dataset had references like this:

```
references = [{'id': 'a', 'answers': {'text': ['Denver Broncos', 'Denver Broncos'], 'answer_start': [177, 177]}}]
```

Using one or the other format fails with the current squad v2 metric:

```python
from datasets import load_metric

metric = load_metric("squad_v2")
predictions = [{'id': 'a', 'prediction_text': 'Denver Broncos', 'no_answer_probability': 0.0}]
references = [{'id': 'a', 'answers': [
    {'text': 'Denver Broncos', 'answer_start': 177},
    {'text': 'Denver Broncos', 'answer_start': 177}
]}]
metric.compute(predictions=predictions, references=references)
```

fails, as well as

```python
from datasets import load_metric

metric = load_metric("squad_v2")
predictions = [{'id': 'a', 'prediction_text': 'Denver Broncos', 'no_answer_probability': 0.0}]
references = [{'id': 'a', 'answers': {'text': ['Denver Broncos', 'Denver Broncos'], 'answer_start': [177, 177]}}]
metric.compute(predictions=predictions, references=references)
```

This is because arrow reformats the references behind the scenes. With this PR (tested locally), both of the snippets above work and return proper results.
closed
https://github.com/huggingface/datasets/pull/1060
2020-12-03T16:23:32
2020-12-22T15:02:20
2020-12-22T15:02:19
{ "login": "sgugger", "id": 35901082, "type": "User" }
[]
true
[]
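The two reference layouts in the PR description above (a row-oriented list of answer dicts vs. Arrow's columnar dict of lists) can be converted mechanically. A small sketch of the reshaping, with hypothetical helper names that are not part of the metric script:

```python
# Hypothetical helpers (not part of the metric script) converting between the
# two `answers` layouts quoted in the PR description above: the row-oriented
# list of dicts expected by squad_evaluate, and the columnar dict of lists
# that Arrow produces.
def to_columnar(answers):
    return {
        "text": [a["text"] for a in answers],
        "answer_start": [a["answer_start"] for a in answers],
    }

def to_rows(answers):
    return [
        {"text": t, "answer_start": s}
        for t, s in zip(answers["text"], answers["answer_start"])
    ]

rows = [{"text": "Denver Broncos", "answer_start": 177},
        {"text": "Denver Broncos", "answer_start": 177}]
assert to_rows(to_columnar(rows)) == rows  # round-trips cleanly
```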
756,348,623
1,059
Add TLC
Added TLC dataset
closed
https://github.com/huggingface/datasets/pull/1059
2020-12-03T16:23:06
2020-12-04T11:15:33
2020-12-04T11:15:33
{ "login": "chameleonTK", "id": 6429850, "type": "User" }
[]
true
[]
756,332,704
1,058
added paws-x dataset
Added paws-x dataset. Updating README and tags in the dataset card in a while
closed
https://github.com/huggingface/datasets/pull/1058
2020-12-03T16:06:01
2020-12-04T13:46:05
2020-12-04T13:46:05
{ "login": "bhavitvyamalik", "id": 19718818, "type": "User" }
[]
true
[]
756,331,419
1,057
Adding TamilMixSentiment
closed
https://github.com/huggingface/datasets/pull/1057
2020-12-03T16:04:25
2020-12-04T10:09:34
2020-12-04T10:09:12
{ "login": "jamespaultg", "id": 7421838, "type": "User" }
[]
true
[]
756,309,828
1,056
Add deal_or_no_dialog
Add the deal_or_no_dialog dataset.

GitHub: https://github.com/facebookresearch/end-to-end-negotiator

Paper: [Deal or No Deal? End-to-End Learning for Negotiation Dialogues](https://arxiv.org/abs/1706.05125)
closed
https://github.com/huggingface/datasets/pull/1056
2020-12-03T15:38:07
2020-12-03T18:13:45
2020-12-03T18:13:45
{ "login": "moussaKam", "id": 28675016, "type": "User" }
[]
true
[]
756,298,372
1,055
Add hebrew-sentiment
hebrew-sentiment dataset is ready! (including tests, tags etc)
closed
https://github.com/huggingface/datasets/pull/1055
2020-12-03T15:24:31
2022-02-21T15:26:05
2020-12-04T11:24:16
{ "login": "elronbandel", "id": 23455264, "type": "User" }
[]
true
[]
756,265,688
1,054
Add dataset - SemEval 2014 - Task 1
Adding the dataset of SemEval 2014 Task 1. Found the dataset under the shared Google Sheet > Recurring Task Datasets.

Task homepage: https://alt.qcri.org/semeval2014/task1

Thank you!
closed
https://github.com/huggingface/datasets/pull/1054
2020-12-03T14:52:59
2020-12-04T00:52:44
2020-12-04T00:52:44
{ "login": "ashmeet13", "id": 24266995, "type": "User" }
[]
true
[]
756,176,061
1,053
Fix dataset URL and file names, and add column name in "Social Bias Frames" dataset
# Why I did this When I used the "social_bias_frames" dataset in this library, I got 404 errors. So, I fixed this error and some other problems that I ran into while using the dataset. # What I did * Fix the dataset URL * Fix the dataset file names * Add a "dataSource" column Thank you!
closed
https://github.com/huggingface/datasets/pull/1053
2020-12-03T13:03:05
2020-12-03T13:42:26
2020-12-03T13:42:26
{ "login": "otakumesi", "id": 14996977, "type": "User" }
[]
true
[]
756,171,798
1,052
add sharc dataset
This PR adds the ShARC dataset. More info: https://sharc-data.github.io/index.html
closed
https://github.com/huggingface/datasets/pull/1052
2020-12-03T12:57:23
2020-12-03T16:44:21
2020-12-03T14:09:54
{ "login": "patil-suraj", "id": 27137566, "type": "User" }
[]
true
[]
756,169,049
1,051
Add Facebook SimpleQuestionV2
Add simple questions v2: https://research.fb.com/downloads/babi/
closed
https://github.com/huggingface/datasets/pull/1051
2020-12-03T12:53:20
2020-12-03T17:31:59
2020-12-03T17:31:58
{ "login": "abhishekkrthakur", "id": 1183441, "type": "User" }
[]
true
[]
756,166,728
1,050
Add GoEmotions
Adds the GoEmotions dataset, a nice emotion classification dataset with 27 (multi-)label annotations on reddit comments. Includes both a large raw version and a narrowed version with predefined train/test/val splits, which I've included as separate configs with the latter as a default. - Webpage/repo: https://github.com/google-research/google-research/tree/master/goemotions - Paper: https://arxiv.org/abs/2005.00547
closed
https://github.com/huggingface/datasets/pull/1050
2020-12-03T12:49:53
2020-12-03T17:37:45
2020-12-03T17:30:08
{ "login": "joeddav", "id": 9353833, "type": "User" }
[]
true
[]
756,157,602
1,049
Add siswati ner corpus
closed
https://github.com/huggingface/datasets/pull/1049
2020-12-03T12:36:00
2020-12-03T17:27:02
2020-12-03T17:26:55
{ "login": "yvonnegitau", "id": 7923902, "type": "User" }
[]
true
[]
756,133,072
1,048
Adding NCHLT dataset
https://repo.sadilar.org/handle/20.500.12185/7/discover?filtertype_0=database&filtertype_1=title&filter_relational_operator_1=contains&filter_relational_operator_0=equals&filter_1=&filter_0=Monolingual+Text+Corpora%3A+Annotated&filtertype=project&filter_relational_operator=equals&filter=NCHLT+Text+II
closed
https://github.com/huggingface/datasets/pull/1048
2020-12-03T11:59:25
2020-12-04T13:29:57
2020-12-04T13:29:57
{ "login": "Narsil", "id": 204321, "type": "User" }
[]
true
[]
756,127,490
1,047
Add KorNLU
Added Korean NLU datasets. The link to the dataset can be found [here](https://github.com/kakaobrain/KorNLUDatasets) and the paper can be found [here](https://arxiv.org/abs/2004.03289) **Note**: The MNLI tsv file is broken, so this code currently excludes that file. Please suggest an alternative if there is one, @lhoestq - [x] Followed the instructions in CONTRIBUTING.md - [x] Ran the tests successfully - [x] Created the dummy data
closed
https://github.com/huggingface/datasets/pull/1047
2020-12-03T11:50:54
2020-12-03T17:17:07
2020-12-03T17:16:09
{ "login": "sumanthd17", "id": 28291870, "type": "User" }
[]
true
[]
756,122,709
1,046
Dataset.map() turns tensors into lists?
I apply `Dataset.map()` to a function that returns a dict of torch tensors (like a tokenizer from the transformers repo). However, in the mapped dataset, these tensors have turned into lists! ``` import datasets import torch from datasets import load_dataset print("version datasets", datasets.__version__) dataset = load_dataset("snli", split='train[0:50]') def tokenizer_fn(example): # actually uses a tokenizer which does something like: return {'input_ids': torch.tensor([[0, 1, 2]])} print("First item in dataset:\n", dataset[0]) tokenized = tokenizer_fn(dataset[0]) print("Tokenized hyp:\n", tokenized) dataset_tok = dataset.map(tokenizer_fn, batched=False, remove_columns=['label', 'premise', 'hypothesis']) print("Tokenized using map:\n", dataset_tok[0]) print(type(tokenized['input_ids']), type(dataset_tok[0]['input_ids'])) ``` The output is: ``` version datasets 1.1.3 Reusing dataset snli (/home/tom/.cache/huggingface/datasets/snli/plain_text/1.0.0/bb1102591c6230bd78813e229d5dd4c7fbf4fc478cec28f298761eb69e5b537c) First item in dataset: {'premise': 'A person on a horse jumps over a broken down airplane.', 'hypothesis': 'A person is training his horse for a competition.', 'label': 1} Tokenized hyp: {'input_ids': tensor([[0, 1, 2]])} Loading cached processed dataset at /home/tom/.cache/huggingface/datasets/snli/plain_text/1.0.0/bb1102591c6230bd78813e229d5dd4c7fbf4fc478cec28f298761eb69e5b537c/cache-fe38f449fe9ac46f.arrow Tokenized using map: {'input_ids': [[0, 1, 2]]} <class 'torch.Tensor'> <class 'list'> ``` Or am I doing something wrong?
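A minimal sketch of why this happens (pure Python, no `datasets` or `torch` required): `map()` writes its outputs to an Arrow table, which only stores plain types, so a tensor's values survive but its type does not — the same effect as a round-trip through any plain-data serializer. (Assuming a current `datasets` version, the usual remedy is `dataset_tok.set_format(type="torch", columns=["input_ids"])` to get tensors back at access time.)

```python
import json

# What map() effectively does: the returned dict goes through a columnar
# store (Arrow) that only keeps plain types, so a tensor's values survive
# but its type does not -- just like a json round-trip.
record = {"input_ids": [[0, 1, 2]]}        # the tensor's values, stored as nested lists
restored = json.loads(json.dumps(record))  # round-trip through the store
print(type(restored["input_ids"]))         # <class 'list'>
```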
closed
https://github.com/huggingface/datasets/issues/1046
2020-12-03T11:43:46
2022-10-05T12:12:41
2022-10-05T12:12:41
{ "login": "tombosc", "id": 5270804, "type": "User" }
[]
false
[]
756,120,760
1,045
Add xitsonga ner corpus
closed
https://github.com/huggingface/datasets/pull/1045
2020-12-03T11:40:48
2020-12-03T17:20:03
2020-12-03T17:19:32
{ "login": "yvonnegitau", "id": 7923902, "type": "User" }
[]
true
[]
756,111,647
1,044
Add AMTTL Chinese Word Segmentation Dataset
closed
https://github.com/huggingface/datasets/pull/1044
2020-12-03T11:27:52
2020-12-03T17:13:14
2020-12-03T17:13:13
{ "login": "JetRunner", "id": 22514219, "type": "User" }
[]
true
[]
756,100,717
1,043
Add TSAC: Tunisian Sentiment Analysis Corpus
github: https://github.com/fbougares/TSAC paper: https://www.aclweb.org/anthology/W17-1307/
closed
https://github.com/huggingface/datasets/pull/1043
2020-12-03T11:12:35
2020-12-03T13:35:05
2020-12-03T13:32:24
{ "login": "abhishekkrthakur", "id": 1183441, "type": "User" }
[]
true
[]
756,097,583
1,042
Add Big Patent dataset
- More info on the dataset: https://evasharma.github.io/bigpatent/ - There's another raw version of the dataset available from tfds. However, they're quite large so I don't have the resources to fully test all the configs for that version yet. We'll try to add it later. - ~Currently, there are no dummy data for this dataset yet as I'm facing some problems with generating them. I'm trying to add them later.~
closed
https://github.com/huggingface/datasets/pull/1042
2020-12-03T11:07:59
2020-12-04T04:38:26
2020-12-04T04:38:26
{ "login": "mattbui", "id": 46804938, "type": "User" }
[]
true
[]
756,055,102
1,041
Add SuperGLUE metric
Adds a new metric for the SuperGLUE benchmark (similar to the GLUE benchmark metric).
closed
https://github.com/huggingface/datasets/pull/1041
2020-12-03T10:11:34
2021-02-23T19:02:59
2021-02-23T18:02:12
{ "login": "calpt", "id": 36051308, "type": "User" }
[]
true
[]
756,050,387
1,040
Add UN Universal Declaration of Human Rights (UDHR)
Universal declaration of human rights with translations in 464 languages and dialects. - UN page: https://www.ohchr.org/EN/UDHR/Pages/UDHRIndex.aspx - Raw data source: https://unicode.org/udhr/index.html Each instance of the dataset corresponds to one translation of the document. Since there's only one instance per language (and because there are roughly 500 languages, so the dummy data would be messy), I opted to just include them all under the same single config. I wasn't able to find any kind of license so I just copied the copyright notice. I was pretty careful generating the language tags so they _should_ all be correct & consistent BCP-47 codes per the docs.
closed
https://github.com/huggingface/datasets/pull/1040
2020-12-03T10:04:58
2020-12-03T19:20:15
2020-12-03T19:20:11
{ "login": "joeddav", "id": 9353833, "type": "User" }
[]
true
[]
756,000,478
1,039
Update ADD NEW DATASET
This PR adds a couple of details on cloning/rebasing the repo.
closed
https://github.com/huggingface/datasets/pull/1039
2020-12-03T08:58:32
2020-12-03T09:18:28
2020-12-03T09:18:10
{ "login": "jplu", "id": 959590, "type": "User" }
[]
true
[]
755,987,997
1,038
add med_hop
This PR adds the MedHop dataset from the QAngaroo multi hop reading comprehension datasets More info: http://qangaroo.cs.ucl.ac.uk/index.html
closed
https://github.com/huggingface/datasets/pull/1038
2020-12-03T08:40:27
2020-12-03T16:53:13
2020-12-03T16:52:23
{ "login": "patil-suraj", "id": 27137566, "type": "User" }
[]
true
[]
755,975,586
1,037
Fix docs indentation issues
Replace tabs with spaces.
closed
https://github.com/huggingface/datasets/pull/1037
2020-12-03T08:21:34
2020-12-22T16:01:15
2020-12-22T16:01:15
{ "login": "albertvillanova", "id": 8515462, "type": "User" }
[]
true
[]
755,953,294
1,036
Add PerSenT
Added [Person's SentimenT](https://stonybrooknlp.github.io/PerSenT/) dataset.
closed
https://github.com/huggingface/datasets/pull/1036
2020-12-03T07:43:58
2020-12-14T13:40:43
2020-12-14T13:40:43
{ "login": "jeromeku", "id": 2455711, "type": "User" }
[]
true
[]
755,947,097
1,035
add wiki_hop
This PR adds the WikiHop dataset from the QAngaroo multi hop reading comprehension datasets More info: http://qangaroo.cs.ucl.ac.uk/index.html
closed
https://github.com/huggingface/datasets/pull/1035
2020-12-03T07:32:26
2020-12-03T16:43:40
2020-12-03T16:41:12
{ "login": "patil-suraj", "id": 27137566, "type": "User" }
[]
true
[]