Column schema of the dataset (name, type, and observed value range):

column            type           values / range
id                int64          599M to 3.26B
number            int64          1 to 7.7k
title             string         lengths 1 to 290
body              string         lengths 0 to 228k
state             string         2 values
html_url          string         lengths 46 to 51
created_at        timestamp[s]   2020-04-14 10:18:02 to 2025-07-23 08:04:53
updated_at        timestamp[s]   2020-04-27 16:04:17 to 2025-07-23 18:53:44
closed_at         timestamp[s]   2020-04-14 12:01:40 to 2025-07-23 16:44:42
user              dict
labels            list           lengths 0 to 4
is_pull_request   bool           2 classes
comments          list           lengths 0 to 0
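The rows below are GitHub issues and pull requests from the huggingface/datasets repository. A minimal sketch of loading and inspecting a dataset with this schema using the `datasets` library; the repository id `lhoestq/github-issues` is a placeholder assumption, not necessarily where this particular dump is hosted:

```python
# Minimal sketch of inspecting a dataset with the schema above.
# "lhoestq/github-issues" is a placeholder repository id (an assumption),
# not necessarily the identifier of this exact dump.
from datasets import load_dataset

issues = load_dataset("lhoestq/github-issues", split="train")

print(issues.features)   # column names and types: id, number, title, body, ...
print(issues.num_rows)

# is_pull_request separates pull requests from plain issues
prs = issues.filter(lambda row: row["is_pull_request"])
print(prs[0]["number"], prs[0]["title"], prs[0]["html_url"])
```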
759,705,835
1,335
Added Bianet dataset
Hi :hugs:, This is a PR for [Bianet: A parallel news corpus in Turkish, Kurdish and English; Source](http://opus.nlpl.eu/Bianet.php) dataset
closed
https://github.com/huggingface/datasets/pull/1335
2020-12-08T19:10:32
2020-12-14T10:00:56
2020-12-14T10:00:56
{ "login": "param087", "id": 26374564, "type": "User" }
[]
true
[]
759,699,993
1,334
Add QED Amara Dataset
closed
https://github.com/huggingface/datasets/pull/1334
2020-12-08T19:01:13
2020-12-10T11:17:25
2020-12-10T11:15:57
{ "login": "abhishekkrthakur", "id": 1183441, "type": "User" }
[]
true
[]
759,687,836
1,333
Add Tanzil Dataset
closed
https://github.com/huggingface/datasets/pull/1333
2020-12-08T18:45:15
2020-12-10T11:17:56
2020-12-10T11:14:43
{ "login": "abhishekkrthakur", "id": 1183441, "type": "User" }
[]
true
[]
759,679,135
1,332
Add Open Subtitles Dataset
closed
https://github.com/huggingface/datasets/pull/1332
2020-12-08T18:31:45
2020-12-10T11:17:38
2020-12-10T11:13:18
{ "login": "abhishekkrthakur", "id": 1183441, "type": "User" }
[]
true
[]
759,677,189
1,331
First version of the new dataset hausa_voa_topics
Contains loading script as well as dataset card including YAML tags.
closed
https://github.com/huggingface/datasets/pull/1331
2020-12-08T18:28:52
2020-12-10T11:09:53
2020-12-10T11:09:53
{ "login": "michael-aloys", "id": 1858628, "type": "User" }
[]
true
[]
759,657,324
1,330
added un_ga dataset
Hi :hugs:, This is a PR for [United nations general assembly resolutions: A six-language parallel corpus](http://opus.nlpl.eu/UN.php) dataset
closed
https://github.com/huggingface/datasets/pull/1330
2020-12-08T17:58:38
2020-12-14T17:52:34
2020-12-14T17:52:34
{ "login": "param087", "id": 26374564, "type": "User" }
[]
true
[]
759,654,174
1,329
Add yoruba ner corpus
closed
https://github.com/huggingface/datasets/pull/1329
2020-12-08T17:54:00
2020-12-08T23:11:12
2020-12-08T23:11:12
{ "login": "dadelani", "id": 23586676, "type": "User" }
[]
true
[]
759,634,907
1,328
Added the NewsPH Raw dataset and corresponding dataset card
This PR adds the original NewsPH dataset which is used to autogenerate the NewsPH-NLI dataset. Reopened a new PR as the previous one had problems. Paper: https://arxiv.org/abs/2010.11574 Repo: https://github.com/jcblaisecruz02/Filipino-Text-Benchmarks
closed
https://github.com/huggingface/datasets/pull/1328
2020-12-08T17:25:45
2020-12-10T11:04:34
2020-12-10T11:04:34
{ "login": "jcblaisecruz02", "id": 24757547, "type": "User" }
[]
true
[]
759,629,321
1,327
Add msr_genomics_kbcomp dataset
closed
https://github.com/huggingface/datasets/pull/1327
2020-12-08T17:18:20
2020-12-08T18:18:32
2020-12-08T18:18:06
{ "login": "manandey", "id": 6687858, "type": "User" }
[]
true
[]
759,611,784
1,326
TEP: Tehran English-Persian parallel corpus
TEP: Tehran English-Persian parallel corpus. More info: http://opus.nlpl.eu/TEP.php
closed
https://github.com/huggingface/datasets/pull/1326
2020-12-08T16:56:53
2020-12-19T14:55:03
2020-12-10T11:25:17
{ "login": "spatil6", "id": 6419011, "type": "User" }
[]
true
[]
759,595,556
1,325
Add humicroedit dataset
Pull request for adding humicroedit dataset
closed
https://github.com/huggingface/datasets/pull/1325
2020-12-08T16:35:46
2020-12-17T17:59:09
2020-12-17T17:59:09
{ "login": "saradhix", "id": 1351362, "type": "User" }
[]
true
[]
759,587,864
1,324
❓ Sharing ElasticSearch indexed dataset
Hi there, First of all, thank you very much for this amazing library. Datasets have become my preferred data structure for basically everything I am currently doing. **Question:** I'm working with a dataset and I have an elasticsearch container running at localhost:9200. I added an elasticsearch index and I was wondering - how can I know where it has been saved? - how can I share the indexed dataset with others? I tried to dig into the docs, but could not find anything about that. Thank you very much for your help. Best, Pietro Edit: apologies for the wrong label
open
https://github.com/huggingface/datasets/issues/1324
2020-12-08T16:25:58
2020-12-22T07:50:56
null
{ "login": "pietrolesci", "id": 61748653, "type": "User" }
[ { "name": "dataset request", "color": "e99695" } ]
false
[]
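Editorial aside on the question above: an Elasticsearch index is stored by the Elasticsearch server itself, not in the dataset's cache files, so it cannot simply be shipped with the dataset. A minimal sketch of building and querying such an index with the `datasets` API, assuming an Elasticsearch server at localhost:9200 and the `elasticsearch` client installed (both assumptions, and this is not the issue's official resolution):

```python
# Sketch: add and query an Elasticsearch index on a dataset column.
# Assumes an Elasticsearch server running at localhost:9200.
from datasets import load_dataset

squad = load_dataset("squad", split="validation")

# The index is built inside the Elasticsearch server, not in the dataset files.
squad.add_elasticsearch_index(
    "context", host="localhost", port="9200", es_index_name="hf_squad_context"
)

scores, examples = squad.get_nearest_examples("context", "machine learning", k=5)
print(examples["title"][:5])
```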
759,581,919
1,323
Add CC-News dataset of English language articles
Adds [CC-News](https://commoncrawl.org/2016/10/news-dataset-available/) dataset. It contains 708241 English language news articles. Although each article has a language field these tags are not reliable. I've used Spacy language detection [pipeline](https://spacy.io/universe/project/spacy-langdetect) to confirm that the article language is indeed English. The prepared dataset is temporarily hosted on my private Google Storage [bucket](https://storage.googleapis.com/hf_datasets/cc_news.tar.gz). We can move it to HF storage and update this PR before merging.
closed
https://github.com/huggingface/datasets/pull/1323
2020-12-08T16:18:15
2021-02-01T16:55:49
2021-02-01T16:55:49
{ "login": "vblagoje", "id": 458335, "type": "User" }
[]
true
[]
759,576,003
1,322
add indonlu benchmark datasets
The IndoNLU benchmark is a collection of resources for training, evaluating, and analyzing natural language understanding systems for the Indonesian language. There are 12 datasets in IndoNLU.
closed
https://github.com/huggingface/datasets/pull/1322
2020-12-08T16:10:58
2020-12-13T02:11:27
2020-12-13T01:54:28
{ "login": "yasirabd", "id": 6518504, "type": "User" }
[]
true
[]
759,573,610
1,321
added dutch_social
The Dutch social media tweets dataset, which contains more than 210k tweets in the Dutch language. The tweets have been machine-annotated with sentiment scores (`label` feature) as well as `industry` and `hisco_codes`. It can be used for sentiment analysis, multi-label classification and entity tagging.
closed
https://github.com/huggingface/datasets/pull/1321
2020-12-08T16:07:54
2020-12-16T10:14:17
2020-12-16T10:14:17
{ "login": "skyprince999", "id": 9033954, "type": "User" }
[]
true
[]
759,566,148
1,320
Added the WikiText-TL39 dataset and corresponding card
This PR adds the WikiText-TL-39 Filipino Language Modeling dataset. Restarted a new pull request since there were problems with the earlier one. Paper: https://arxiv.org/abs/1907.00409 Repo: https://github.com/jcblaisecruz02/Filipino-Text-Benchmarks
closed
https://github.com/huggingface/datasets/pull/1320
2020-12-08T16:00:26
2020-12-10T11:24:53
2020-12-10T11:24:53
{ "login": "jcblaisecruz02", "id": 24757547, "type": "User" }
[]
true
[]
759,565,923
1,319
adding wili-2018 language identification dataset
closed
https://github.com/huggingface/datasets/pull/1319
2020-12-08T16:00:09
2020-12-14T21:20:32
2020-12-14T21:20:32
{ "login": "Shubhambindal2017", "id": 31540058, "type": "User" }
[]
true
[]
759,565,629
1,318
ethos first commit
Ethos passed all the tests except for this one: RUN_SLOW=1 pytest tests/test_dataset_common.py::LocalDatasetTest::test_load_dataset_all_configs_<your-dataset-name>, which fails with this error: E OSError: Cannot find data file. E Original error: E [Errno 2] No such file or directory:
closed
https://github.com/huggingface/datasets/pull/1318
2020-12-08T15:59:47
2020-12-10T14:45:57
2020-12-10T14:45:57
{ "login": "iamollas", "id": 22838900, "type": "User" }
[]
true
[]
759,553,495
1,317
add 10k German News Article Dataset
closed
https://github.com/huggingface/datasets/pull/1317
2020-12-08T15:44:25
2021-09-17T16:55:51
2020-12-16T16:50:43
{ "login": "stevhliu", "id": 59462357, "type": "User" }
[]
true
[]
759,549,601
1,316
Allow GitHub releases as dataset source
# Summary Providing a GitHub release URL to `DownloadManager.download()` currently throws a `ConnectionError: Couldn't reach [DOWNLOAD_URL]`. This PR fixes this problem by adding an exception for GitHub releases in `datasets.utils.file_utils.get_from_cache()`. # Reproduce ``` import datasets url = 'http://github.com/benjaminvdb/DBRD/releases/download/v3.0/DBRD_v3.tgz' result = datasets.utils.file_utils.get_from_cache(url) # Returns: ConnectionError: Couldn't reach http://github.com/benjaminvdb/DBRD/releases/download/v3.0/DBRD_v3.tgz ``` # Cause GitHub releases returns a HTTP status 403 (FOUND), indicating that the request is being redirected (to AWS S3, in this case). `get_from_cache()` checks whether the status is 200 (OK) or if it is part of two exceptions (Google Drive or Firebase), otherwise the mentioned error is thrown. # Solution Just like the exceptions for Google Drive and Firebase, add a condition for GitHub releases URLs that return the HTTP status 403. If this is the case, continue normally.
closed
https://github.com/huggingface/datasets/pull/1316
2020-12-08T15:39:35
2020-12-10T10:12:00
2020-12-10T10:12:00
{ "login": "benjaminvdb", "id": 8875786, "type": "User" }
[]
true
[]
759,548,706
1,315
add yelp_review_full
This corresponds to the Yelp-5 requested in https://github.com/huggingface/datasets/issues/353 I included the dataset card.
closed
https://github.com/huggingface/datasets/pull/1315
2020-12-08T15:38:27
2020-12-09T15:55:49
2020-12-09T15:55:49
{ "login": "hfawaz", "id": 29229602, "type": "User" }
[]
true
[]
759,541,937
1,314
Add snips built in intents 2016 12
This PR proposes to add the Snips.ai built in intents dataset. The first configuration added is for the intent labels only, but the dataset includes entity slots that may in future be added as alternate configurations.
closed
https://github.com/huggingface/datasets/pull/1314
2020-12-08T15:30:19
2020-12-14T09:59:07
2020-12-14T09:59:07
{ "login": "bduvenhage", "id": 8405335, "type": "User" }
[]
true
[]
759,536,512
1,313
Add HateSpeech Corpus for Polish
This PR adds a HateSpeech Corpus for Polish, containing offensive language examples. - **Homepage:** http://zil.ipipan.waw.pl/HateSpeech - **Paper:** http://www.qualitativesociologyreview.org/PL/Volume38/PSJ_13_2_Troszynski_Wawer.pdf
closed
https://github.com/huggingface/datasets/pull/1313
2020-12-08T15:23:53
2020-12-16T16:48:45
2020-12-16T16:48:45
{ "login": "kacperlukawski", "id": 2649301, "type": "User" }
[]
true
[]
759,532,626
1,312
Jigsaw toxicity pred
Requires manually downloading data from Kaggle.
closed
https://github.com/huggingface/datasets/pull/1312
2020-12-08T15:19:14
2020-12-11T12:11:32
2020-12-11T12:11:32
{ "login": "taihim", "id": 13764071, "type": "User" }
[]
true
[]
759,514,819
1,311
Add OPUS Bible Corpus (102 Languages)
closed
https://github.com/huggingface/datasets/pull/1311
2020-12-08T14:57:08
2020-12-09T15:30:57
2020-12-09T15:30:56
{ "login": "abhishekkrthakur", "id": 1183441, "type": "User" }
[]
true
[]
759,508,921
1,310
Add OffensEval-TR 2020 Dataset
This PR adds the OffensEval-TR 2020 dataset, a Turkish offensive language corpus by me and @basakbuluz. The corpus consists of randomly sampled tweets annotated in a similar way to [OffensEval](https://sites.google.com/site/offensevalsharedtask/) and [GermEval](https://projects.fzai.h-da.de/iggsa/). - **Homepage:** [offensive-turkish](https://coltekin.github.io/offensive-turkish/) - **Paper:** [A Corpus of Turkish Offensive Language on Social Media](https://coltekin.github.io/offensive-turkish/troff.pdf) - **Point of Contact:** [Çağrı Çöltekin](ccoltekin@sfs.uni-tuebingen.de)
closed
https://github.com/huggingface/datasets/pull/1310
2020-12-08T14:49:51
2020-12-12T14:15:42
2020-12-09T16:02:06
{ "login": "yavuzKomecoglu", "id": 5150963, "type": "User" }
[]
true
[]
759,501,370
1,309
Add SAMSum Corpus dataset
Did not spend much time writing the README; might update it later. Copied the description and some other details from tensorflow_datasets: https://github.com/tensorflow/datasets/blob/master/tensorflow_datasets/summarization/samsum.py
closed
https://github.com/huggingface/datasets/pull/1309
2020-12-08T14:40:56
2020-12-14T12:32:33
2020-12-14T10:20:55
{ "login": "changjonathanc", "id": 31893406, "type": "User" }
[]
true
[]
759,492,953
1,308
Add Wiki Lingua Dataset
Hello, This is my first PR. I have added Wiki Lingua Dataset along with dataset card to the best of my knowledge. There was one hiccup though. I was unable to create dummy data because the data is in pkl format. From the document, I see that: ```At the moment it supports data files in the following format: txt, csv, tsv, jsonl, json, xml```
closed
https://github.com/huggingface/datasets/pull/1308
2020-12-08T14:30:13
2020-12-14T10:39:52
2020-12-14T10:39:52
{ "login": "katnoria", "id": 7674948, "type": "User" }
[]
true
[]
759,458,835
1,307
adding capes
Adding Parallel corpus of theses and dissertation abstracts in Portuguese and English from CAPES https://sites.google.com/view/felipe-soares/datasets#h.p_kxOR6EhHm2a6
closed
https://github.com/huggingface/datasets/pull/1307
2020-12-08T13:46:13
2020-12-09T15:40:09
2020-12-09T15:27:45
{ "login": "patil-suraj", "id": 27137566, "type": "User" }
[]
true
[]
759,448,427
1,306
add W&I + LOCNESS dataset (BEA-2019 workshop shared task on GEC)
- **Name:** W&I + LOCNESS dataset (from the BEA-2019 workshop shared task on GEC) - **Description:** https://www.cl.cam.ac.uk/research/nl/bea2019st/#data - **Paper:** https://www.aclweb.org/anthology/W19-4406/ - **Motivation:** This is a recent dataset (actually two in one) for grammatical error correction and is used for benchmarking in this field of NLP. ### Checkbox - [x] Create the dataset script `/datasets/my_dataset/my_dataset.py` using the template - [x] Fill the `_DESCRIPTION` and `_CITATION` variables - [x] Implement `_infos()`, `_split_generators()` and `_generate_examples()` - [x] Make sure that the `BUILDER_CONFIGS` class attribute is filled with the different configurations of the dataset and that the `BUILDER_CONFIG_CLASS` is specified if there is a custom config class. - [x] Generate the metadata file `dataset_infos.json` for all configurations - [x] Generate the dummy data `dummy_data.zip` files to have the dataset script tested and that they don't weigh too much (<50KB) - [x] Add the dataset card `README.md` using the template : fill the tags and the various paragraphs - [x] Both tests for the real data and the dummy data pass.
closed
https://github.com/huggingface/datasets/pull/1306
2020-12-08T13:31:34
2020-12-10T09:53:54
2020-12-10T09:53:28
{ "login": "aseifert", "id": 4944799, "type": "User" }
[]
true
[]
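The checklist in the PR above refers to the dataset-script template used for these contributions. A minimal sketch of such a loading script (GeneratorBasedBuilder style); the class name, URL and fields are placeholders, not the actual W&I + LOCNESS script:

```python
# Toy builder illustrating _info, _split_generators and _generate_examples.
# MyDataset, _URL and the feature names are placeholder assumptions.
import json

import datasets

_URL = "https://example.com/my_dataset.jsonl"  # placeholder download URL


class MyDataset(datasets.GeneratorBasedBuilder):
    """Minimal example of the datasets loading-script structure."""

    def _info(self):
        return datasets.DatasetInfo(
            description="Placeholder description.",
            features=datasets.Features(
                {
                    "text": datasets.Value("string"),
                    "label": datasets.ClassLabel(names=["neg", "pos"]),
                }
            ),
        )

    def _split_generators(self, dl_manager):
        path = dl_manager.download_and_extract(_URL)
        return [
            datasets.SplitGenerator(
                name=datasets.Split.TRAIN, gen_kwargs={"filepath": path}
            )
        ]

    def _generate_examples(self, filepath):
        with open(filepath, encoding="utf-8") as f:
            for idx, line in enumerate(f):
                record = json.loads(line)
                yield idx, {"text": record["text"], "label": record["label"]}
```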
759,446,665
1,305
[README] Added Windows command to enable slow tests
The Windows command to run slow tests has caused issues, so this adds a functional Windows command.
closed
https://github.com/huggingface/datasets/pull/1305
2020-12-08T13:29:04
2020-12-08T13:56:33
2020-12-08T13:56:32
{ "login": "TevenLeScao", "id": 26709476, "type": "User" }
[]
true
[]
759,440,841
1,304
adding eitb_parcc
Adding EiTB-ParCC: Parallel Corpus of Comparable News http://opus.nlpl.eu/EiTB-ParCC.php
closed
https://github.com/huggingface/datasets/pull/1304
2020-12-08T13:20:54
2020-12-09T18:02:54
2020-12-09T18:02:03
{ "login": "patil-suraj", "id": 27137566, "type": "User" }
[]
true
[]
759,440,484
1,303
adding opus_openoffice
Adding Opus OpenOffice: http://opus.nlpl.eu/OpenOffice.php 8 languages, 28 bitexts
closed
https://github.com/huggingface/datasets/pull/1303
2020-12-08T13:20:21
2020-12-10T09:37:10
2020-12-10T09:37:10
{ "login": "patil-suraj", "id": 27137566, "type": "User" }
[]
true
[]
759,435,740
1,302
Add Danish NER dataset
closed
https://github.com/huggingface/datasets/pull/1302
2020-12-08T13:13:54
2020-12-10T09:35:26
2020-12-10T09:35:26
{ "login": "ophelielacroix", "id": 28562991, "type": "User" }
[]
true
[]
759,419,945
1,301
arxiv dataset added
**adding arXiv dataset**: arXiv dataset and metadata of 1.7M+ scholarly papers across STEM. Dataset link: https://www.kaggle.com/Cornell-University/arxiv
closed
https://github.com/huggingface/datasets/pull/1301
2020-12-08T12:50:51
2020-12-09T18:05:16
2020-12-09T18:05:16
{ "login": "tanmoyio", "id": 33005287, "type": "User" }
[]
true
[]
759,418,122
1,300
added dutch_social
WIP, as some tests did not pass. 👎🏼
closed
https://github.com/huggingface/datasets/pull/1300
2020-12-08T12:47:50
2020-12-08T16:09:05
2020-12-08T16:09:05
{ "login": "skyprince999", "id": 9033954, "type": "User" }
[]
true
[]
759,414,566
1,299
can't load "german_legal_entity_recognition" dataset
FileNotFoundError: Couldn't find file locally at german_legal_entity_recognition/german_legal_entity_recognition.py, or remotely at https://raw.githubusercontent.com/huggingface/datasets/1.1.3/datasets/german_legal_entity_recognition/german_legal_entity_recognition.py or https://s3.amazonaws.com/datasets.huggingface.co/datasets/datasets/german_legal_entity_recognition/german_legal_entity_recognition.py
closed
https://github.com/huggingface/datasets/issues/1299
2020-12-08T12:42:01
2020-12-16T16:03:13
2020-12-16T16:03:13
{ "login": "nataly-obr", "id": 59837137, "type": "User" }
[]
false
[]
759,412,451
1,298
Add OPUS Ted Talks 2013
closed
https://github.com/huggingface/datasets/pull/1298
2020-12-08T12:38:38
2020-12-16T16:57:50
2020-12-16T16:57:49
{ "login": "abhishekkrthakur", "id": 1183441, "type": "User" }
[]
true
[]
759,404,103
1,297
OPUS Ted Talks 2013
closed
https://github.com/huggingface/datasets/pull/1297
2020-12-08T12:25:39
2023-09-24T09:51:49
2020-12-08T12:35:50
{ "login": "abhishekkrthakur", "id": 1183441, "type": "User" }
[]
true
[]
759,375,292
1,296
The Snips Built In Intents 2016 dataset.
This PR proposes to add the Snips.ai built in intents dataset. The first configuration added is for the intent labels only, but the dataset includes entity slots that may in future be added as alternate configurations.
closed
https://github.com/huggingface/datasets/pull/1296
2020-12-08T11:40:10
2020-12-08T15:27:52
2020-12-08T15:27:52
{ "login": "bduvenhage", "id": 8405335, "type": "User" }
[]
true
[]
759,375,251
1,295
add hrenwac_para
closed
https://github.com/huggingface/datasets/pull/1295
2020-12-08T11:40:06
2020-12-11T17:42:20
2020-12-11T17:42:20
{ "login": "IvanZidov", "id": 11391118, "type": "User" }
[]
true
[]
759,365,246
1,294
adding opus_euconst
Adding EUconst, a parallel corpus collected from the European Constitution. 21 languages, 210 bitexts
closed
https://github.com/huggingface/datasets/pull/1294
2020-12-08T11:24:16
2020-12-08T18:44:20
2020-12-08T18:41:23
{ "login": "patil-suraj", "id": 27137566, "type": "User" }
[]
true
[]
759,360,113
1,293
add hrenwac_para
closed
https://github.com/huggingface/datasets/pull/1293
2020-12-08T11:16:41
2020-12-08T11:34:47
2020-12-08T11:34:38
{ "login": "ivan-zidov", "id": 51969305, "type": "User" }
[]
true
[]
759,354,627
1,292
arXiv dataset added
closed
https://github.com/huggingface/datasets/pull/1292
2020-12-08T11:08:28
2020-12-08T14:02:13
2020-12-08T14:02:13
{ "login": "tanmoyio", "id": 33005287, "type": "User" }
[]
true
[]
759,352,810
1,291
adding pubmed_qa dataset
PubMed QA dataset: PQA-L(abeled) 1k, PQA-U(nlabeled) 61.2k, PQA-A(rtificially labeled) 211.3k
closed
https://github.com/huggingface/datasets/pull/1291
2020-12-08T11:05:44
2020-12-09T08:54:50
2020-12-09T08:54:50
{ "login": "tuner007", "id": 46425391, "type": "User" }
[]
true
[]
759,339,989
1,290
imdb dataset cannot be downloaded
hi please find error below getting imdb train spli: thanks ` datasets.load_dataset>>> datasets.load_dataset("imdb", split="train")` errors ``` cahce dir /idiap/temp/rkarimi/cache_home_1/datasets cahce dir /idiap/temp/rkarimi/cache_home_1/datasets Downloading and preparing dataset imdb/plain_text (download: 80.23 MiB, generated: 127.06 MiB, post-processed: Unknown size, total: 207.28 MiB) to /idiap/temp/rkarimi/cache_home_1/datasets/imdb/plain_text/1.0.0/90099cb476936b753383ba2ae6ab2eae419b2e87f71cd5189cb9c8e5814d12a3... cahce dir /idiap/temp/rkarimi/cache_home_1/datasets cahce dir /idiap/temp/rkarimi/cache_home_1/datasets/downloads Traceback (most recent call last): File "<stdin>", line 1, in <module> File "/idiap/user/rkarimi/libs/anaconda3/envs/internship/lib/python3.7/site-packages/datasets/load.py", line 611, in load_dataset ignore_verifications=ignore_verifications, File "/idiap/user/rkarimi/libs/anaconda3/envs/internship/lib/python3.7/site-packages/datasets/builder.py", line 476, in download_and_prepare dl_manager=dl_manager, verify_infos=verify_infos, **download_and_prepare_kwargs File "/idiap/user/rkarimi/libs/anaconda3/envs/internship/lib/python3.7/site-packages/datasets/builder.py", line 558, in _download_and_prepare verify_splits(self.info.splits, split_dict) File "/idiap/user/rkarimi/libs/anaconda3/envs/internship/lib/python3.7/site-packages/datasets/utils/info_utils.py", line 73, in verify_splits raise NonMatchingSplitsSizesError(str(bad_splits)) datasets.utils.info_utils.NonMatchingSplitsSizesError: [{'expected': SplitInfo(name='unsupervised', num_bytes=67125548, num_examples=50000, dataset_name='imdb'), 'recorded': SplitInfo(name='unsupervised', num_bytes=7486451, num_examples=5628, dataset_name='imdb')}] ```
closed
https://github.com/huggingface/datasets/issues/1290
2020-12-08T10:47:36
2020-12-24T17:38:09
2020-12-24T17:38:09
{ "login": "rabeehk", "id": 6278280, "type": "User" }
[]
false
[]
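Editorial aside on the error above: a NonMatchingSplitsSizesError typically means a truncated download or a stale cache being verified against recorded split sizes. A sketch of two common workarounds using the datasets 1.x parameters visible in the traceback; these are assumptions about what may help, not the confirmed resolution of the issue:

```python
# Sketch of common workarounds for NonMatchingSplitsSizesError (datasets 1.x era).
from datasets import load_dataset

# 1) Force a fresh download, discarding a possibly corrupted cached archive.
train = load_dataset("imdb", split="train", download_mode="force_redownload")

# 2) Or skip split-size verification entirely (use with care).
train = load_dataset("imdb", split="train", ignore_verifications=True)

print(train.num_rows)
```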
759,333,684
1,289
Jigsaw toxicity classification dataset added
The dataset requires manually downloading data from Kaggle.
closed
https://github.com/huggingface/datasets/pull/1289
2020-12-08T10:38:51
2020-12-08T15:17:48
2020-12-08T15:17:48
{ "login": "taihim", "id": 13764071, "type": "User" }
[]
true
[]
759,309,457
1,288
Add CodeSearchNet corpus dataset
This PR adds the CodeSearchNet corpus proxy dataset for semantic code search: https://github.com/github/CodeSearchNet I have had a few issues, mentioned below. Would appreciate some help on how to solve them. ## Issues generating dataset card Is there something wrong with my declaration of the dataset features ? ``` features=datasets.Features( { "repository_name": datasets.Value("string"), "func_path_in_repository": datasets.Value("string"), "func_name": datasets.Value("string"), "whole_func_string": datasets.Value("string"), "language": datasets.Value("string"), "func_code_string": datasets.Value("string"), "func_code_tokens": datasets.Sequence(datasets.Value("string")), "func_documentation_string": datasets.Value("string"), "func_documentation_tokens": datasets.Sequence(datasets.Value("string")), "split_name": datasets.Value("string"), "func_code_url": datasets.Value("string"), # TODO - add licensing info in the examples } ), ``` When running the streamlite app for tagging the dataset on my machine, I get the following error : ![image](https://user-images.githubusercontent.com/33657802/101469132-9ed12c80-3944-11eb-94ff-2d9c1d0ea080.png) ## Issues with dummy data Due to the unusual structure of the data, I have been unable to generate dummy data automatically. I tried to generate it manually, but pytests fail when using the manually-generated dummy data ! Pytests work fine when using the real data. ``` ============================================================================================== test session starts ============================================================================================== platform linux -- Python 3.7.9, pytest-6.1.2, py-1.9.0, pluggy-0.13.1 plugins: xdist-2.1.0, forked-1.3.0 collected 1 item tests/test_dataset_common.py F [100%] =================================================================================================== FAILURES ==================================================================================================== ________________________________________________________________________ LocalDatasetTest.test_load_dataset_all_configs_code_search_net _________________________________________________________________________ self = <tests.test_dataset_common.LocalDatasetTest testMethod=test_load_dataset_all_configs_code_search_net>, dataset_name = 'code_search_net' @slow def test_load_dataset_all_configs(self, dataset_name): configs = self.dataset_tester.load_all_configs(dataset_name, is_local=True) > self.dataset_tester.check_load_dataset(dataset_name, configs, is_local=True) tests/test_dataset_common.py:237: _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ tests/test_dataset_common.py:198: in check_load_dataset self.parent.assertTrue(len(dataset[split]) > 0) E AssertionError: False is not true --------------------------------------------------------------------------------------------- Captured stdout call ---------------------------------------------------------------------------------------------- Downloading and preparing dataset code_search_net/all (download: 1.00 MiB, generated: 1.00 MiB, post-processed: Unknown size, total: 2.00 MiB) to /tmp/tmppx78sj24/code_search_net/all/1.0.0... Dataset code_search_net downloaded and prepared to /tmp/tmppx78sj24/code_search_net/all/1.0.0. Subsequent calls will reuse this data. 
--------------------------------------------------------------------------------------------- Captured stderr call ---------------------------------------------------------------------------------------------- ... (irrelevant info - Deprecation warnings) ============================================================================================ short test summary info ============================================================================================ FAILED tests/test_dataset_common.py::LocalDatasetTest::test_load_dataset_all_configs_code_search_net - AssertionError: False is not true ========================================================================================= 1 failed, 4 warnings in 3.00s ======================================================================================== ``` ## Note : Data structure in S3 The data is stored on S3, and organized by programming languages. It is stored in the following repository structure: ``` . ├── <language_name> # e.g. python │   └── final │   └── jsonl │   ├── test │   │   └── <language_name>_test_0.jsonl.gz │   ├── train │   │   ├── <language_name>_train_0.jsonl.gz │   │   ├── <language_name>_train_1.jsonl.gz │   │   ├── ... │   │   └── <language_name>_train_n.jsonl.gz │   └── valid │   └── <language_name>_valid_0.jsonl.gz ├── <language_name>_dedupe_definitions_v2.pkl └── <language_name>_licenses.pkl ```
closed
https://github.com/huggingface/datasets/pull/1288
2020-12-08T10:07:50
2020-12-09T17:05:28
2020-12-09T17:05:28
{ "login": "SBrandeis", "id": 33657802, "type": "User" }
[]
true
[]
759,300,992
1,287
'iwslt2017-ro-nl', cannot be downloaded
Hi I am trying `>>> datasets.load_dataset("iwslt2017", 'iwslt2017-ro-nl', split="train")` getting this error thank you for your help ``` cahce dir /idiap/temp/rkarimi/cache_home_1/datasets cahce dir /idiap/temp/rkarimi/cache_home_1/datasets Downloading and preparing dataset iwsl_t217/iwslt2017-ro-nl (download: 314.07 MiB, generated: 39.92 MiB, post-processed: Unknown size, total: 354.00 MiB) to /idiap/temp/rkarimi/cache_home_1/datasets/iwsl_t217/iwslt2017-ro-nl/1.0.0/cca6935a0851a8ceac1202a62c958738bdfa23c57a51bc52ac1c5ebd2aa172cd... cahce dir /idiap/temp/rkarimi/cache_home_1/datasets cahce dir /idiap/temp/rkarimi/cache_home_1/datasets/downloads Traceback (most recent call last): File "<stdin>", line 1, in <module> File "/idiap/user/rkarimi/libs/anaconda3/envs/internship/lib/python3.7/site-packages/datasets/load.py", line 611, in load_dataset ignore_verifications=ignore_verifications, File "/idiap/user/rkarimi/libs/anaconda3/envs/internship/lib/python3.7/site-packages/datasets/builder.py", line 476, in download_and_prepare dl_manager=dl_manager, verify_infos=verify_infos, **download_and_prepare_kwargs File "/idiap/user/rkarimi/libs/anaconda3/envs/internship/lib/python3.7/site-packages/datasets/builder.py", line 531, in _download_and_prepare split_generators = self._split_generators(dl_manager, **split_generators_kwargs) File " /idiap/home/rkarimi/.cache/huggingface/modules/datasets_modules/datasets/iwslt2017/cca6935a0851a8ceac1202a62c958738bdfa23c57a51bc52ac1c5ebd2aa172cd/iwslt2017.py", line 118, in _split_generators dl_dir = dl_manager.download_and_extract(MULTI_URL) File "/idiap/user/rkarimi/libs/anaconda3/envs/internship/lib/python3.7/site-packages/datasets/utils/download_manager.py", line 254, in download_and_extract return self.extract(self.download(url_or_urls)) File "/idiap/user/rkarimi/libs/anaconda3/envs/internship/lib/python3.7/site-packages/datasets/utils/download_manager.py", line 179, in download num_proc=download_config.num_proc, File "/idiap/user/rkarimi/libs/anaconda3/envs/internship/lib/python3.7/site-packages/datasets/utils/py_utils.py", line 216, in map_nested return function(data_struct) File "/idiap/user/rkarimi/libs/anaconda3/envs/internship/lib/python3.7/site-packages/datasets/utils/file_utils.py", line 308, in cached_path use_etag=download_config.use_etag, File "/idiap/user/rkarimi/libs/anaconda3/envs/internship/lib/python3.7/site-packages/datasets/utils/file_utils.py", line 477, in get_from_cache raise ConnectionError("Couldn't reach {}".format(url)) ConnectionError: Couldn't reach https://wit3.fbk.eu/archive/2017-01-trnmted//texts/DeEnItNlRo/DeEnItNlRo/DeEnItNlRo-DeEnItNlRo.tgz ```
closed
https://github.com/huggingface/datasets/issues/1287
2020-12-08T09:56:55
2022-06-13T10:41:33
2022-06-13T10:41:33
{ "login": "rabeehk", "id": 6278280, "type": "User" }
[ { "name": "dataset bug", "color": "2edb81" } ]
false
[]
759,291,509
1,286
[libprotobuf FATAL /sentencepiece/src/../third_party/protobuf-lite/google/protobuf/repeated_field.h:1505] CHECK failed: (index) >= (0): terminate called after throwing an instance of 'google::protobuf::FatalException' what(): CHECK failed: (index) >= (0): Aborted
Hi I am getting this error when evaluating on wmt16-ro-en using finetune_trainer.py of huggingface repo. thank for your help {'epoch': 20.0} 100%|███████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 20/20 [00:16<00:00, 1.22it/s] 12/08/2020 10:41:19 - INFO - seq2seq.trainers.trainer - Saving model checkpoint to outputs/experiment/joint/finetune/lr-2e-5 12/08/2020 10:41:24 - INFO - __main__ - {'wmt16-en-ro': Dataset(features: {'src_texts': Value(dtype='string', id=None), 'task': Value(dtype='string', id=None), 'tgt_texts': Value(dtype='string', id=None)}, num_rows: 1998), 'qnli': Dataset(features: {'src_texts': Value(dtype='string', id=None), 'task': Value(dtype='string', id=None), 'tgt_texts': Value(dtype='string', id=None)}, num_rows: 5462), 'scitail': Dataset(features: {'src_texts': Value(dtype='string', id=None), 'task': Value(dtype='string', id=None), 'tgt_texts': Value(dtype='string', id=None)}, num_rows: 1303)} 12/08/2020 10:41:24 - INFO - __main__ - *** Evaluate *** 12/08/2020 10:41:24 - INFO - seq2seq.utils.utils - using task specific params for wmt16-en-ro: {'max_length': 300, 'num_beams': 4} 12/08/2020 10:41:24 - INFO - seq2seq.trainers.trainer - ***** Running Evaluation ***** 12/08/2020 10:41:24 - INFO - seq2seq.trainers.trainer - Num examples = 1998 12/08/2020 10:41:24 - INFO - seq2seq.trainers.trainer - Batch size = 64 100%|███████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 32/32 [00:37<00:00, 1.19s/it][libprotobuf FATAL /sentencepiece/src/../third_party/protobuf-lite/google/protobuf/repeated_field.h:1505] CHECK failed: (index) >= (0): terminate called after throwing an instance of 'google::protobuf::FatalException' what(): CHECK failed: (index) >= (0): Aborted
closed
https://github.com/huggingface/datasets/issues/1286
2020-12-08T09:44:15
2020-12-12T19:36:22
2020-12-12T16:22:36
{ "login": "rabeehk", "id": 6278280, "type": "User" }
[]
false
[]
759,278,758
1,285
boolq does not work
Hi I am getting this error when trying to load boolq, thanks for your help ts_boolq_default_0.1.0_2987db1f15deaa19500ae24de560eabeaf1f8ef51df88c0470beeec72943bf11.lock Traceback (most recent call last): File "finetune_t5_trainer.py", line 274, in <module> main() File "finetune_t5_trainer.py", line 147, in main for task in data_args.tasks] File "finetune_t5_trainer.py", line 147, in <listcomp> for task in data_args.tasks] File "/remote/idiap.svm/user.active/rkarimi/dev/ruse/seq2seq/tasks/tasks.py", line 58, in get_dataset dataset = self.load_dataset(split=split) File "/remote/idiap.svm/user.active/rkarimi/dev/ruse/seq2seq/tasks/tasks.py", line 54, in load_dataset return datasets.load_dataset(self.task.name, split=split) File "/idiap/user/rkarimi/libs/anaconda3/envs/internship/lib/python3.7/site-packages/datasets/load.py", line 611, in load_dataset ignore_verifications=ignore_verifications, File "/idiap/user/rkarimi/libs/anaconda3/envs/internship/lib/python3.7/site-packages/datasets/builder.py", line 476, in download_and_prepare dl_manager=dl_manager, verify_infos=verify_infos, **download_and_prepare_kwargs File "/idiap/user/rkarimi/libs/anaconda3/envs/internship/lib/python3.7/site-packages/datasets/builder.py", line 531, in _download_and_prepare split_generators = self._split_generators(dl_manager, **split_generators_kwargs) File " /idiap/home/rkarimi/.cache/huggingface/modules/datasets_modules/datasets/boolq/2987db1f15deaa19500ae24de560eabeaf1f8ef51df88c0470beeec72943bf11/boolq.py", line 74, in _split_generators downloaded_files = dl_manager.download_custom(urls_to_download, tf.io.gfile.copy) File "/idiap/user/rkarimi/libs/anaconda3/envs/internship/lib/python3.7/site-packages/datasets/utils/download_manager.py", line 149, in download_custom custom_download(url, path) File "/idiap/user/rkarimi/libs/anaconda3/envs/internship/lib/python3.7/site-packages/tensorflow/python/lib/io/file_io.py", line 516, in copy_v2 compat.path_to_bytes(src), compat.path_to_bytes(dst), overwrite) tensorflow.python.framework.errors_impl.AlreadyExistsError: file already exists
closed
https://github.com/huggingface/datasets/issues/1285
2020-12-08T09:28:47
2020-12-08T09:47:10
2020-12-08T09:47:10
{ "login": "rabeehk", "id": 6278280, "type": "User" }
[]
false
[]
759,269,920
1,284
Update coqa dataset url
`datasets.stanford.edu` is invalid.
closed
https://github.com/huggingface/datasets/pull/1284
2020-12-08T09:16:38
2020-12-08T18:19:09
2020-12-08T18:19:09
{ "login": "ojasaar", "id": 73708394, "type": "User" }
[]
true
[]
759,251,457
1,283
Add dutch book review dataset
- Name: Dutch Book Review Dataset (DBRD) - Description: The DBRD (pronounced dee-bird) dataset contains over 110k book reviews along with associated binary sentiment polarity labels and is intended as a benchmark for sentiment classification in Dutch. - Paper: https://arxiv.org/abs/1910.00896 - Data: https://github.com/benjaminvdb/DBRD - Motivation: A large (real-life) dataset of Dutch book reviews and sentiment polarity (positive/negative), based on the associated rating. Checks - [x] Create the dataset script /datasets/dbrd/dbrd.py using the template - [x] Fill the _DESCRIPTION and _CITATION variables - [x] Implement _info(), _split_generators() and _generate_examples() - [x] Make sure that the BUILDER_CONFIGS class attribute is filled with the different configurations of the dataset and that the BUILDER_CONFIG_CLASS is specified if there is a custom config class. - [x] Generate the metadata file dataset_infos.json for all configurations - [x] Generate the dummy data dummy_data.zip files to have the dataset script tested and that they don't weigh too much (<50KB) - [x] Add the dataset card README.md using the template : fill the tags and the various paragraphs - [x] Both tests for the real data and the dummy data pass.
closed
https://github.com/huggingface/datasets/pull/1283
2020-12-08T08:50:48
2020-12-09T20:21:58
2020-12-09T17:25:25
{ "login": "benjaminvdb", "id": 8875786, "type": "User" }
[]
true
[]
759,208,335
1,282
add thaiqa_squad
The example format is a little different from SQuAD since `thaiqa` always has one answer per question, so I added a check that converts answers to lists if they are not already one, to future-proof additional questions that might have multiple answers. `thaiqa_squad` is an open-domain, extractive question answering dataset (4,000 questions in `train` and 74 questions in `dev`) in [SQuAD](https://rajpurkar.github.io/SQuAD-explorer/) format, originally created by [NECTEC](https://www.nectec.or.th/en/) from Wikipedia articles and adapted to [SQuAD](https://rajpurkar.github.io/SQuAD-explorer/) format by [PyThaiNLP](https://github.com/PyThaiNLP/).
closed
https://github.com/huggingface/datasets/pull/1282
2020-12-08T08:14:38
2020-12-08T18:36:18
2020-12-08T18:36:18
{ "login": "cstorm125", "id": 15519308, "type": "User" }
[]
true
[]
759,203,317
1,281
adding hybrid_qa
Adding HybridQA: A Dataset of Multi-Hop Question Answering over Tabular and Textual Data https://github.com/wenhuchen/HybridQA
closed
https://github.com/huggingface/datasets/pull/1281
2020-12-08T08:10:19
2020-12-08T18:09:28
2020-12-08T18:07:00
{ "login": "patil-suraj", "id": 27137566, "type": "User" }
[]
true
[]
759,151,028
1,280
disaster response messages dataset
closed
https://github.com/huggingface/datasets/pull/1280
2020-12-08T07:27:16
2020-12-09T16:21:57
2020-12-09T16:21:57
{ "login": "darshan-gandhi", "id": 44197177, "type": "User" }
[]
true
[]
759,108,726
1,279
added para_pat
Dataset link : https://figshare.com/articles/ParaPat_The_Multi-Million_Sentences_Parallel_Corpus_of_Patents_Abstracts/12627632 Working on README.md currently
closed
https://github.com/huggingface/datasets/pull/1279
2020-12-08T06:28:47
2020-12-14T13:41:17
2020-12-14T13:41:17
{ "login": "bhavitvyamalik", "id": 19718818, "type": "User" }
[]
true
[]
758,988,465
1,278
Craigslist bargains
`craigslist_bargains` dataset from [here](https://worksheets.codalab.org/worksheets/0x453913e76b65495d8b9730d41c7e0a0c/)
closed
https://github.com/huggingface/datasets/pull/1278
2020-12-08T01:45:55
2020-12-09T00:46:15
2020-12-09T00:46:15
{ "login": "ZacharySBrown", "id": 7950786, "type": "User" }
[]
true
[]
758,965,936
1,276
add One Million Posts Corpus
- **Name:** One Million Posts Corpus - **Description:** The “One Million Posts” corpus is an annotated data set consisting of user comments posted to an Austrian newspaper website (in German language). - **Paper:** https://dl.acm.org/doi/10.1145/3077136.3080711 - **Data:** https://github.com/OFAI/million-post-corpus - **Motivation:** Big German (real-life) dataset containing different annotations around forum moderation with expert annotations. ### Checkbox - [X] Create the dataset script `/datasets/my_dataset/my_dataset.py` using the template - [X] Fill the `_DESCRIPTION` and `_CITATION` variables - [X] Implement `_infos()`, `_split_generators()` and `_generate_examples()` - [X] Make sure that the `BUILDER_CONFIGS` class attribute is filled with the different configurations of the dataset and that the `BUILDER_CONFIG_CLASS` is specified if there is a custom config class. - [X] Generate the metadata file `dataset_infos.json` for all configurations - [X] Generate the dummy data `dummy_data.zip` files to have the dataset script tested and that they don't weigh too much (<50KB) - [X] Add the dataset card `README.md` using the template : fill the tags and the various paragraphs - [X] Both tests for the real data and the dummy data pass.
closed
https://github.com/huggingface/datasets/pull/1276
2020-12-08T00:50:08
2020-12-11T18:28:18
2020-12-11T18:28:18
{ "login": "aseifert", "id": 4944799, "type": "User" }
[]
true
[]
758,958,066
1,275
Yoruba GV NER added
I just added Yoruba GV NER dataset from this paper https://www.aclweb.org/anthology/2020.lrec-1.335/
closed
https://github.com/huggingface/datasets/pull/1275
2020-12-08T00:31:38
2020-12-08T23:25:28
2020-12-08T23:25:28
{ "login": "dadelani", "id": 23586676, "type": "User" }
[]
true
[]
758,943,174
1,274
oclar-dataset
The Opinion Corpus for Lebanese Arabic Reviews (OCLAR) can be used for Arabic sentiment classification on reviews of hotels, restaurants, shops, and others: [homepage](http://archive.ics.uci.edu/ml/datasets/Opinion+Corpus+for+Lebanese+Arabic+Reviews+%28OCLAR%29#)
closed
https://github.com/huggingface/datasets/pull/1274
2020-12-07T23:56:45
2020-12-09T15:36:08
2020-12-09T15:36:08
{ "login": "alaameloh", "id": 26907161, "type": "User" }
[]
true
[]
758,935,768
1,273
Created wiki_movies dataset.
First PR (ever). Hopefully this movies dataset is useful to others!
closed
https://github.com/huggingface/datasets/pull/1273
2020-12-07T23:38:54
2020-12-14T13:56:49
2020-12-14T13:56:49
{ "login": "aclifton314", "id": 53267795, "type": "User" }
[]
true
[]
758,924,960
1,272
Psc
closed
https://github.com/huggingface/datasets/pull/1272
2020-12-07T23:19:36
2020-12-07T23:48:05
2020-12-07T23:47:48
{ "login": "abecadel", "id": 1654113, "type": "User" }
[]
true
[]
758,924,203
1,271
SMS Spam Dataset
Hi :) I added this [SMS Spam Dataset](http://archive.ics.uci.edu/ml/datasets/SMS+Spam+Collection)
closed
https://github.com/huggingface/datasets/pull/1271
2020-12-07T23:18:06
2020-12-08T17:42:19
2020-12-08T17:42:19
{ "login": "czabo", "id": 75574105, "type": "User" }
[]
true
[]
758,917,216
1,270
add DFKI SmartData Corpus
- **Name:** DFKI SmartData Corpus - **Description:** DFKI SmartData Corpus is a dataset of 2598 German-language documents which has been annotated with fine-grained geo-entities, such as streets, stops and routes, as well as standard named entity types. - **Paper:** https://www.dfki.de/fileadmin/user_upload/import/9427_lrec_smartdata_corpus.pdf - **Data:** https://github.com/DFKI-NLP/smartdata-corpus - **Motivation:** Contains fine-grained NER labels for German. ### Checkbox - [X] Create the dataset script `/datasets/my_dataset/my_dataset.py` using the template - [X] Fill the `_DESCRIPTION` and `_CITATION` variables - [X] Implement `_infos()`, `_split_generators()` and `_generate_examples()` - [X] Make sure that the `BUILDER_CONFIGS` class attribute is filled with the different configurations of the dataset and that the `BUILDER_CONFIG_CLASS` is specified if there is a custom config class. - [X] Generate the metadata file `dataset_infos.json` for all configurations - [X] Generate the dummy data `dummy_data.zip` files to have the dataset script tested and that they don't weigh too much (<50KB) - [X] Add the dataset card `README.md` using the template : fill the tags and the various paragraphs - [X] Both tests for the real data and the dummy data pass.
closed
https://github.com/huggingface/datasets/pull/1270
2020-12-07T23:03:48
2020-12-08T17:41:23
2020-12-08T17:41:23
{ "login": "aseifert", "id": 4944799, "type": "User" }
[]
true
[]
758,886,174
1,269
Adding OneStopEnglish corpus dataset
This PR adds OneStopEnglish Corpus containing texts classified into reading levels (elementary, intermediate, advance) for automatic readability assessment and text simplification. Link to the paper: https://www.aclweb.org/anthology/W18-0535.pdf
closed
https://github.com/huggingface/datasets/pull/1269
2020-12-07T22:05:11
2020-12-09T18:43:38
2020-12-09T15:33:53
{ "login": "purvimisal", "id": 22298787, "type": "User" }
[]
true
[]
758,871,252
1,268
new pr for Turkish NER
closed
https://github.com/huggingface/datasets/pull/1268
2020-12-07T21:40:26
2020-12-09T13:45:05
2020-12-09T13:45:05
{ "login": "merveenoyan", "id": 53175384, "type": "User" }
[]
true
[]
758,826,568
1,267
Has part
closed
https://github.com/huggingface/datasets/pull/1267
2020-12-07T20:32:03
2020-12-11T18:25:42
2020-12-11T18:25:42
{ "login": "jeromeku", "id": 2455711, "type": "User" }
[]
true
[]
758,704,178
1,266
removing unzipped hansards dummy data
which were added by mistake
closed
https://github.com/huggingface/datasets/pull/1266
2020-12-07T17:31:16
2020-12-07T17:32:29
2020-12-07T17:32:29
{ "login": "yjernite", "id": 10469459, "type": "User" }
[]
true
[]
758,687,223
1,265
Add CovidQA dataset
This PR adds CovidQA, a question answering dataset specifically designed for COVID-19, built by hand from knowledge gathered from Kaggle’s COVID-19 Open Research Dataset Challenge. Link to the paper: https://arxiv.org/pdf/2004.11339.pdf Link to the homepage: https://covidqa.ai
closed
https://github.com/huggingface/datasets/pull/1265
2020-12-07T17:06:51
2020-12-08T17:02:26
2020-12-08T17:02:26
{ "login": "olinguyen", "id": 4341867, "type": "User" }
[]
true
[]
758,686,474
1,264
enriched webnlg dataset rebase
Rebase of #1206 !
closed
https://github.com/huggingface/datasets/pull/1264
2020-12-07T17:05:45
2020-12-09T17:00:29
2020-12-09T17:00:27
{ "login": "TevenLeScao", "id": 26709476, "type": "User" }
[]
true
[]
758,663,787
1,263
Added kannada news headlines classification dataset.
Manual download of a Kaggle dataset. Mostly followed the same process as ms_terms.
closed
https://github.com/huggingface/datasets/pull/1263
2020-12-07T16:35:37
2020-12-10T14:30:55
2020-12-09T18:01:31
{ "login": "vrindaprabhu", "id": 16264631, "type": "User" }
[]
true
[]
758,637,124
1,262
Adding msr_genomics_kbcomp dataset
closed
https://github.com/huggingface/datasets/pull/1262
2020-12-07T16:01:30
2020-12-08T18:08:55
2020-12-08T18:08:47
{ "login": "manandey", "id": 6687858, "type": "User" }
[]
true
[]
758,626,112
1,261
Add Google Sentence Compression dataset
For more information: https://www.aclweb.org/anthology/D13-1155.pdf
closed
https://github.com/huggingface/datasets/pull/1261
2020-12-07T15:47:43
2020-12-08T17:01:59
2020-12-08T17:01:59
{ "login": "mattbui", "id": 46804938, "type": "User" }
[]
true
[]
758,601,828
1,260
Added NewsPH Raw Dataset
Added the raw version of the NewsPH dataset, which was used to automatically generate the NewsPH-NLI corpus. Dataset of news articles in Filipino from mainstream Philippine news sites on the internet. Can be used as a language modeling dataset or to reproduce the NewsPH-NLI dataset. Paper: https://arxiv.org/abs/2010.11574 Repo: https://github.com/jcblaisecruz02/Filipino-Text-Benchmarks
closed
https://github.com/huggingface/datasets/pull/1260
2020-12-07T15:17:53
2020-12-08T16:27:15
2020-12-08T16:27:15
{ "login": "jcblaisecruz02", "id": 24757547, "type": "User" }
[]
true
[]
758,565,320
1,259
Add KorQPair dataset
This PR adds a [Korean paired question dataset](https://github.com/songys/Question_pair) containing labels indicating whether two questions in a given pair are semantically identical. This dataset was used to evaluate the performance of [KoGPT2](https://github.com/SKT-AI/KoGPT2#subtask-evaluations) on a phrase detection downstream task.
closed
https://github.com/huggingface/datasets/pull/1259
2020-12-07T14:33:57
2021-12-29T00:49:40
2020-12-08T15:11:41
{ "login": "jaketae", "id": 25360440, "type": "User" }
[]
true
[]
758,557,169
1,258
arXiv dataset added
closed
https://github.com/huggingface/datasets/pull/1258
2020-12-07T14:23:33
2020-12-08T14:07:15
2020-12-08T14:07:15
{ "login": "tanmoyio", "id": 33005287, "type": "User" }
[]
true
[]
758,550,490
1,257
Add Swahili news classification dataset
Add Swahili news classification dataset
closed
https://github.com/huggingface/datasets/pull/1257
2020-12-07T14:15:13
2020-12-08T14:44:19
2020-12-08T14:44:19
{ "login": "yvonnegitau", "id": 7923902, "type": "User" }
[]
true
[]
758,531,980
1,256
adding LiMiT dataset
Adding LiMiT: The Literal Motion in Text Dataset https://github.com/ilmgut/limit_dataset
closed
https://github.com/huggingface/datasets/pull/1256
2020-12-07T14:00:41
2020-12-08T14:58:28
2020-12-08T14:42:51
{ "login": "patil-suraj", "id": 27137566, "type": "User" }
[]
true
[]
758,530,243
1,255
[doc] nlp/viewer ➡️datasets/viewer
cc @srush
closed
https://github.com/huggingface/datasets/pull/1255
2020-12-07T13:58:41
2020-12-08T17:17:54
2020-12-08T17:17:53
{ "login": "julien-c", "id": 326577, "type": "User" }
[]
true
[]
758,518,774
1,254
Added WikiText-TL-39
This PR adds the WikiText-TL-39 Filipino Language Modeling dataset. Paper: https://arxiv.org/abs/1907.00409 Repo: https://github.com/jcblaisecruz02/Filipino-Text-Benchmarks
closed
https://github.com/huggingface/datasets/pull/1254
2020-12-07T13:43:48
2020-12-08T16:00:58
2020-12-08T16:00:58
{ "login": "jcblaisecruz02", "id": 24757547, "type": "User" }
[]
true
[]
758,517,391
1,253
add thainer
ThaiNER (v1.3) is a 6,456-sentence named entity recognition dataset created from expanding the 2,258-sentence [unnamed dataset](http://pioneer.chula.ac.th/~awirote/Data-Nutcha.zip) by [Tirasaroj and Aroonmanakun (2012)](http://pioneer.chula.ac.th/~awirote/publications/). It is used to train NER taggers in [PyThaiNLP](https://github.com/PyThaiNLP/pythainlp). The NER tags are annotated by [Tirasaroj and Aroonmanakun (2012)]((http://pioneer.chula.ac.th/~awirote/publications/)) for 2,258 sentences and the rest by [@wannaphong](https://github.com/wannaphong/). The POS tags are done by [PyThaiNLP](https://github.com/PyThaiNLP/pythainlp)'s `perceptron` engine trained on `orchid_ud`. [@wannaphong](https://github.com/wannaphong/) is now the only maintainer of this dataset.
closed
https://github.com/huggingface/datasets/pull/1253
2020-12-07T13:41:54
2020-12-08T14:44:49
2020-12-08T14:44:49
{ "login": "cstorm125", "id": 15519308, "type": "User" }
[]
true
[]
758,511,388
1,252
Add Naver sentiment movie corpus
Supersedes #1168 > This PR adds the [Naver sentiment movie corpus](https://github.com/e9t/nsmc), a dataset containing Korean movie reviews from Naver, the most commonly used search engine in Korea. This dataset is often used to benchmark models on Korean NLP tasks, as seen in [this paper](https://www.aclweb.org/anthology/2020.lrec-1.199.pdf).
closed
https://github.com/huggingface/datasets/pull/1252
2020-12-07T13:33:45
2020-12-08T14:32:33
2020-12-08T14:21:37
{ "login": "jaketae", "id": 25360440, "type": "User" }
[]
true
[]
758,503,689
1,251
Add Wiki Atomic Edits Dataset (43M edits)
closed
https://github.com/huggingface/datasets/pull/1251
2020-12-07T13:23:08
2020-12-14T10:05:01
2020-12-14T10:05:00
{ "login": "abhishekkrthakur", "id": 1183441, "type": "User" }
[]
true
[]
758,491,704
1,250
added Nergrit dataset
Nergrit Corpus is a dataset collection for Indonesian Named Entity Recognition, Statement Extraction, and Sentiment Analysis. This PR is only for the Named Entity Recognition.
closed
https://github.com/huggingface/datasets/pull/1250
2020-12-07T13:06:12
2020-12-08T14:33:29
2020-12-08T14:33:29
{ "login": "cahya-wirawan", "id": 7669893, "type": "User" }
[]
true
[]
758,472,863
1,249
Add doc2dial dataset
### Doc2dial: A Goal-Oriented Document-Grounded Dialogue Dataset v0.9 Once complete this will add the [Doc2dial](https://doc2dial.github.io/data.html) dataset from the generic data sets list.
closed
https://github.com/huggingface/datasets/pull/1249
2020-12-07T12:39:09
2020-12-14T16:17:14
2020-12-14T16:17:14
{ "login": "KMFODA", "id": 35491698, "type": "User" }
[]
true
[]
758,454,438
1,248
Update step-by-step guide about the dataset cards
Small update to the step-by-step guide about dataset cards, indicating that the card can be created and completed while exploring the dataset.
closed
https://github.com/huggingface/datasets/pull/1248
2020-12-07T12:12:12
2020-12-07T13:19:24
2020-12-07T13:19:23
{ "login": "thomwolf", "id": 7353373, "type": "User" }
[]
true
[]
758,431,640
1,247
Adding indonlu dataset
IndoNLU benchmark is a collection of resources for training, evaluating, and analyzing natural language understanding systems for Bahasa Indonesia. It contains 12 datasets.
closed
https://github.com/huggingface/datasets/pull/1247
2020-12-07T11:38:45
2020-12-08T14:11:50
2020-12-08T14:11:50
{ "login": "yasirabd", "id": 6518504, "type": "User" }
[]
true
[]
758,418,652
1,246
arXiv dataset added
closed
https://github.com/huggingface/datasets/pull/1246
2020-12-07T11:20:23
2020-12-07T14:22:58
2020-12-07T14:22:58
{ "login": "tanmoyio", "id": 33005287, "type": "User" }
[]
true
[]
758,411,233
1,245
Add Google Turkish Treebank Dataset
null
closed
https://github.com/huggingface/datasets/pull/1245
2020-12-07T11:09:17
2023-09-24T09:40:49
2022-10-03T09:39:32
{ "login": "abhishekkrthakur", "id": 1183441, "type": "User" }
[ { "name": "dataset contribution", "color": "0e8a16" } ]
true
[]
758,384,417
1,244
arxiv dataset added
closed
https://github.com/huggingface/datasets/pull/1244
2020-12-07T10:32:54
2020-12-07T11:04:23
2020-12-07T11:04:23
{ "login": "tanmoyio", "id": 33005287, "type": "User" }
[]
true
[]
758,378,904
1,243
Add Google Noun Verb Dataset
null
closed
https://github.com/huggingface/datasets/pull/1243
2020-12-07T10:26:05
2023-09-24T09:40:54
2022-10-03T09:39:37
{ "login": "abhishekkrthakur", "id": 1183441, "type": "User" }
[ { "name": "dataset contribution", "color": "0e8a16" } ]
true
[]
758,370,579
1,242
adding bprec
closed
https://github.com/huggingface/datasets/pull/1242
2020-12-07T10:15:49
2020-12-08T14:33:49
2020-12-08T14:33:48
{ "login": "kldarek", "id": 15803781, "type": "User" }
[]
true
[]
758,360,643
1,241
Opus elhuyar dataset for MT task having languages pair in Spanish to Basque
Opus elhuyar dataset for the MT task with the Spanish-to-Basque language pair. More info: http://opus.nlpl.eu/Elhuyar.php
closed
https://github.com/huggingface/datasets/pull/1241
2020-12-07T10:03:34
2020-12-19T14:55:12
2020-12-09T15:12:48
{ "login": "spatil6", "id": 6419011, "type": "User" }
[]
true
[]
758,355,523
1,240
Multi Domain Sentiment Analysis Dataset (MDSA)
null
closed
https://github.com/huggingface/datasets/pull/1240
2020-12-07T09:57:15
2023-09-24T09:40:59
2022-10-03T09:39:43
{ "login": "abhishekkrthakur", "id": 1183441, "type": "User" }
[ { "name": "dataset contribution", "color": "0e8a16" } ]
true
[]
758,339,593
1,239
add yelp_review_full dataset
This corresponds to the Yelp-5 requested in https://github.com/huggingface/datasets/issues/353
closed
https://github.com/huggingface/datasets/pull/1239
2020-12-07T09:35:36
2020-12-08T15:43:24
2020-12-08T15:00:50
{ "login": "hfawaz", "id": 29229602, "type": "User" }
[]
true
[]
758,321,688
1,238
adding poem_sentiment
Adding poem_sentiment dataset. https://github.com/google-research-datasets/poem-sentiment
closed
https://github.com/huggingface/datasets/pull/1238
2020-12-07T09:11:52
2020-12-09T16:36:10
2020-12-09T16:02:45
{ "login": "patil-suraj", "id": 27137566, "type": "User" }
[]
true
[]
758,318,353
1,237
Add AmbigQA dataset
# AmbigQA: Answering Ambiguous Open-domain Questions Dataset Adding the [AmbigQA](https://nlp.cs.washington.edu/ambigqa/) dataset as part of the sprint 🎉 (from Open dataset list for Dataset sprint) Added both the light and full versions (as seen on the dataset homepage) The json format changes based on the value of one 'type' field, so I set the unavailable field to an empty list. This is explained in the README -> Data Fields ```py train_light_dataset = load_dataset('./datasets/ambig_qa',"light",split="train") val_light_dataset = load_dataset('./datasets/ambig_qa',"light",split="validation") train_full_dataset = load_dataset('./datasets/ambig_qa',"full",split="train") val_full_dataset = load_dataset('./datasets/ambig_qa',"full",split="validation") for example in train_light_dataset: for i,t in enumerate(example['annotations']['type']): if t =='singleAnswer': # use the example['annotations']['answer'][i] # example['annotations']['qaPairs'][i] - > is [] print(example['annotations']['answer'][i]) else: # use the example['annotations']['qaPairs'][i] # example['annotations']['answer'][i] - > is [] print(example['annotations']['qaPairs'][i]) ``` - [x] All tests passed - [x] Added dummy data - [x] Added data card (as much as I could)
closed
https://github.com/huggingface/datasets/pull/1237
2020-12-07T09:07:19
2020-12-08T13:38:52
2020-12-08T13:38:52
{ "login": "cceyda", "id": 15624271, "type": "User" }
[]
true
[]
758,263,012
1,236
Opus finlex dataset of language pair Finnish and Swedish
Added the Opus_finlex dataset for the Finnish-Swedish language pair. More info: http://opus.nlpl.eu/Finlex.php
closed
https://github.com/huggingface/datasets/pull/1236
2020-12-07T07:53:57
2020-12-08T13:30:33
2020-12-08T13:30:33
{ "login": "spatil6", "id": 6419011, "type": "User" }
[]
true
[]
758,234,511
1,235
Wino bias
The PR will fail CircleCI tests because the data has to be downloaded manually. This is a fresh PR because the previous one had a messed-up history.
closed
https://github.com/huggingface/datasets/pull/1235
2020-12-07T07:12:42
2020-12-10T20:48:12
2020-12-10T20:48:01
{ "login": "akshayb7", "id": 29649801, "type": "User" }
[]
true
[]