Column schema (name · dtype · range/lengths):

  id               int64          599M – 3.26B
  number           int64          1 – 7.7k
  title            string         lengths 1 – 290
  body             string         lengths 0 – 228k
  state            string         2 values
  html_url         string         lengths 46 – 51
  created_at       timestamp[s]   2020-04-14 10:18:02 – 2025-07-23 08:04:53
  updated_at       timestamp[s]   2020-04-27 16:04:17 – 2025-07-23 18:53:44
  closed_at        timestamp[s]   2020-04-14 12:01:40 – 2025-07-23 16:44:42
  user             dict
  labels           list           lengths 0 – 4
  is_pull_request  bool           2 classes
  comments         list           lengths 0 – 0
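Reading the schema above, each record in the dump is one issue or pull request. A minimal sketch of a single row as a plain Python dict (field values copied from the first record below; timestamps shown as ISO strings purely for illustration):

```python
# One row of the dump as a plain dict, following the schema above.
# Values are illustrative, taken from the first record (PR #1535).
row = {
    "id": 764977542,
    "number": 1535,
    "title": "Adding Igbo monolingual dataset",
    "body": "This PR adds the Igbo Monolingual dataset. ...",
    "state": "closed",            # one of 2 values: "open" / "closed"
    "html_url": "https://github.com/huggingface/datasets/pull/1535",
    "created_at": "2020-12-13T05:16:37",  # timestamp[s] in the dump
    "updated_at": "2020-12-21T14:39:49",
    "closed_at": "2020-12-21T14:39:49",   # None for still-open items
    "user": {"login": "purvimisal", "id": 22298787, "type": "User"},
    "labels": [],                 # list of {"name", "color"} dicts, 0-4 long
    "is_pull_request": True,
    "comments": [],
}

# Sanity-check the row against the schema's field list.
expected_fields = {
    "id", "number", "title", "body", "state", "html_url",
    "created_at", "updated_at", "closed_at", "user", "labels",
    "is_pull_request", "comments",
}
assert set(row) == expected_fields
print(sorted(expected_fields))
```
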
764,977,542
1,535
Adding Igbo monolingual dataset
This PR adds the Igbo Monolingual dataset. Data: https://github.com/IgnatiusEzeani/IGBONLP/tree/master/ig_monoling Paper: https://arxiv.org/abs/2004.00648
closed
https://github.com/huggingface/datasets/pull/1535
2020-12-13T05:16:37
2020-12-21T14:39:49
2020-12-21T14:39:49
{ "login": "purvimisal", "id": 22298787, "type": "User" }
[]
true
[]
764,934,681
1,534
adding dataset for diplomacy detection
closed
https://github.com/huggingface/datasets/pull/1534
2020-12-13T04:38:43
2020-12-15T19:52:52
2020-12-15T19:52:25
{ "login": "MisbahKhan789", "id": 15351802, "type": "User" }
[]
true
[]
764,835,913
1,533
add id_panl_bppt, a parallel corpus for en-id
Parallel Text Corpora for English - Indonesian
closed
https://github.com/huggingface/datasets/pull/1533
2020-12-13T03:11:27
2020-12-21T10:40:36
2020-12-21T10:40:36
{ "login": "cahya-wirawan", "id": 7669893, "type": "User" }
[]
true
[]
764,772,184
1,532
adding hate-speech-and-offensive-language
closed
https://github.com/huggingface/datasets/pull/1532
2020-12-13T02:16:31
2020-12-17T18:36:54
2020-12-17T18:10:05
{ "login": "MisbahKhan789", "id": 15351802, "type": "User" }
[]
true
[]
764,752,882
1,531
adding hate-speech-and-offensive-language
closed
https://github.com/huggingface/datasets/pull/1531
2020-12-13T01:59:07
2020-12-13T02:17:02
2020-12-13T02:17:02
{ "login": "MisbahKhan789", "id": 15351802, "type": "User" }
[]
true
[]
764,749,507
1,530
add indonlu benchmark datasets
The IndoNLU benchmark is a collection of resources for training, evaluating, and analyzing natural language understanding systems for the Indonesian language. There are 12 datasets in IndoNLU. This is a new clean PR from [#1322](https://github.com/huggingface/datasets/pull/1322)
closed
https://github.com/huggingface/datasets/pull/1530
2020-12-13T01:56:09
2020-12-16T11:11:43
2020-12-16T11:11:43
{ "login": "yasirabd", "id": 6518504, "type": "User" }
[]
true
[]
764,748,410
1,529
Ro sent
Movies reviews dataset for Romanian language.
closed
https://github.com/huggingface/datasets/pull/1529
2020-12-13T01:55:02
2021-03-19T10:32:43
2021-03-19T10:32:42
{ "login": "iliemihai", "id": 2815308, "type": "User" }
[]
true
[]
764,724,035
1,528
initial commit for Common Crawl Domain Names
closed
https://github.com/huggingface/datasets/pull/1528
2020-12-13T01:32:49
2020-12-18T13:51:38
2020-12-18T10:22:32
{ "login": "Karthik-Bhaskar", "id": 13200370, "type": "User" }
[]
true
[]
764,638,504
1,527
Add : Conv AI 2 (Messed up original PR)
@lhoestq Sorry I messed up the previous 2 PR's -> https://github.com/huggingface/datasets/pull/1462 -> https://github.com/huggingface/datasets/pull/1383. So created a new one. Also, everything is fixed in this PR. Can you please review it ? Thanks in advance.
closed
https://github.com/huggingface/datasets/pull/1527
2020-12-13T00:21:14
2020-12-13T19:14:24
2020-12-13T19:14:24
{ "login": "rkc007", "id": 22396042, "type": "User" }
[]
true
[]
764,591,243
1,526
added Hebrew thisworld corpus
added corpus from https://thisworld.online/ , https://github.com/thisworld1/thisworld.online
closed
https://github.com/huggingface/datasets/pull/1526
2020-12-12T23:42:52
2020-12-18T10:47:30
2020-12-18T10:47:30
{ "login": "imvladikon", "id": 10088963, "type": "User" }
[]
true
[]
764,530,582
1,525
Adding a second branch for Atomic to fix git errors
Adding the Atomic common sense dataset. See https://homes.cs.washington.edu/~msap/atomic/
closed
https://github.com/huggingface/datasets/pull/1525
2020-12-12T22:54:50
2020-12-28T15:51:11
2020-12-28T15:51:11
{ "login": "huu4ontocord", "id": 8900094, "type": "User" }
[]
true
[]
764,521,672
1,524
ADD: swahili dataset for language modeling
Add a corpus for Swahili language modelling. All tests passed locally. README updated with all information available.
closed
https://github.com/huggingface/datasets/pull/1524
2020-12-12T22:47:18
2020-12-17T16:37:16
2020-12-17T16:37:16
{ "login": "akshayb7", "id": 29649801, "type": "User" }
[]
true
[]
764,359,524
1,523
Add eHealth Knowledge Discovery dataset
This Spanish dataset can be used to mine knowledge from unstructured health texts. In particular, for: - Entity recognition - Relation extraction
closed
https://github.com/huggingface/datasets/pull/1523
2020-12-12T20:44:18
2020-12-17T17:02:41
2020-12-17T16:48:56
{ "login": "mariagrandury", "id": 57645283, "type": "User" }
[]
true
[]
764,341,594
1,522
Add semeval 2020 task 11
Adding in propaganda detection task (task 11) from Sem Eval 2020
closed
https://github.com/huggingface/datasets/pull/1522
2020-12-12T20:32:14
2020-12-15T16:48:52
2020-12-15T16:48:52
{ "login": "ZacharySBrown", "id": 7950786, "type": "User" }
[]
true
[]
764,320,841
1,521
Atomic
This is the ATOMIC common sense dataset. More info can be found here: * README.md still to be created.
closed
https://github.com/huggingface/datasets/pull/1521
2020-12-12T20:18:08
2020-12-12T22:56:48
2020-12-12T22:56:48
{ "login": "huu4ontocord", "id": 8900094, "type": "User" }
[]
true
[]
764,140,938
1,520
ru_reviews dataset adding
RuReviews: An Automatically Annotated Sentiment Analysis Dataset for Product Reviews in Russian
closed
https://github.com/huggingface/datasets/pull/1520
2020-12-12T18:13:06
2022-10-03T09:38:42
2022-10-03T09:38:42
{ "login": "darshan-gandhi", "id": 44197177, "type": "User" }
[ { "name": "dataset contribution", "color": "0e8a16" } ]
true
[]
764,107,360
1,519
Initial commit for AQuaMuSe
There is an issue in generation of dummy data. Tests on real data have passed locally.
closed
https://github.com/huggingface/datasets/pull/1519
2020-12-12T17:46:16
2020-12-18T13:50:42
2020-12-17T17:03:30
{ "login": "Karthik-Bhaskar", "id": 13200370, "type": "User" }
[]
true
[]
764,045,722
1,518
Add twi text
Add Twi texts
closed
https://github.com/huggingface/datasets/pull/1518
2020-12-12T16:52:02
2020-12-13T18:53:37
2020-12-13T18:53:37
{ "login": "dadelani", "id": 23586676, "type": "User" }
[]
true
[]
764,045,214
1,517
Kd conv smangrul
closed
https://github.com/huggingface/datasets/pull/1517
2020-12-12T16:51:30
2020-12-16T14:56:14
2020-12-16T14:56:14
{ "login": "pacman100", "id": 13534540, "type": "User" }
[]
true
[]
764,032,327
1,516
adding wrbsc
closed
https://github.com/huggingface/datasets/pull/1516
2020-12-12T16:38:40
2020-12-18T09:41:33
2020-12-18T09:41:33
{ "login": "kldarek", "id": 15803781, "type": "User" }
[]
true
[]
764,022,753
1,515
Add yoruba text
Adding Yoruba text C3
closed
https://github.com/huggingface/datasets/pull/1515
2020-12-12T16:29:30
2020-12-13T18:37:58
2020-12-13T18:37:58
{ "login": "dadelani", "id": 23586676, "type": "User" }
[]
true
[]
764,017,148
1,514
how to get all the options of a property in datasets
Hi, could you tell me how I can get all unique options of a property of a dataset? For instance, in the case of boolq, if the user wants to know which unique labels it has, is there a way to access the unique labels without getting all the training data labels and then forming a set? Thanks
closed
https://github.com/huggingface/datasets/issues/1514
2020-12-12T16:24:08
2022-05-25T16:27:29
2022-05-25T16:27:29
{ "login": "rabeehk", "id": 6278280, "type": "User" }
[ { "name": "question", "color": "d876e3" } ]
false
[]
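For the question above, the `datasets` library exposes `Dataset.unique()` and, for `ClassLabel` columns, the names stored in the features metadata; the column name `label` below is a hypothetical stand-in, not taken from any specific dataset. A minimal sketch:

```python
# With the `datasets` library the lookup is built in (shown only as
# comments, since it needs a download):
#   ds = datasets.load_dataset("boolq")
#   ds["train"].unique("label")          # Dataset.unique() scans the column
#   ds["train"].features["label"].names  # ClassLabel columns carry their names
#
# The fallback the question describes: collect every label, then dedupe.
train_labels = [0, 1, 2, 0, 1, 2]  # stand-in for train_data["label"]
label_set = set(train_labels)
print(label_set)  # {0, 1, 2}
```
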
764,016,850
1,513
app_reviews_by_users
Software Applications User Reviews
closed
https://github.com/huggingface/datasets/pull/1513
2020-12-12T16:23:49
2020-12-14T20:45:24
2020-12-14T20:45:24
{ "login": "darshan-gandhi", "id": 44197177, "type": "User" }
[]
true
[]
764,010,722
1,512
Add Hippocorpus Dataset
closed
https://github.com/huggingface/datasets/pull/1512
2020-12-12T16:17:53
2020-12-13T05:09:08
2020-12-13T05:08:58
{ "login": "manandey", "id": 6687858, "type": "User" }
[]
true
[]
764,006,477
1,511
poleval cyberbullying
closed
https://github.com/huggingface/datasets/pull/1511
2020-12-12T16:13:44
2020-12-17T16:20:59
2020-12-17T16:19:58
{ "login": "czabo", "id": 75574105, "type": "User" }
[]
true
[]
763,980,369
1,510
Add Dataset for (qa_srl)Question-Answer Driven Semantic Role Labeling
- Added tags, Readme file
- Added code changes
closed
https://github.com/huggingface/datasets/pull/1510
2020-12-12T15:48:11
2020-12-17T16:06:22
2020-12-17T16:06:22
{ "login": "bpatidar", "id": 12439573, "type": "User" }
[]
true
[]
763,964,857
1,509
Added dataset Makhzan
Need help with the dummy data.
closed
https://github.com/huggingface/datasets/pull/1509
2020-12-12T15:34:07
2020-12-16T15:04:52
2020-12-16T15:04:52
{ "login": "arkhalid", "id": 14899066, "type": "User" }
[]
true
[]
763,908,724
1,508
Fix namedsplit docs
Fixes a broken link and `DatasetInfoMixin.split`'s docstring.
closed
https://github.com/huggingface/datasets/pull/1508
2020-12-12T14:43:38
2021-03-11T02:18:39
2020-12-15T12:57:48
{ "login": "mariosasko", "id": 47462742, "type": "User" }
[]
true
[]
763,857,872
1,507
Add SelQA Dataset
Add the SelQA Dataset, a new benchmark for selection-based question answering tasks Repo: https://github.com/emorynlp/selqa/ Paper: https://arxiv.org/pdf/1606.08513.pdf
closed
https://github.com/huggingface/datasets/pull/1507
2020-12-12T13:58:07
2020-12-16T16:49:23
2020-12-16T16:49:23
{ "login": "bharatr21", "id": 13381361, "type": "User" }
[]
true
[]
763,846,074
1,506
Add nq_open question answering dataset
Added nq_open Open-domain question answering dataset. The NQ-Open task is currently being used to evaluate submissions to the EfficientQA competition, which is part of the NeurIPS 2020 competition track.
closed
https://github.com/huggingface/datasets/pull/1506
2020-12-12T13:46:48
2020-12-17T15:34:50
2020-12-17T15:34:50
{ "login": "Nilanshrajput", "id": 28673745, "type": "User" }
[]
true
[]
763,750,773
1,505
add ilist dataset
This PR will add Indo-Aryan Language Identification Shared Task Dataset.
closed
https://github.com/huggingface/datasets/pull/1505
2020-12-12T12:44:12
2020-12-17T15:43:07
2020-12-17T15:43:07
{ "login": "thevasudevgupta", "id": 53136577, "type": "User" }
[]
true
[]
763,697,231
1,504
Add SentiWS dataset for pos-tagging and sentiment-scoring (German)
closed
https://github.com/huggingface/datasets/pull/1504
2020-12-12T12:17:53
2020-12-15T18:32:38
2020-12-15T18:32:38
{ "login": "harshalmittal4", "id": 24206326, "type": "User" }
[]
true
[]
763,667,489
1,503
Adding COVID QA dataset in Chinese and English from UC SanDiego
closed
https://github.com/huggingface/datasets/pull/1503
2020-12-12T12:02:48
2021-02-16T05:29:18
2020-12-17T15:29:26
{ "login": "vrindaprabhu", "id": 16264631, "type": "User" }
[]
true
[]
763,658,208
1,502
Add Senti_Lex Dataset
TODO:
- Fix feature format issue
- Create dataset_info.json file
- Run pytests
- Make Style
closed
https://github.com/huggingface/datasets/pull/1502
2020-12-12T11:55:29
2020-12-28T14:01:12
2020-12-28T14:01:12
{ "login": "KMFODA", "id": 35491698, "type": "User" }
[]
true
[]
763,517,647
1,501
Adds XED dataset
closed
https://github.com/huggingface/datasets/pull/1501
2020-12-12T09:47:00
2020-12-14T21:20:59
2020-12-14T21:20:59
{ "login": "harshalmittal4", "id": 24206326, "type": "User" }
[]
true
[]
763,479,305
1,500
adding polsum
closed
https://github.com/huggingface/datasets/pull/1500
2020-12-12T09:05:29
2020-12-18T09:43:43
2020-12-18T09:43:43
{ "login": "kldarek", "id": 15803781, "type": "User" }
[]
true
[]
763,464,693
1,499
update the dataset id_newspapers_2018
Hi, I need to update the link to the dataset. The link in the previous PR was to a small test dataset. Thanks
closed
https://github.com/huggingface/datasets/pull/1499
2020-12-12T08:47:12
2020-12-14T15:28:07
2020-12-14T15:28:07
{ "login": "cahya-wirawan", "id": 7669893, "type": "User" }
[]
true
[]
763,303,606
1,498
add stereoset
StereoSet is a dataset that measures stereotype bias in language models. StereoSet consists of 17,000 sentences that measures model preferences across gender, race, religion, and profession.
closed
https://github.com/huggingface/datasets/pull/1498
2020-12-12T05:04:37
2020-12-18T10:03:53
2020-12-18T10:03:53
{ "login": "cstorm125", "id": 15519308, "type": "User" }
[]
true
[]
763,180,824
1,497
adding fake-news-english-5
closed
https://github.com/huggingface/datasets/pull/1497
2020-12-12T02:13:11
2020-12-17T20:07:17
2020-12-17T20:07:17
{ "login": "MisbahKhan789", "id": 15351802, "type": "User" }
[]
true
[]
763,091,663
1,496
Add Multi-Dimensional Gender Bias classification data
https://parl.ai/projects/md_gender/ Mostly has the ABOUT dimension since the others are inferred from other datasets in most cases. I tried to keep the dummy data small but one of the configs has 140 splits ( > 56KB data)
closed
https://github.com/huggingface/datasets/pull/1496
2020-12-12T00:17:37
2020-12-14T21:14:55
2020-12-14T21:14:55
{ "login": "yjernite", "id": 10469459, "type": "User" }
[]
true
[]
763,025,562
1,495
Opus DGT added
Dataset : http://opus.nlpl.eu/DGT.php
closed
https://github.com/huggingface/datasets/pull/1495
2020-12-11T23:05:09
2020-12-17T14:38:41
2020-12-17T14:38:41
{ "login": "rkc007", "id": 22396042, "type": "User" }
[]
true
[]
762,992,601
1,494
Added Opus Wikipedia
Dataset : http://opus.nlpl.eu/Wikipedia.php
closed
https://github.com/huggingface/datasets/pull/1494
2020-12-11T22:28:03
2020-12-17T14:38:28
2020-12-17T14:38:28
{ "login": "rkc007", "id": 22396042, "type": "User" }
[]
true
[]
762,979,415
1,493
Added RONEC dataset.
closed
https://github.com/huggingface/datasets/pull/1493
2020-12-11T22:14:50
2020-12-21T14:48:56
2020-12-21T14:48:56
{ "login": "iliemihai", "id": 2815308, "type": "User" }
[]
true
[]
762,965,239
1,492
OPUS UBUNTU dataset
Dataset : http://opus.nlpl.eu/Ubuntu.php
closed
https://github.com/huggingface/datasets/pull/1492
2020-12-11T22:01:37
2020-12-17T14:38:16
2020-12-17T14:38:15
{ "login": "rkc007", "id": 22396042, "type": "User" }
[]
true
[]
762,920,920
1,491
added opus GNOME data
Dataset : http://opus.nlpl.eu/GNOME.php
closed
https://github.com/huggingface/datasets/pull/1491
2020-12-11T21:21:51
2020-12-17T14:20:23
2020-12-17T14:20:23
{ "login": "rkc007", "id": 22396042, "type": "User" }
[]
true
[]
762,915,346
1,490
ADD: opus_rf dataset for translation
Passed all local tests. Hopefully passes all Circle CI tests too. Tried to keep the commit history clean.
closed
https://github.com/huggingface/datasets/pull/1490
2020-12-11T21:16:43
2020-12-13T19:12:24
2020-12-13T19:12:24
{ "login": "akshayb7", "id": 29649801, "type": "User" }
[]
true
[]
762,908,763
1,489
Fake news english 4
closed
https://github.com/huggingface/datasets/pull/1489
2020-12-11T21:10:35
2020-12-12T19:39:52
2020-12-12T19:38:09
{ "login": "MisbahKhan789", "id": 15351802, "type": "User" }
[]
true
[]
762,860,679
1,488
Adding NELL
NELL is a knowledge base and knowledge graph along with sentences used to create the KB. See http://rtw.ml.cmu.edu/rtw/ for more details.
closed
https://github.com/huggingface/datasets/pull/1488
2020-12-11T20:25:25
2021-01-07T08:37:07
2020-12-21T14:45:00
{ "login": "huu4ontocord", "id": 8900094, "type": "User" }
[]
true
[]
762,794,921
1,487
added conv_ai_3 dataset
Dataset : https://github.com/aliannejadi/ClariQ/
closed
https://github.com/huggingface/datasets/pull/1487
2020-12-11T19:26:26
2020-12-28T09:38:40
2020-12-28T09:38:39
{ "login": "rkc007", "id": 22396042, "type": "User" }
[]
true
[]
762,790,102
1,486
hate speech 18 dataset
This is again a PR instead of #1339, because something went wrong there.
closed
https://github.com/huggingface/datasets/pull/1486
2020-12-11T19:22:14
2020-12-14T19:43:18
2020-12-14T19:43:18
{ "login": "czabo", "id": 75574105, "type": "User" }
[]
true
[]
762,774,822
1,485
Re-added wiki_movies dataset due to previous PR having changes from m…
…any other unassociated files.
closed
https://github.com/huggingface/datasets/pull/1485
2020-12-11T19:07:48
2020-12-14T14:08:22
2020-12-14T14:08:22
{ "login": "aclifton314", "id": 53267795, "type": "User" }
[]
true
[]
762,747,096
1,484
Add peer-read dataset
closed
https://github.com/huggingface/datasets/pull/1484
2020-12-11T18:43:44
2020-12-21T09:40:50
2020-12-21T09:40:50
{ "login": "vinaykudari", "id": 34424769, "type": "User" }
[]
true
[]
762,712,337
1,483
Added Times of India News Headlines Dataset
Dataset name: Times of India News Headlines link: https://dataverse.harvard.edu/dataset.xhtml?persistentId=doi:10.7910/DVN/DPQMQH
closed
https://github.com/huggingface/datasets/pull/1483
2020-12-11T18:12:38
2020-12-14T18:08:08
2020-12-14T18:08:08
{ "login": "tanmoyio", "id": 33005287, "type": "User" }
[]
true
[]
762,686,820
1,482
Adding medical database chinese and english
Error in creating dummy dataset
closed
https://github.com/huggingface/datasets/pull/1482
2020-12-11T17:50:39
2021-02-16T05:28:36
2020-12-15T18:23:53
{ "login": "vrindaprabhu", "id": 16264631, "type": "User" }
[]
true
[]
762,579,658
1,481
Fix ADD_NEW_DATASET to avoid rebasing once pushed
closed
https://github.com/huggingface/datasets/pull/1481
2020-12-11T16:27:49
2021-01-07T10:10:20
2021-01-07T10:10:20
{ "login": "albertvillanova", "id": 8515462, "type": "User" }
[]
true
[]
762,530,805
1,480
Adding the Mac-Morpho dataset
Adding the Mac-Morpho dataset, a Portuguese language dataset for Part-of-speech tagging tasks
closed
https://github.com/huggingface/datasets/pull/1480
2020-12-11T16:01:38
2020-12-21T10:03:37
2020-12-21T10:03:37
{ "login": "jonatasgrosman", "id": 5097052, "type": "User" }
[]
true
[]
762,320,736
1,479
Add narrativeQA
Redo of #1368 #309 #499 In redoing the dummy data a few times, I ended up adding a load of files to git. Hopefully this should work.
closed
https://github.com/huggingface/datasets/pull/1479
2020-12-11T12:58:31
2020-12-11T13:33:23
2020-12-11T13:33:23
{ "login": "ghomasHudson", "id": 13795113, "type": "User" }
[]
true
[]
762,293,076
1,478
Inconsistent argument names.
Just find it a wee bit odd that in the transformers library `predictions` are those made by the model: https://github.com/huggingface/transformers/blob/master/src/transformers/trainer_utils.py#L51-L61 While in many datasets metrics they are the ground truth labels: https://github.com/huggingface/datasets/blob/c3f53792a744ede18d748a1133b6597fdd2d8d18/metrics/accuracy/accuracy.py#L31-L40 Do you think predictions & references should be swapped? I'd be willing to do some refactoring here if you agree.
closed
https://github.com/huggingface/datasets/issues/1478
2020-12-11T12:19:38
2020-12-19T15:03:39
2020-12-19T15:03:39
{ "login": "Fraser-Greenlee", "id": 8402500, "type": "User" }
[]
false
[]
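The naming mix-up reported above is easy to miss with accuracy in particular, because accuracy is symmetric in its two arguments, so swapping `predictions` and `references` does not change the score; an order-sensitive metric would expose the swap immediately. A pure-Python sketch (hypothetical helper, not the library's implementation):

```python
# Accuracy compares aligned pairs, so exchanging the two argument
# lists leaves the result unchanged.
def accuracy(predictions, references):
    assert len(predictions) == len(references)
    return sum(p == r for p, r in zip(predictions, references)) / len(references)

preds = [1, 0, 1, 1]
refs  = [1, 1, 1, 0]
print(accuracy(preds, refs))  # 0.5
print(accuracy(refs, preds))  # 0.5  (same: the metric is symmetric)
```
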
762,288,811
1,477
Jigsaw toxicity pred
Managed to mess up my original pull request, opening a fresh one incorporating the changes suggested by @lhoestq.
closed
https://github.com/huggingface/datasets/pull/1477
2020-12-11T12:13:20
2020-12-14T13:19:35
2020-12-14T13:19:35
{ "login": "taihim", "id": 13764071, "type": "User" }
[]
true
[]
762,256,048
1,476
Add Spanish Billion Words Corpus
Add an unannotated Spanish corpus of nearly 1.5 billion words, compiled from different resources from the web.
closed
https://github.com/huggingface/datasets/pull/1476
2020-12-11T11:24:58
2020-12-17T17:04:08
2020-12-14T13:14:31
{ "login": "mariagrandury", "id": 57645283, "type": "User" }
[]
true
[]
762,187,000
1,475
Fix XML iterparse in opus_dogc dataset
I forgot to add `elem.clear()` to clear the element from memory.
closed
https://github.com/huggingface/datasets/pull/1475
2020-12-11T10:08:18
2020-12-17T11:28:47
2020-12-17T11:28:46
{ "login": "albertvillanova", "id": 8515462, "type": "User" }
[]
true
[]
762,083,706
1,474
Create JSON dummy data without loading all dataset in memory
See #1442. The statement `json.load()` loads **all the file content in memory**. In order to avoid this, file content should be parsed **iteratively**, by using the library `ijson` e.g. I have refactorized the code into a function `_create_json_dummy_data` and I have added some tests.
open
https://github.com/huggingface/datasets/pull/1474
2020-12-11T08:44:23
2022-07-06T15:19:47
null
{ "login": "albertvillanova", "id": 8515462, "type": "User" }
[]
true
[]
762,055,694
1,473
add srwac
closed
https://github.com/huggingface/datasets/pull/1473
2020-12-11T08:20:29
2020-12-17T11:40:59
2020-12-17T11:40:59
{ "login": "IvanZidov", "id": 11391118, "type": "User" }
[]
true
[]
762,037,907
1,472
add Srwac
closed
https://github.com/huggingface/datasets/pull/1472
2020-12-11T08:04:57
2020-12-11T08:08:12
2020-12-11T08:05:54
{ "login": "IvanZidov", "id": 11391118, "type": "User" }
[]
true
[]
761,842,512
1,471
Adding the HAREM dataset
Adding the HAREM dataset, a Portuguese language dataset for NER tasks
closed
https://github.com/huggingface/datasets/pull/1471
2020-12-11T03:21:10
2020-12-22T10:37:33
2020-12-22T10:37:33
{ "login": "jonatasgrosman", "id": 5097052, "type": "User" }
[]
true
[]
761,791,065
1,470
Add wiki lingua dataset
Hello @lhoestq , I am opening a fresh pull request as advised in my original PR https://github.com/huggingface/datasets/pull/1308 Thanks
closed
https://github.com/huggingface/datasets/pull/1470
2020-12-11T02:04:18
2020-12-16T15:27:13
2020-12-16T15:27:13
{ "login": "katnoria", "id": 7674948, "type": "User" }
[]
true
[]
761,611,315
1,469
ADD: Wino_bias dataset
Updated PR to counter messed up history of previous one (https://github.com/huggingface/datasets/pull/1235) due to rebase. Removed manual downloading of dataset.
closed
https://github.com/huggingface/datasets/pull/1469
2020-12-10T20:59:45
2020-12-13T19:13:57
2020-12-13T19:13:57
{ "login": "akshayb7", "id": 29649801, "type": "User" }
[]
true
[]
761,607,531
1,468
add Indonesian newspapers (id_newspapers_2018)
The dataset contains around 500K articles (136M of words) from 7 Indonesian newspapers. The size of uncompressed 500K json files (newspapers-json.tgz) is around 2.2GB.
closed
https://github.com/huggingface/datasets/pull/1468
2020-12-10T20:54:12
2020-12-12T08:50:51
2020-12-11T17:04:41
{ "login": "cahya-wirawan", "id": 7669893, "type": "User" }
[]
true
[]
761,557,290
1,467
adding snow_simplified_japanese_corpus
Adding simplified Japanese corpus "SNOW T15" and "SNOW T23". They contain original Japanese, simplified Japanese, and original English (the original text is gotten from en-ja translation corpus). Hence, it can be used not only for Japanese simplification but also for en-ja translation. - http://www.jnlp.org/SNOW/T15 - http://www.jnlp.org/SNOW/T23
closed
https://github.com/huggingface/datasets/pull/1467
2020-12-10T19:45:03
2020-12-17T13:22:48
2020-12-17T11:25:34
{ "login": "forest1988", "id": 2755894, "type": "User" }
[]
true
[]
761,554,357
1,466
Add Turkish News Category Dataset (270K).Updates were made for review…
This PR adds the **Turkish News Categories Dataset (270K)** dataset which is a text classification dataset by me and @yavuzKomecoglu. Turkish news dataset consisting of **273601 news in 17 categories**, compiled from printed media and news websites between 2010 and 2017 by the [Interpress](https://www.interpress.com/) media monitoring company. **Note**: Resubmitted as a clean version of the previous Pull Request(#1419). @SBrandeis @lhoestq
closed
https://github.com/huggingface/datasets/pull/1466
2020-12-10T19:41:12
2020-12-11T14:27:15
2020-12-11T14:27:15
{ "login": "basakbuluz", "id": 41359672, "type": "User" }
[]
true
[]
761,538,931
1,465
Add clean menyo20k data
New Clean PR for menyo20k_mt
closed
https://github.com/huggingface/datasets/pull/1465
2020-12-10T19:22:00
2020-12-14T10:30:21
2020-12-14T10:30:21
{ "login": "yvonnegitau", "id": 7923902, "type": "User" }
[]
true
[]
761,533,566
1,464
Reddit jokes
196k Reddit Jokes dataset Dataset link- https://raw.githubusercontent.com/taivop/joke-dataset/master/reddit_jokes.json
closed
https://github.com/huggingface/datasets/pull/1464
2020-12-10T19:15:19
2020-12-10T20:14:00
2020-12-10T20:14:00
{ "login": "tanmoyio", "id": 33005287, "type": "User" }
[]
true
[]
761,510,908
1,463
Adding enriched_web_nlg features + handling xml bugs
This PR adds features of the enriched_web_nlg dataset that were not present yet (most notably sorted rdf triplet sets), and deals with some xml issues that led to returning no data in cases where surgery could be performed to salvage it.
closed
https://github.com/huggingface/datasets/pull/1463
2020-12-10T18:48:19
2020-12-17T10:44:35
2020-12-17T10:44:34
{ "login": "TevenLeScao", "id": 26709476, "type": "User" }
[]
true
[]
761,489,274
1,462
Added conv ai 2 (Again)
The original PR -> https://github.com/huggingface/datasets/pull/1383 Reason for creating again - The reason I had to create the PR again was due to the master rebasing issue. After rebasing the changes, all the previous commits got added to the branch.
closed
https://github.com/huggingface/datasets/pull/1462
2020-12-10T18:21:55
2020-12-13T00:21:32
2020-12-13T00:21:31
{ "login": "rkc007", "id": 22396042, "type": "User" }
[]
true
[]
761,415,420
1,461
Adding NewsQA dataset
Since the dataset has legal restrictions to circulate the original data. It has to be manually downloaded by the user and loaded to the library.
closed
https://github.com/huggingface/datasets/pull/1461
2020-12-10T17:01:10
2020-12-17T18:29:03
2020-12-17T18:27:36
{ "login": "rsanjaykamath", "id": 18527321, "type": "User" }
[]
true
[]
761,349,149
1,460
add Bengali Hate Speech dataset
closed
https://github.com/huggingface/datasets/pull/1460
2020-12-10T15:40:55
2021-09-17T16:54:53
2021-01-04T14:08:29
{ "login": "stevhliu", "id": 59462357, "type": "User" }
[]
true
[]
761,258,395
1,459
Add Google Conceptual Captions Dataset
closed
https://github.com/huggingface/datasets/pull/1459
2020-12-10T13:50:33
2022-04-14T13:14:19
2022-04-14T13:07:49
{ "login": "abhishekkrthakur", "id": 1183441, "type": "User" }
[]
true
[]
761,235,962
1,458
Add id_nergrit_corpus
Nergrit Corpus is a dataset collection of Indonesian Named Entity Recognition, Statement Extraction, and Sentiment Analysis. Recently my PR for id_nergrit_ner has been accepted and merged to the main branch. The id_nergrit_ner has only one dataset (NER), and this new PR renamed the dataset from id_nergrit_ner to id_nergrit_corpus and added 2 other remaining datasets (Statement Extraction, and Sentiment Analysis.)
closed
https://github.com/huggingface/datasets/pull/1458
2020-12-10T13:20:34
2020-12-17T10:45:15
2020-12-17T10:45:15
{ "login": "cahya-wirawan", "id": 7669893, "type": "User" }
[]
true
[]
761,232,610
1,457
add hrenwac_para
closed
https://github.com/huggingface/datasets/pull/1457
2020-12-10T13:16:20
2020-12-10T13:35:54
2020-12-10T13:35:10
{ "login": "IvanZidov", "id": 11391118, "type": "User" }
[]
true
[]
761,231,296
1,456
Add CC100 Dataset
Closes #773
closed
https://github.com/huggingface/datasets/pull/1456
2020-12-10T13:14:37
2020-12-14T10:20:09
2020-12-14T10:20:08
{ "login": "abhishekkrthakur", "id": 1183441, "type": "User" }
[]
true
[]
761,205,073
1,455
Add HEAD-QA: A Healthcare Dataset for Complex Reasoning
HEAD-QA is a multi-choice HEAlthcare Dataset, the questions come from exams to access a specialized position in the Spanish healthcare system.
closed
https://github.com/huggingface/datasets/pull/1455
2020-12-10T12:36:56
2020-12-17T17:03:32
2020-12-17T16:58:11
{ "login": "mariagrandury", "id": 57645283, "type": "User" }
[]
true
[]
761,199,862
1,454
Add kinnews_kirnews
Add kinnews and kirnews
closed
https://github.com/huggingface/datasets/pull/1454
2020-12-10T12:29:08
2020-12-17T18:34:16
2020-12-17T18:34:16
{ "login": "saradhix", "id": 1351362, "type": "User" }
[]
true
[]
761,188,657
1,453
Adding ethos dataset clean
I addressed the comments on the PR1318
closed
https://github.com/huggingface/datasets/pull/1453
2020-12-10T12:13:21
2020-12-14T15:00:46
2020-12-14T10:31:24
{ "login": "iamollas", "id": 22838900, "type": "User" }
[]
true
[]
761,104,924
1,452
SNLI dataset contains labels with value -1
```
import datasets

nli_data = datasets.load_dataset("snli")
train_data = nli_data['train']
train_labels = train_data['label']
label_set = set(train_labels)
print(label_set)
```
**Output:** `{0, 1, 2, -1}`
closed
https://github.com/huggingface/datasets/issues/1452
2020-12-10T10:16:55
2020-12-10T17:49:55
2020-12-10T17:49:55
{ "login": "aarnetalman", "id": 11405654, "type": "User" }
[]
false
[]
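Context for the report above: SNLI examples on which annotators reached no consensus carry the gold label `-` in the raw release, and the loader maps them to `-1`. A minimal sketch of dropping them; the `filter` call is the usual `datasets` idiom and is shown only in a comment, while the runnable part uses plain lists:

```python
# With the library (needs a download, so left as a comment):
#   clean = nli_data["train"].filter(lambda ex: ex["label"] != -1)
#
# The same idea on plain lists:
labels = [0, 1, 2, -1, 1, -1]
kept = [l for l in labels if l != -1]
print(kept)  # [0, 1, 2, 1]
```
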
761,102,770
1,451
Add European Center for Disease Control and Preventions's (ECDC) Translation Memory dataset
ECDC-TM homepage: https://ec.europa.eu/jrc/en/language-technologies/ecdc-translation-memory
closed
https://github.com/huggingface/datasets/pull/1451
2020-12-10T10:14:20
2020-12-11T16:50:09
2020-12-11T16:50:09
{ "login": "SBrandeis", "id": 33657802, "type": "User" }
[]
true
[]
761,102,429
1,450
Fix version in bible_para
closed
https://github.com/huggingface/datasets/pull/1450
2020-12-10T10:13:55
2020-12-11T16:40:41
2020-12-11T16:40:40
{ "login": "abhishekkrthakur", "id": 1183441, "type": "User" }
[]
true
[]
761,083,210
1,449
add W&I + LOCNESS dataset (BEA-2019 workshop shared task on GEC) [PROPER]
- **Name:** W&I + LOCNESS dataset (from the BEA-2019 workshop shared task on GEC)
- **Description:** https://www.cl.cam.ac.uk/research/nl/bea2019st/#data
- **Paper:** https://www.aclweb.org/anthology/W19-4406/
- **Motivation:** This is a recent dataset (actually two in one) for grammatical error correction and is used for benchmarking in this field of NLP.

### Checkbox

- [x] Create the dataset script `/datasets/my_dataset/my_dataset.py` using the template
- [x] Fill the `_DESCRIPTION` and `_CITATION` variables
- [x] Implement `_infos()`, `_split_generators()` and `_generate_examples()`
- [x] Make sure that the `BUILDER_CONFIGS` class attribute is filled with the different configurations of the dataset and that the `BUILDER_CONFIG_CLASS` is specified if there is a custom config class.
- [x] Generate the metadata file `dataset_infos.json` for all configurations
- [x] Generate the dummy data `dummy_data.zip` files to have the dataset script tested and that they don't weigh too much (<50KB)
- [x] Add the dataset card `README.md` using the template: fill the tags and the various paragraphs
- [x] Both tests for the real data and the dummy data pass.
closed
https://github.com/huggingface/datasets/pull/1449
2020-12-10T09:51:08
2020-12-11T17:07:46
2020-12-11T17:07:46
{ "login": "aseifert", "id": 4944799, "type": "User" }
[]
true
[]
761,080,776
1,448
add thai_toxicity_tweet
Thai Toxicity Tweet Corpus contains 3,300 tweets (506 tweets with texts missing) annotated by humans with guidelines including a 44-word dictionary. The author obtained 2,027 and 1,273 toxic and non-toxic tweets, respectively; these were labeled by three annotators. The result of corpus analysis indicates that tweets that include toxic words are not always toxic. Further, it is more likely that a tweet is toxic, if it contains toxic words indicating their original meaning. Moreover, disagreements in annotation are primarily because of sarcasm, unclear existing target, and word sense ambiguity. Notes from data cleaner: The data is included into [huggingface/datasets](https://www.github.com/huggingface/datasets) in Dec 2020. By this time, 506 of the tweets are not available publicly anymore. We denote these by `TWEET_NOT_FOUND` in `tweet_text`. Processing can be found at [this PR](https://github.com/tmu-nlp/ThaiToxicityTweetCorpus/pull/1).
closed
https://github.com/huggingface/datasets/pull/1448
2020-12-10T09:48:02
2020-12-11T16:21:27
2020-12-11T16:21:27
{ "login": "cstorm125", "id": 15519308, "type": "User" }
[]
true
[]
761,067,955
1,447
Update step-by-step guide for windows
Update step-by-step guide for windows to give an alternative to `make style`.
closed
https://github.com/huggingface/datasets/pull/1447
2020-12-10T09:30:59
2020-12-10T12:18:47
2020-12-10T09:31:14
{ "login": "thomwolf", "id": 7353373, "type": "User" }
[]
true
[]
761,060,323
1,446
Add Bing Coronavirus Query Set
closed
https://github.com/huggingface/datasets/pull/1446
2020-12-10T09:20:46
2020-12-11T17:03:08
2020-12-11T17:03:07
{ "login": "abhishekkrthakur", "id": 1183441, "type": "User" }
[]
true
[]
761,057,851
1,445
Added dataset clickbait_news_bg
closed
https://github.com/huggingface/datasets/pull/1445
2020-12-10T09:17:28
2020-12-15T07:45:19
2020-12-15T07:45:19
{ "login": "tsvm", "id": 1083319, "type": "User" }
[]
true
[]
761,055,651
1,444
FileNotFound remotely, can't load a dataset
```py
!pip install datasets

import datasets as ds
corpus = ds.load_dataset('large_spanish_corpus')
```

gives the error

> FileNotFoundError: Couldn't find file locally at large_spanish_corpus/large_spanish_corpus.py, or remotely at https://raw.githubusercontent.com/huggingface/datasets/1.1.3/datasets/large_spanish_corpus/large_spanish_corpus.py or https://s3.amazonaws.com/datasets.huggingface.co/datasets/datasets/large_spanish_corpus/large_spanish_corpus.py

Not only `large_spanish_corpus` fails — `zest` does too, while `squad` is available. This happens both on Colab and locally.
closed
https://github.com/huggingface/datasets/issues/1444
2020-12-10T09:14:47
2020-12-15T17:41:14
2020-12-15T17:41:14
{ "login": "sadakmed", "id": 18331629, "type": "User" }
[]
false
[]
761,033,061
1,443
Add OPUS Wikimedia Translations Dataset
null
closed
https://github.com/huggingface/datasets/pull/1443
2020-12-10T08:43:02
2023-09-24T09:40:41
2022-10-03T09:38:48
{ "login": "abhishekkrthakur", "id": 1183441, "type": "User" }
[ { "name": "dataset contribution", "color": "0e8a16" } ]
true
[]
761,026,069
1,442
Create XML dummy data without loading all dataset in memory
While I was adding one XML dataset, I noticed that the whole dataset was loaded into memory during the dummy data generation process (using nearly all my laptop's RAM). Looking at the code, I found that the origin is the use of `ET.parse()`. This method loads **all the file content in memory**. In order to fix this, I have refactored the code to use `ET.iterparse()` instead, which **parses the file content incrementally**. I have also implemented a test.
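For illustration, the incremental pattern looks roughly like this (a minimal sketch of the `ET.iterparse()` approach, not the actual PR code; the function and tag names are hypothetical):

```python
import xml.etree.ElementTree as ET

def iter_records(path, tag):
    """Yield elements matching `tag` without building the whole tree in memory."""
    # iterparse streams events as the file is read, instead of ET.parse(),
    # which materializes the entire document first.
    for _event, elem in ET.iterparse(path, events=("end",)):
        if elem.tag == tag:
            yield elem
            elem.clear()  # release the element's children once it is processed
```

Clearing each element after use is what keeps memory bounded; without `elem.clear()`, the root element would still accumulate references to every parsed child.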
closed
https://github.com/huggingface/datasets/pull/1442
2020-12-10T08:32:07
2020-12-17T09:59:43
2020-12-17T09:59:43
{ "login": "albertvillanova", "id": 8515462, "type": "User" }
[]
true
[]
761,021,823
1,441
Add Igbo-English Machine Translation Dataset
closed
https://github.com/huggingface/datasets/pull/1441
2020-12-10T08:25:34
2020-12-11T15:54:53
2020-12-11T15:54:52
{ "login": "abhishekkrthakur", "id": 1183441, "type": "User" }
[]
true
[]
760,973,057
1,440
Adding english plaintext jokes dataset
This PR adds a dataset of 200k English plaintext jokes from three sources: Reddit, Stupidstuff, and Wocka. Link: https://github.com/taivop/joke-dataset This is my second PR. First was: [#1269](https://github.com/huggingface/datasets/pull/1269)
closed
https://github.com/huggingface/datasets/pull/1440
2020-12-10T07:04:17
2020-12-13T05:22:00
2020-12-12T05:55:43
{ "login": "purvimisal", "id": 22298787, "type": "User" }
[]
true
[]
760,968,410
1,439
Update README.md
1k-10k -> 1k-1M. Three separate configs are available, with a minimum of 1k and a maximum of 211.3k examples.
closed
https://github.com/huggingface/datasets/pull/1439
2020-12-10T06:57:01
2020-12-11T15:22:53
2020-12-11T15:22:53
{ "login": "tuner007", "id": 46425391, "type": "User" }
[]
true
[]
760,962,193
1,438
A descriptive name for my changes
HindEnCorp resubmitted
closed
https://github.com/huggingface/datasets/pull/1438
2020-12-10T06:47:24
2020-12-15T10:36:27
2020-12-15T10:36:26
{ "login": "rahul-art", "id": 56379013, "type": "User" }
[]
true
[]
760,891,879
1,437
Add Indosum dataset
null
closed
https://github.com/huggingface/datasets/pull/1437
2020-12-10T05:02:00
2022-10-03T09:38:54
2022-10-03T09:38:54
{ "login": "prasastoadi", "id": 11614678, "type": "User" }
[ { "name": "dataset contribution", "color": "0e8a16" } ]
true
[]
760,873,132
1,436
add ALT
ALT dataset -- https://www2.nict.go.jp/astrec-att/member/mutiyama/ALT/
closed
https://github.com/huggingface/datasets/pull/1436
2020-12-10T04:17:21
2020-12-13T16:14:18
2020-12-11T15:52:41
{ "login": "chameleonTK", "id": 6429850, "type": "User" }
[]
true
[]