| column | type | details |
| --- | --- | --- |
| title | string | lengths 1-290 |
| body | string | lengths 0-228k |
| html_url | string | lengths 46-51 |
| comments | list | |
| pull_request | dict | |
| number | int64 | 1-5.59k |
| is_pull_request | bool | 2 classes |
Add SelQA Dataset
Add the SelQA Dataset, a new benchmark for selection-based question answering tasks.
Repo: https://github.com/emorynlp/selqa/
Paper: https://arxiv.org/pdf/1606.08513.pdf
https://github.com/huggingface/datasets/pull/1507
[ "Hii please follow me", "The CI error `FAILED tests/test_file_utils.py::TempSeedTest::test_tensorflow` is not related with this dataset and is fixed on master. You can ignore it", "merging since the Ci is fixed on master" ]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1507", "html_url": "https://github.com/huggingface/datasets/pull/1507", "diff_url": "https://github.com/huggingface/datasets/pull/1507.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1507.patch", "merged_at": "2020-12-16T16:49:23" }
1,507
true
Add nq_open question answering dataset
Added nq_open, an open-domain question answering dataset. The NQ-Open task is currently being used to evaluate submissions to the EfficientQA competition, which is part of the NeurIPS 2020 competition track.
https://github.com/huggingface/datasets/pull/1506
[ "@SBrandeis thanks for the review, I applied your suggested changes, but CI is failing now not sure about the error.", "Many thanks @Nilanshrajput !\r\nThe failing tests on CI are not related to your changes, merging master on your branch should fix them :)\r\nIf you're interested in what causes the CI to fail,...
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1506", "html_url": "https://github.com/huggingface/datasets/pull/1506", "diff_url": "https://github.com/huggingface/datasets/pull/1506.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1506.patch", "merged_at": null }
1,506
true
add ilist dataset
This PR will add Indo-Aryan Language Identification Shared Task Dataset.
https://github.com/huggingface/datasets/pull/1505
[]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1505", "html_url": "https://github.com/huggingface/datasets/pull/1505", "diff_url": "https://github.com/huggingface/datasets/pull/1505.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1505.patch", "merged_at": "2020-12-17T15:43:07" }
1,505
true
Add SentiWS dataset for pos-tagging and sentiment-scoring (German)
https://github.com/huggingface/datasets/pull/1504
[ "Hi @lhoestq @yjernite, requesting you to review this for any changes needed. Thanks! :)", "Hi @lhoestq , I have updated the PR" ]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1504", "html_url": "https://github.com/huggingface/datasets/pull/1504", "diff_url": "https://github.com/huggingface/datasets/pull/1504.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1504.patch", "merged_at": "2020-12-15T18:32:38" }
1,504
true
Adding COVID QA dataset in Chinese and English from UC SanDiego
https://github.com/huggingface/datasets/pull/1503
[ "Changed the pre-processing based on the comments raised in [PR-1482](https://github.com/huggingface/datasets/pull/1482).The below command is passing in my local environment:\r\n\r\n`python datasets-cli test datasets/covid_qa_ucsd/ --save_infos --all_configs --data_dir ~/Downloads/Medical-Dialogue-Dataset/CovidDail...
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1503", "html_url": "https://github.com/huggingface/datasets/pull/1503", "diff_url": "https://github.com/huggingface/datasets/pull/1503.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1503.patch", "merged_at": "2020-12-17T15:29:26" }
1,503
true
Add Senti_Lex Dataset
TODO:
- Fix feature format issue
- Create dataset_info.json file
- Run pytests
- Make style
https://github.com/huggingface/datasets/pull/1502
[ "Better will be if you close this PR and make a fresh PR", "Feel free to ping me if you also have questions about the dummy data", "also it looks like this PR includes changes about dummy_data.zip files in the ./datasets//un_pc folder. Can you remove them ?", "Thanks for all the advice @lhoestq. I've implemen...
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1502", "html_url": "https://github.com/huggingface/datasets/pull/1502", "diff_url": "https://github.com/huggingface/datasets/pull/1502.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1502.patch", "merged_at": "2020-12-28T14:01:12" }
1,502
true
Adds XED dataset
https://github.com/huggingface/datasets/pull/1501
[ "Hi @lhoestq @yjernite, requesting you to review this for any changes needed. Thanks! :)" ]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1501", "html_url": "https://github.com/huggingface/datasets/pull/1501", "diff_url": "https://github.com/huggingface/datasets/pull/1501.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1501.patch", "merged_at": "2020-12-14T21:20:59" }
1,501
true
adding polsum
https://github.com/huggingface/datasets/pull/1500
[ "@lhoestq thanks for the comments! Should be fixed in the latest commit, I assume the CI errors are unrelated." ]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1500", "html_url": "https://github.com/huggingface/datasets/pull/1500", "diff_url": "https://github.com/huggingface/datasets/pull/1500.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1500.patch", "merged_at": "2020-12-18T09:43:43" }
1,500
true
update the dataset id_newspapers_2018
Hi, I need to update the link to the dataset. The link in the previous PR was to a small test dataset. Thanks
https://github.com/huggingface/datasets/pull/1499
[]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1499", "html_url": "https://github.com/huggingface/datasets/pull/1499", "diff_url": "https://github.com/huggingface/datasets/pull/1499.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1499.patch", "merged_at": "2020-12-14T15:28:07" }
1,499
true
add stereoset
StereoSet is a dataset that measures stereotype bias in language models. StereoSet consists of 17,000 sentences that measure model preferences across gender, race, religion, and profession.
https://github.com/huggingface/datasets/pull/1498
[]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1498", "html_url": "https://github.com/huggingface/datasets/pull/1498", "diff_url": "https://github.com/huggingface/datasets/pull/1498.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1498.patch", "merged_at": "2020-12-18T10:03:53" }
1,498
true
adding fake-news-english-5
https://github.com/huggingface/datasets/pull/1497
[ "made suggested changes and created a PR here: https://github.com/huggingface/datasets/pull/1598" ]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1497", "html_url": "https://github.com/huggingface/datasets/pull/1497", "diff_url": "https://github.com/huggingface/datasets/pull/1497.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1497.patch", "merged_at": null }
1,497
true
Add Multi-Dimensional Gender Bias classification data
https://parl.ai/projects/md_gender/ Mostly has the ABOUT dimension, since the others are inferred from other datasets in most cases. I tried to keep the dummy data small, but one of the configs has 140 splits (> 56KB of data).
https://github.com/huggingface/datasets/pull/1496
[]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1496", "html_url": "https://github.com/huggingface/datasets/pull/1496", "diff_url": "https://github.com/huggingface/datasets/pull/1496.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1496.patch", "merged_at": "2020-12-14T21:14:55" }
1,496
true
Opus DGT added
Dataset : http://opus.nlpl.eu/DGT.php
https://github.com/huggingface/datasets/pull/1495
[ "merging since the CI is fixed on master" ]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1495", "html_url": "https://github.com/huggingface/datasets/pull/1495", "diff_url": "https://github.com/huggingface/datasets/pull/1495.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1495.patch", "merged_at": "2020-12-17T14:38:41" }
1,495
true
Added Opus Wikipedia
Dataset : http://opus.nlpl.eu/Wikipedia.php
https://github.com/huggingface/datasets/pull/1494
[ "merging since the CI is fixed on master" ]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1494", "html_url": "https://github.com/huggingface/datasets/pull/1494", "diff_url": "https://github.com/huggingface/datasets/pull/1494.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1494.patch", "merged_at": "2020-12-17T14:38:28" }
1,494
true
Added RONEC dataset.
https://github.com/huggingface/datasets/pull/1493
[ "Thanks for the PR @iliemihai . \r\n\r\nFew comments - \r\n\r\nCan you run - \r\n`python datasets-cli dummy_data ./datasets/ronec --auto_generate` to generate dummy data.\r\n\r\nAlso, before committing files run : \r\n`make style`\r\n`flake8 datasets`\r\nthen you can add and commit files.", "> Thanks for the PR @...
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1493", "html_url": "https://github.com/huggingface/datasets/pull/1493", "diff_url": "https://github.com/huggingface/datasets/pull/1493.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1493.patch", "merged_at": "2020-12-21T14:48:56" }
1,493
true
OPUS UBUNTU dataset
Dataset : http://opus.nlpl.eu/Ubuntu.php
https://github.com/huggingface/datasets/pull/1492
[ "merging since the CI is fixed on master" ]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1492", "html_url": "https://github.com/huggingface/datasets/pull/1492", "diff_url": "https://github.com/huggingface/datasets/pull/1492.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1492.patch", "merged_at": "2020-12-17T14:38:15" }
1,492
true
added opus GNOME data
Dataset : http://opus.nlpl.eu/GNOME.php
https://github.com/huggingface/datasets/pull/1491
[ "merging since the Ci is fixed on master" ]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1491", "html_url": "https://github.com/huggingface/datasets/pull/1491", "diff_url": "https://github.com/huggingface/datasets/pull/1491.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1491.patch", "merged_at": "2020-12-17T14:20:23" }
1,491
true
ADD: opus_rf dataset for translation
Passed all local tests. Hopefully passes all Circle CI tests too. Tried to keep the commit history clean.
https://github.com/huggingface/datasets/pull/1490
[ "merging since the CI is fixed on master" ]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1490", "html_url": "https://github.com/huggingface/datasets/pull/1490", "diff_url": "https://github.com/huggingface/datasets/pull/1490.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1490.patch", "merged_at": "2020-12-13T19:12:24" }
1,490
true
Fake news english 4
https://github.com/huggingface/datasets/pull/1489
[ "Thanks for the PR @MisbahKhan789 !\r\n\r\nFew comments to help you along (I'm NOT a maintainer, just offering help to unblock the process) :-\r\n - Could you re-run `make style` and fix the errors related to code quality specific to your dataset in the `datasets/fake_news_english` folder?\r\n(These seem to show er...
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1489", "html_url": "https://github.com/huggingface/datasets/pull/1489", "diff_url": "https://github.com/huggingface/datasets/pull/1489.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1489.patch", "merged_at": null }
1,489
true
Adding NELL
NELL is a knowledge base and knowledge graph along with sentences used to create the KB. See http://rtw.ml.cmu.edu/rtw/ for more details.
https://github.com/huggingface/datasets/pull/1488
[ "hi @lhoestq, I wanted to push another change to this branch b/c I found a bug in the parsing. I need to swap arg1 and arg2. I tried to git push -u origin nell but it didn't work. So I tried to do git push --force -u origin nell which seems to work, but nothing is happening to this branch. I think this is because i...
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1488", "html_url": "https://github.com/huggingface/datasets/pull/1488", "diff_url": "https://github.com/huggingface/datasets/pull/1488.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1488.patch", "merged_at": "2020-12-21T14:44:59" }
1,488
true
added conv_ai_3 dataset
Dataset : https://github.com/aliannejadi/ClariQ/
https://github.com/huggingface/datasets/pull/1487
[ "@lhoestq Thank you for suggesting changes. I fixed all the changes you suggested. Can you please review it again? ", "@lhoestq Thank you for reviewing and suggesting changes. I made the requested changes. Can you please review it again?", "Thanks @lhoestq for reviewing it again. I made the required changes. Ca...
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1487", "html_url": "https://github.com/huggingface/datasets/pull/1487", "diff_url": "https://github.com/huggingface/datasets/pull/1487.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1487.patch", "merged_at": "2020-12-28T09:38:39" }
1,487
true
hate speech 18 dataset
This is a fresh PR replacing #1339, because something went wrong there.
https://github.com/huggingface/datasets/pull/1486
[ "The error `tests/test_file_utils.py::TempSeedTest::test_tensorflow` just appeared because of tensorflow's update.\r\nOnce it's fixed on master we'll be free to merge this one", "It's fixed on master now :) \r\n\r\nmerging this once" ]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1486", "html_url": "https://github.com/huggingface/datasets/pull/1486", "diff_url": "https://github.com/huggingface/datasets/pull/1486.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1486.patch", "merged_at": "2020-12-14T19:43:18" }
1,486
true
Re-added wiki_movies dataset due to previous PR having changes from many other unassociated files.
https://github.com/huggingface/datasets/pull/1485
[]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1485", "html_url": "https://github.com/huggingface/datasets/pull/1485", "diff_url": "https://github.com/huggingface/datasets/pull/1485.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1485.patch", "merged_at": "2020-12-14T14:08:22" }
1,485
true
Add peer-read dataset
https://github.com/huggingface/datasets/pull/1484
[ "> Cool thank you !\r\n> \r\n> I left a few comments\r\n\r\nThank you @lhoestq addressed your comments. Haven't changed the code but I see that tests are failing now. Do I need to rebase or something? ", "The CI error is not related to your dataset and is fixed on master.\r\nYou can ignore it" ]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1484", "html_url": "https://github.com/huggingface/datasets/pull/1484", "diff_url": "https://github.com/huggingface/datasets/pull/1484.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1484.patch", "merged_at": "2020-12-21T09:40:50" }
1,484
true
Added Times of India News Headlines Dataset
Dataset name: Times of India News Headlines. Link: https://dataverse.harvard.edu/dataset.xhtml?persistentId=doi:10.7910/DVN/DPQMQH
https://github.com/huggingface/datasets/pull/1483
[ "@lhoestq @abhishekkrthakur what happened here ?\r\n", "@lhoestq everything alright here ?", "@tanmoyio please have patience. @lhoestq has to look at 150+ PRs and it may take time. The PR looks good to me but we wait for his confirmation :) 🤗 " ]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1483", "html_url": "https://github.com/huggingface/datasets/pull/1483", "diff_url": "https://github.com/huggingface/datasets/pull/1483.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1483.patch", "merged_at": "2020-12-14T18:08:07" }
1,483
true
Adding medical database chinese and english
Error in creating dummy dataset
https://github.com/huggingface/datasets/pull/1482
[ "Let me know it that helps !\r\nAlso feel free to ping me if you have other questions or if I can help you.", "Now I am getting an Assertion Error!\r\n![image](https://user-images.githubusercontent.com/16264631/101943915-f5bf5600-3c11-11eb-84e5-045bbc472162.png)\r\n", "All tests have passed. However, PyTest is ...
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1482", "html_url": "https://github.com/huggingface/datasets/pull/1482", "diff_url": "https://github.com/huggingface/datasets/pull/1482.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1482.patch", "merged_at": "2020-12-15T18:23:53" }
1,482
true
Fix ADD_NEW_DATASET to avoid rebasing once pushed
https://github.com/huggingface/datasets/pull/1481
[]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1481", "html_url": "https://github.com/huggingface/datasets/pull/1481", "diff_url": "https://github.com/huggingface/datasets/pull/1481.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1481.patch", "merged_at": "2021-01-07T10:10:20" }
1,481
true
Adding the Mac-Morpho dataset
Adding the Mac-Morpho dataset, a Portuguese language dataset for Part-of-speech tagging tasks
https://github.com/huggingface/datasets/pull/1480
[]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1480", "html_url": "https://github.com/huggingface/datasets/pull/1480", "diff_url": "https://github.com/huggingface/datasets/pull/1480.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1480.patch", "merged_at": "2020-12-21T10:03:37" }
1,480
true
Add narrativeQA
Redo of #1368, #309, #499. In redoing the dummy data a few times, I ended up adding a load of files to git. Hopefully this should work.
https://github.com/huggingface/datasets/pull/1479
[ "@lhoestq this is now only failing some random windows test (it appears to be somewhere in wnut_17)", "This is a connection error, you can ignore it :) \r\nThe level of activity on the lib is quite overwhelming, it stresses a bit the CI ^^" ]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1479", "html_url": "https://github.com/huggingface/datasets/pull/1479", "diff_url": "https://github.com/huggingface/datasets/pull/1479.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1479.patch", "merged_at": "2020-12-11T13:33:23" }
1,479
true
Inconsistent argument names.
Just find it a wee bit odd that in the transformers library `predictions` are those made by the model: https://github.com/huggingface/transformers/blob/master/src/transformers/trainer_utils.py#L51-L61, while in many datasets metrics they are the ground truth labels: https://github.com/huggingface/datasets/blob/c3f53792a744ede18d748a1133b6597fdd2d8d18/metrics/accuracy/accuracy.py#L31-L40. Do you think predictions & references should be swapped? I'd be willing to do some refactoring here if you agree. (See the sketch after this record.)
https://github.com/huggingface/datasets/issues/1478
[ "Also for the `Accuracy` metric the `accuracy_score` method should have its args in the opposite order so `accuracy_score(predictions, references,,,)`.", "Thanks for pointing this out ! 🕵🏻 \r\nPredictions and references should indeed be swapped in the docstring.\r\nHowever, the call to `accuracy_score` should n...
null
1,478
false
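A note on #1478 above: whichever docstring order is settled on, passing the arguments by keyword makes metric calls unambiguous. A minimal sketch with the accuracy metric, using the `load_metric` API current at the time of the issue:

```python
import datasets

accuracy = datasets.load_metric("accuracy")

# Keyword arguments make the predictions/references order explicit,
# so a docstring swap cannot silently flip the inputs.
result = accuracy.compute(predictions=[0, 1, 0], references=[0, 1, 1])
print(result)  # {'accuracy': 0.6666666666666666}
```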
Jigsaw toxicity pred
Managed to mess up my original pull request, opening a fresh one incorporating the changes suggested by @lhoestq.
https://github.com/huggingface/datasets/pull/1477
[]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1477", "html_url": "https://github.com/huggingface/datasets/pull/1477", "diff_url": "https://github.com/huggingface/datasets/pull/1477.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1477.patch", "merged_at": "2020-12-14T13:19:35" }
1,477
true
Add Spanish Billion Words Corpus
Add an unannotated Spanish corpus of nearly 1.5 billion words, compiled from different resources from the web.
https://github.com/huggingface/datasets/pull/1476
[]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1476", "html_url": "https://github.com/huggingface/datasets/pull/1476", "diff_url": "https://github.com/huggingface/datasets/pull/1476.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1476.patch", "merged_at": "2020-12-14T13:14:31" }
1,476
true
Fix XML iterparse in opus_dogc dataset
I forgot to add `elem.clear()` to clear the element from memory.
https://github.com/huggingface/datasets/pull/1475
[]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1475", "html_url": "https://github.com/huggingface/datasets/pull/1475", "diff_url": "https://github.com/huggingface/datasets/pull/1475.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1475.patch", "merged_at": "2020-12-17T11:28:46" }
1,475
true
Create JSON dummy data without loading all dataset in memory
See #1442. The statement `json.load()` loads **all the file content in memory**. In order to avoid this, the file content should be parsed **iteratively**, e.g. by using the `ijson` library. I have refactored the code into a function `_create_json_dummy_data` and I have added some tests. (A sketch of the iterative pattern follows this record.)
https://github.com/huggingface/datasets/pull/1474
[]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1474", "html_url": "https://github.com/huggingface/datasets/pull/1474", "diff_url": "https://github.com/huggingface/datasets/pull/1474.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1474.patch", "merged_at": null }
1,474
true
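A minimal sketch of the iterative approach described in #1474 above; the top-level `"data"` key, the helper name, and the record count are illustrative assumptions, not the merged implementation:

```python
import json

import ijson  # streaming JSON parser

def create_json_dummy_data(src_path, dst_path, n_records=2, json_field="data"):
    # ijson.items() yields one record at a time, so the source file is
    # never fully loaded into memory the way json.load() would load it.
    records = []
    with open(src_path, "rb") as src:
        for i, record in enumerate(ijson.items(src, f"{json_field}.item")):
            if i >= n_records:
                break
            records.append(record)
    with open(dst_path, "w") as dst:
        # ijson parses numbers as decimal.Decimal; coerce on re-serialization.
        json.dump({json_field: records}, dst, default=float)
```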
add srwac
https://github.com/huggingface/datasets/pull/1473
[ "Connection error failed. Need rerun", "merging since the CI is fixed on master" ]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1473", "html_url": "https://github.com/huggingface/datasets/pull/1473", "diff_url": "https://github.com/huggingface/datasets/pull/1473.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1473.patch", "merged_at": "2020-12-17T11:40:59" }
1,473
true
add Srwac
https://github.com/huggingface/datasets/pull/1472
[]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1472", "html_url": "https://github.com/huggingface/datasets/pull/1472", "diff_url": "https://github.com/huggingface/datasets/pull/1472.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1472.patch", "merged_at": null }
1,472
true
Adding the HAREM dataset
Adding the HAREM dataset, a Portuguese language dataset for NER tasks
https://github.com/huggingface/datasets/pull/1471
[ "Thanks for the changes !\r\n\r\nSorry if I wasn't clear about the suggestion of adding the `raw` dataset as well.\r\nBy `raw` I meant the dataset with its original features, i.e. not tokenized to follow the conll format for NER.\r\nThe `raw` dataset has data fields `doc_text`, `doc_id` and `entities`.", "Alright...
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1471", "html_url": "https://github.com/huggingface/datasets/pull/1471", "diff_url": "https://github.com/huggingface/datasets/pull/1471.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1471.patch", "merged_at": "2020-12-22T10:37:33" }
1,471
true
Add wiki lingua dataset
Hello @lhoestq , I am opening a fresh pull request as advised in my original PR https://github.com/huggingface/datasets/pull/1308 Thanks
https://github.com/huggingface/datasets/pull/1470
[ "it’s failing because of `RemoteDatasetTest.test_load_dataset_orange_sum`\r\nwhich i think is not the dataset you are doing a PR for. Try rebasing with:\r\n```\r\ngit fetch upstream\r\ngit rebase upstream/master\r\ngit push -u -f origin your_branch\r\n```", "> it’s failing because of `RemoteDatasetTest.test_load_...
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1470", "html_url": "https://github.com/huggingface/datasets/pull/1470", "diff_url": "https://github.com/huggingface/datasets/pull/1470.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1470.patch", "merged_at": null }
1,470
true
ADD: Wino_bias dataset
Updated PR to counter messed up history of previous one (https://github.com/huggingface/datasets/pull/1235) due to rebase. Removed manual downloading of dataset.
https://github.com/huggingface/datasets/pull/1469
[ "merging since the CI is fixed on master" ]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1469", "html_url": "https://github.com/huggingface/datasets/pull/1469", "diff_url": "https://github.com/huggingface/datasets/pull/1469.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1469.patch", "merged_at": "2020-12-13T19:13:57" }
1,469
true
add Indonesian newspapers (id_newspapers_2018)
The dataset contains around 500K articles (136M words) from 7 Indonesian newspapers. The uncompressed size of the 500K JSON files (newspapers-json.tgz) is around 2.2GB.
https://github.com/huggingface/datasets/pull/1468
[ "Looks like there's a `Path` issue on windows. Could you try switching to\r\n`glob.glob(os.path.join(article_dir, \"*.json\"))`", "> Looks like there's a `Path` issue on windows. Could you try switching to\r\n> `glob.glob(os.path.join(article_dir, \"*.json\"))`\r\n\r\nThanks, I replaced it with glob. Let's see if...
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1468", "html_url": "https://github.com/huggingface/datasets/pull/1468", "diff_url": "https://github.com/huggingface/datasets/pull/1468.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1468.patch", "merged_at": "2020-12-11T17:04:41" }
1,468
true
adding snow_simplified_japanese_corpus
Adding the simplified Japanese corpora "SNOW T15" and "SNOW T23". They contain original Japanese, simplified Japanese, and original English (the original text is taken from an en-ja translation corpus). Hence, they can be used not only for Japanese simplification but also for en-ja translation.
- http://www.jnlp.org/SNOW/T15
- http://www.jnlp.org/SNOW/T23
https://github.com/huggingface/datasets/pull/1467
[ "merging since the CI is fixed on master", "Thank you for the updates and merging!" ]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1467", "html_url": "https://github.com/huggingface/datasets/pull/1467", "diff_url": "https://github.com/huggingface/datasets/pull/1467.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1467.patch", "merged_at": "2020-12-17T11:25:34" }
1,467
true
Add Turkish News Category Dataset (270K). Updates were made for review…
This PR adds the **Turkish News Category Dataset (270K)**, a text classification dataset by me and @yavuzKomecoglu. It is a Turkish news dataset consisting of **273,601 news articles in 17 categories**, compiled from printed media and news websites between 2010 and 2017 by the [Interpress](https://www.interpress.com/) media monitoring company. **Note**: Resubmitted as a clean version of the previous pull request (#1419). @SBrandeis @lhoestq
https://github.com/huggingface/datasets/pull/1466
[ "@SBrandeis, What exactly is it that makes the tests fail? Can you help me please?", "These errors\r\n```\r\n=========================== short test summary info ============================\r\nFAILED tests/test_dataset_common.py::LocalDatasetTest::test_load_dataset_ajgt_twitter_ar\r\nFAILED tests/test_dataset_com...
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1466", "html_url": "https://github.com/huggingface/datasets/pull/1466", "diff_url": "https://github.com/huggingface/datasets/pull/1466.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1466.patch", "merged_at": "2020-12-11T14:27:14" }
1,466
true
Add clean menyo20k data
New Clean PR for menyo20k_mt
https://github.com/huggingface/datasets/pull/1465
[ "@lhoestq rerun the tests " ]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1465", "html_url": "https://github.com/huggingface/datasets/pull/1465", "diff_url": "https://github.com/huggingface/datasets/pull/1465.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1465.patch", "merged_at": "2020-12-14T10:30:21" }
1,465
true
Reddit jokes
196k Reddit jokes dataset. Dataset link: https://raw.githubusercontent.com/taivop/joke-dataset/master/reddit_jokes.json
https://github.com/huggingface/datasets/pull/1464
[ "@lhoestq would you please rerun the test, ", "I re-started the test.\r\n\r\n@lhoestq let's hold off on merging for now though, having a conversation on Slack about some of the offensive content in the dataset and how/whether we want to present it." ]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1464", "html_url": "https://github.com/huggingface/datasets/pull/1464", "diff_url": "https://github.com/huggingface/datasets/pull/1464.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1464.patch", "merged_at": null }
1,464
true
Adding enriched_web_nlg features + handling xml bugs
This PR adds features of the enriched_web_nlg dataset that were not present yet (most notably sorted RDF triplet sets), and deals with some XML issues that led to returning no data in cases where surgery could be performed to salvage it.
https://github.com/huggingface/datasets/pull/1463
[]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1463", "html_url": "https://github.com/huggingface/datasets/pull/1463", "diff_url": "https://github.com/huggingface/datasets/pull/1463.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1463.patch", "merged_at": "2020-12-17T10:44:33" }
1,463
true
Added conv ai 2 (Again)
The original PR: https://github.com/huggingface/datasets/pull/1383. Reason for creating it again: due to a rebasing issue against master, all the previous commits got added to the branch.
https://github.com/huggingface/datasets/pull/1462
[ "Looking perfect to me, need to rerun the tests\r\n", "Thanks, @tanmoyio. \r\nHow do I rerun the tests? Should I change something or push a new commit?", "@rkc007 you don't need to rerun it, @lhoestq @yjernite will rerun it, as there are huge number of PRs in the queue it might take lil bit of time. ", "ive j...
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1462", "html_url": "https://github.com/huggingface/datasets/pull/1462", "diff_url": "https://github.com/huggingface/datasets/pull/1462.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1462.patch", "merged_at": null }
1,462
true
Adding NewsQA dataset
Since the dataset has legal restrictions on circulating the original data, it has to be manually downloaded by the user and then loaded into the library.
https://github.com/huggingface/datasets/pull/1461
[ "Generate the dummy dataset then regenerate the dataset_info.json file, ", "> Generate the dummy dataset then regenerate the dataset_info.json file,\r\n\r\nThe pytest scripts do not accept manual directory inputs for the data provided manually. This is why the tests fail. ", "don't use the --auto-generate argum...
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1461", "html_url": "https://github.com/huggingface/datasets/pull/1461", "diff_url": "https://github.com/huggingface/datasets/pull/1461.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1461.patch", "merged_at": "2020-12-17T18:27:36" }
1,461
true
add Bengali Hate Speech dataset
https://github.com/huggingface/datasets/pull/1460
[ "@lhoestq I think you might want to look at the dataset, and the first data instances mentioned in the README.md is very much offensive. Though this dataset is based on hate speech but I found the dataset heavily disturbing as Bengali is my native language.", "Hi @tanmoyio indeed you're right.\r\nWe should *at le...
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1460", "html_url": "https://github.com/huggingface/datasets/pull/1460", "diff_url": "https://github.com/huggingface/datasets/pull/1460.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1460.patch", "merged_at": "2021-01-04T14:08:29" }
1,460
true
Add Google Conceptual Captions Dataset
https://github.com/huggingface/datasets/pull/1459
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1459", "html_url": "https://github.com/huggingface/datasets/pull/1459", "diff_url": "https://github.com/huggingface/datasets/pull/1459.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1459.patch", "merged_at": "2022-04-14T13:07:49" }
1,459
true
Add id_nergrit_corpus
Nergrit Corpus is a dataset collection for Indonesian Named Entity Recognition, Statement Extraction, and Sentiment Analysis. Recently my PR for id_nergrit_ner was accepted and merged into the main branch. id_nergrit_ner has only one dataset (NER); this new PR renames the dataset from id_nergrit_ner to id_nergrit_corpus and adds the 2 other remaining datasets (Statement Extraction and Sentiment Analysis).
https://github.com/huggingface/datasets/pull/1458
[ "merging since the CI is fixed on master" ]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1458", "html_url": "https://github.com/huggingface/datasets/pull/1458", "diff_url": "https://github.com/huggingface/datasets/pull/1458.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1458.patch", "merged_at": "2020-12-17T10:45:15" }
1,458
true
add hrenwac_para
https://github.com/huggingface/datasets/pull/1457
[ "duplicate" ]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1457", "html_url": "https://github.com/huggingface/datasets/pull/1457", "diff_url": "https://github.com/huggingface/datasets/pull/1457.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1457.patch", "merged_at": null }
1,457
true
Add CC100 Dataset
Closes #773
https://github.com/huggingface/datasets/pull/1456
[]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1456", "html_url": "https://github.com/huggingface/datasets/pull/1456", "diff_url": "https://github.com/huggingface/datasets/pull/1456.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1456.patch", "merged_at": "2020-12-14T10:20:07" }
1,456
true
Add HEAD-QA: A Healthcare Dataset for Complex Reasoning
HEAD-QA is a multi-choice HEAlthcare Dataset; the questions come from exams to access a specialized position in the Spanish healthcare system.
https://github.com/huggingface/datasets/pull/1455
[ "Thank you for your review @lhoestq, I've changed the types of `qid` and `ra` and now they are integers as `aid`.\r\n\r\nReady for another review!" ]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1455", "html_url": "https://github.com/huggingface/datasets/pull/1455", "diff_url": "https://github.com/huggingface/datasets/pull/1455.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1455.patch", "merged_at": "2020-12-17T16:58:11" }
1,455
true
Add kinnews_kirnews
Add kinnews and kirnews
https://github.com/huggingface/datasets/pull/1454
[ "merging since the CI is fixed on master" ]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1454", "html_url": "https://github.com/huggingface/datasets/pull/1454", "diff_url": "https://github.com/huggingface/datasets/pull/1454.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1454.patch", "merged_at": "2020-12-17T18:34:16" }
1,454
true
Adding ethos dataset clean
I addressed the comments on PR #1318.
https://github.com/huggingface/datasets/pull/1453
[ "> Thanks !\r\n\r\nThanks as well for your hard work 😊!!", "merging since the CI is fixed on master" ]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1453", "html_url": "https://github.com/huggingface/datasets/pull/1453", "diff_url": "https://github.com/huggingface/datasets/pull/1453.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1453.patch", "merged_at": "2020-12-14T10:31:24" }
1,453
true
SNLI dataset contains labels with value -1
```
import datasets

nli_data = datasets.load_dataset("snli")
train_data = nli_data['train']
train_labels = train_data['label']
label_set = set(train_labels)
print(label_set)
```

**Output:** `{0, 1, 2, -1}`
https://github.com/huggingface/datasets/issues/1452
[ "I believe the `-1` label is used for missing/NULL data as per HuggingFace Dataset conventions. If I recall correctly SNLI has some entries with no (gold) labels in the dataset.", "Ah, you're right. The dataset has some pairs with missing labels. Thanks for reminding me." ]
null
1,452
false
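As the comments on #1452 explain, `-1` is the sentinel for pairs with no gold label. A minimal sketch for dropping them before training:

```python
import datasets

nli_data = datasets.load_dataset("snli")

# -1 marks examples without a gold label; filter them out.
train_data = nli_data["train"].filter(lambda example: example["label"] != -1)
print(set(train_data["label"]))  # {0, 1, 2}
```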
Add European Center for Disease Control and Preventions's (ECDC) Translation Memory dataset
ECDC-TM homepage: https://ec.europa.eu/jrc/en/language-technologies/ecdc-translation-memory
https://github.com/huggingface/datasets/pull/1451
[]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1451", "html_url": "https://github.com/huggingface/datasets/pull/1451", "diff_url": "https://github.com/huggingface/datasets/pull/1451.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1451.patch", "merged_at": "2020-12-11T16:50:09" }
1,451
true
Fix version in bible_para
https://github.com/huggingface/datasets/pull/1450
[]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1450", "html_url": "https://github.com/huggingface/datasets/pull/1450", "diff_url": "https://github.com/huggingface/datasets/pull/1450.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1450.patch", "merged_at": "2020-12-11T16:40:40" }
1,450
true
add W&I + LOCNESS dataset (BEA-2019 workshop shared task on GEC) [PROPER]
- **Name:** W&I + LOCNESS dataset (from the BEA-2019 workshop shared task on GEC)
- **Description:** https://www.cl.cam.ac.uk/research/nl/bea2019st/#data
- **Paper:** https://www.aclweb.org/anthology/W19-4406/
- **Motivation:** This is a recent dataset (actually two in one) for grammatical error correction and is used for benchmarking in this field of NLP.

### Checkbox

- [x] Create the dataset script `/datasets/my_dataset/my_dataset.py` using the template
- [x] Fill the `_DESCRIPTION` and `_CITATION` variables
- [x] Implement `_infos()`, `_split_generators()` and `_generate_examples()`
- [x] Make sure that the `BUILDER_CONFIGS` class attribute is filled with the different configurations of the dataset and that the `BUILDER_CONFIG_CLASS` is specified if there is a custom config class.
- [x] Generate the metadata file `dataset_infos.json` for all configurations
- [x] Generate the dummy data `dummy_data.zip` files to have the dataset script tested and that they don't weigh too much (<50KB)
- [x] Add the dataset card `README.md` using the template: fill the tags and the various paragraphs
- [x] Both tests for the real data and the dummy data pass.
https://github.com/huggingface/datasets/pull/1449
[ "linter your code with flake8 and also run the commands present in Makefile for proper formatting \r\n", "merging since the CI is fixed on master" ]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1449", "html_url": "https://github.com/huggingface/datasets/pull/1449", "diff_url": "https://github.com/huggingface/datasets/pull/1449.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1449.patch", "merged_at": "2020-12-11T17:07:46" }
1,449
true
add thai_toxicity_tweet
Thai Toxicity Tweet Corpus contains 3,300 tweets (506 tweets with texts missing) annotated by humans with guidelines including a 44-word dictionary. The author obtained 2,027 and 1,273 toxic and non-toxic tweets, respectively; these were labeled by three annotators. The result of corpus analysis indicates that tweets that include toxic words are not always toxic. Further, it is more likely that a tweet is toxic if it contains toxic words indicating their original meaning. Moreover, disagreements in annotation are primarily because of sarcasm, unclear existing target, and word sense ambiguity.

Notes from data cleaner: The data was included in [huggingface/datasets](https://www.github.com/huggingface/datasets) in Dec 2020. By this time, 506 of the tweets were no longer publicly available. We denote these by `TWEET_NOT_FOUND` in `tweet_text`. Processing can be found at [this PR](https://github.com/tmu-nlp/ThaiToxicityTweetCorpus/pull/1).
https://github.com/huggingface/datasets/pull/1448
[]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1448", "html_url": "https://github.com/huggingface/datasets/pull/1448", "diff_url": "https://github.com/huggingface/datasets/pull/1448.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1448.patch", "merged_at": "2020-12-11T16:21:27" }
1,448
true
Update step-by-step guide for windows
Update the step-by-step guide for Windows to give an alternative to `make style`. (See the note after this record.)
https://github.com/huggingface/datasets/pull/1447
[ "Hi @thomwolf, for simplification purposes, I think you could remove the \"`pip install ...`\" steps from this commit, 'cause these deps (black, isort, flake8) are already installed on `pip install -e \".[dev]\"` on the [Start by preparing your environment](https://github.com/huggingface/datasets/blob/704107f924e74...
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1447", "html_url": "https://github.com/huggingface/datasets/pull/1447", "diff_url": "https://github.com/huggingface/datasets/pull/1447.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1447.patch", "merged_at": "2020-12-10T09:31:14" }
1,447
true
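A note on #1447 above: on Windows, where `make` is often unavailable, the usual alternative is to invoke the tools the `style` target wraps directly from the repo root, e.g. `black datasets/<your_dataset>` followed by `isort datasets/<your_dataset>`, plus `flake8 datasets/<your_dataset>` to lint. Here `<your_dataset>` is a placeholder and the exact flags the Makefile passes may differ; as the comment above notes, black, isort, and flake8 are already installed by `pip install -e ".[dev]"`.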
Add Bing Coronavirus Query Set
https://github.com/huggingface/datasets/pull/1446
[]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1446", "html_url": "https://github.com/huggingface/datasets/pull/1446", "diff_url": "https://github.com/huggingface/datasets/pull/1446.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1446.patch", "merged_at": "2020-12-11T17:03:07" }
1,446
true
Added dataset clickbait_news_bg
https://github.com/huggingface/datasets/pull/1445
[ "Looks like this PR includes changes about many other files than the ones for clickbait_news_bg\r\n\r\nCan you create another branch and another PR please ?", "I created a new branch with the dataset code and submitted a new PR for it: https://github.com/huggingface/datasets/pull/1568" ]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1445", "html_url": "https://github.com/huggingface/datasets/pull/1445", "diff_url": "https://github.com/huggingface/datasets/pull/1445.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1445.patch", "merged_at": null }
1,445
true
FileNotFound remotly, can't load a dataset
```py
!pip install datasets
import datasets as ds
corpus = ds.load_dataset('large_spanish_corpus')
```

gives the error

> FileNotFoundError: Couldn't find file locally at large_spanish_corpus/large_spanish_corpus.py, or remotely at https://raw.githubusercontent.com/huggingface/datasets/1.1.3/datasets/large_spanish_corpus/large_spanish_corpus.py or https://s3.amazonaws.com/datasets.huggingface.co/datasets/datasets/large_spanish_corpus/large_spanish_corpus.py

Not just `large_spanish_corpus`, `zest` too, but `squad` is available. This happened both on Colab and locally.
https://github.com/huggingface/datasets/issues/1444
[ "This dataset will be available in version-2 of the library. If you want to use this dataset now, install datasets from `master` branch rather.\r\n\r\nCommand to install datasets from `master` branch:\r\n`!pip install git+https://github.com/huggingface/datasets.git@master`", "Closing this, thanks @VasudevGupta7 "...
null
1,444
false
Add OPUS Wikimedia Translations Dataset
null
https://github.com/huggingface/datasets/pull/1443
[ "Thanks for your contribution, @abhishekkrthakur. Are you still interested in adding this dataset?\r\n\r\nWe are removing the dataset scripts from this GitHub repo and moving them to the Hugging Face Hub: https://huggingface.co/datasets\r\n\r\nWe would suggest you create this dataset there. Please, feel free to tel...
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1443", "html_url": "https://github.com/huggingface/datasets/pull/1443", "diff_url": "https://github.com/huggingface/datasets/pull/1443.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1443.patch", "merged_at": null }
1,443
true
Create XML dummy data without loading all dataset in memory
While I was adding an XML dataset, I noticed that the whole dataset was loaded into memory during the dummy data generation process (using nearly all my laptop's RAM). Looking at the code, I found that the cause is the use of `ET.parse()`. This method loads **all the file content in memory**. To fix this, I have refactored the code to use `ET.iterparse()` instead, which **parses the file content incrementally**. I have also implemented a test. (A sketch of the pattern follows this record.)
https://github.com/huggingface/datasets/pull/1442
[]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1442", "html_url": "https://github.com/huggingface/datasets/pull/1442", "diff_url": "https://github.com/huggingface/datasets/pull/1442.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1442.patch", "merged_at": "2020-12-17T09:59:43" }
1,442
true
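A minimal sketch of the incremental pattern from #1442 (with the `elem.clear()` fix from #1475); the record tag `"tu"` and the helper name are placeholders, not the actual dataset code:

```python
import xml.etree.ElementTree as ET

def iter_records(xml_path, record_tag="tu"):
    # iterparse() emits elements as the file is read, so the whole
    # document is never held in memory at once.
    for _event, elem in ET.iterparse(xml_path):
        if elem.tag == record_tag:
            yield elem.text
            # Without clear(), finished elements stay attached to the
            # tree and memory grows as if the file were parsed eagerly.
            elem.clear()
```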
Add Igbo-English Machine Translation Dataset
https://github.com/huggingface/datasets/pull/1441
[]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1441", "html_url": "https://github.com/huggingface/datasets/pull/1441", "diff_url": "https://github.com/huggingface/datasets/pull/1441.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1441.patch", "merged_at": "2020-12-11T15:54:52" }
1,441
true
Adding english plaintext jokes dataset
This PR adds a dataset of 200k English plaintext jokes from three sources: Reddit, Stupidstuff, and Wocka. Link: https://github.com/taivop/joke-dataset This is my second PR; my first was [#1269](https://github.com/huggingface/datasets/pull/1269).
https://github.com/huggingface/datasets/pull/1440
[ "Hi @purvimisal, thanks for your contributions!\r\n\r\nThis jokes dataset has come up before, and after a conversation with the initial submitter, we decided not to add it then. Humor is important, but looking at the actual data points in this set raises several concerns :) \r\n\r\nThe main issue is the Reddit part...
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1440", "html_url": "https://github.com/huggingface/datasets/pull/1440", "diff_url": "https://github.com/huggingface/datasets/pull/1440.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1440.patch", "merged_at": null }
1,440
true
Update README.md
1k-10k -> 1k-1M. 3 separate configs are available, with min. 1K and max. 211.3k examples.
https://github.com/huggingface/datasets/pull/1439
[]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1439", "html_url": "https://github.com/huggingface/datasets/pull/1439", "diff_url": "https://github.com/huggingface/datasets/pull/1439.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1439.patch", "merged_at": "2020-12-11T15:22:53" }
1,439
true
A descriptive name for my changes
hind_encorp resubmitted
https://github.com/huggingface/datasets/pull/1438
[ "I have noticed that the master branch of your fork has diverged from the one of the repo. This is probably what causes the mess in the github diff \"Files changed\".\r\n\r\nI would suggest to re-fork the `datasets` repo and recreate a new branch and a new PR. ", "You're pretty close to having all things ready to...
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1438", "html_url": "https://github.com/huggingface/datasets/pull/1438", "diff_url": "https://github.com/huggingface/datasets/pull/1438.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1438.patch", "merged_at": null }
1,438
true
Add Indosum dataset
null
https://github.com/huggingface/datasets/pull/1437
[ "Hi @prasastoadi have you had a chance to take a look at my suggestions ?\r\n\r\nFeel free to ping ;e if you have questions or when you're ready for a review", "Thanks for your contribution, @prasastoadi. Are you still interested in adding this dataset?\r\n\r\nWe are removing the dataset scripts from this GitHub ...
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1437", "html_url": "https://github.com/huggingface/datasets/pull/1437", "diff_url": "https://github.com/huggingface/datasets/pull/1437.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1437.patch", "merged_at": null }
1,437
true
add ALT
ALT dataset -- https://www2.nict.go.jp/astrec-att/member/mutiyama/ALT/
https://github.com/huggingface/datasets/pull/1436
[ "The errors in de CI are fixed on master so it's fine" ]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1436", "html_url": "https://github.com/huggingface/datasets/pull/1436", "diff_url": "https://github.com/huggingface/datasets/pull/1436.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1436.patch", "merged_at": "2020-12-11T15:52:41" }
1,436
true
Add FreebaseQA dataset
This PR adds the FreebaseQA dataset: A Trivia-type QA Data Set over the Freebase Knowledge Graph
Repo: https://github.com/kelvin-jiang/FreebaseQA
Paper: https://www.aclweb.org/anthology/N19-1028.pdf

## TODO: create dummy data

Error encountered when running `python datasets-cli dummy_data datasets/freebase_qa --auto_generate`:
```
f"Couldn't parse columns {list(json_data.keys())}. "
ValueError: Couldn't parse columns ['Dataset', 'Version', 'Questions']. Maybe specify which json field must be used to read the data with --json_field <my_field>.
```
(See the note after this record.)
https://github.com/huggingface/datasets/pull/1435
[ "@yjernite @lhoestq Any suggestions on how to get the dummy data generator to recognize the columns? The structure of the json is:\r\n```\r\n{\r\n \"Dataset\": \"FreebaseQA-eval\", \r\n \"Version\": \"1.0\", \r\n \"Questions\": [\r\n {\r\n \"Question-ID\": \"FreebaseQA-eval-0\", \r\n \"RawQuestion\"...
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1435", "html_url": "https://github.com/huggingface/datasets/pull/1435", "diff_url": "https://github.com/huggingface/datasets/pull/1435.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1435.patch", "merged_at": null }
1,435
true
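A note on the `ValueError` quoted in #1435: the error message itself suggests the fix. Since the examples sit under the `Questions` key (as shown in the JSON structure quoted in the comments), passing that field to the generator, e.g. `python datasets-cli dummy_data datasets/freebase_qa --auto_generate --json_field Questions`, should let it parse the file. The flag spelling is taken directly from the error message; whether it resolves this particular case is untested here.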
add_sofc_materials_articles
adding [SOFC-Exp Corpus](https://arxiv.org/abs/2006.03039)
https://github.com/huggingface/datasets/pull/1434
[ "Hey @lhoestq , thanks for the feedback on this! I updated the `_generate_examples` with some comments on the process, and reduced the `dummy_data.zip` down quite a bit as well. \r\n\r\nFor the dummy data, I reduced the text to only three sentences, and aligned the corresponding entity/token/sentence annotations to...
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1434", "html_url": "https://github.com/huggingface/datasets/pull/1434", "diff_url": "https://github.com/huggingface/datasets/pull/1434.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1434.patch", "merged_at": "2020-12-17T09:59:54" }
1,434
true
Adding the ASSIN 2 dataset
Adding the ASSIN 2 dataset, a Portuguese language dataset for Natural Language Inference and Semantic Similarity Scoring
https://github.com/huggingface/datasets/pull/1433
[]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1433", "html_url": "https://github.com/huggingface/datasets/pull/1433", "diff_url": "https://github.com/huggingface/datasets/pull/1433.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1433.patch", "merged_at": "2020-12-11T14:32:56" }
1,433
true
Adding journalists questions dataset
This is my first dataset to be added to HF.
https://github.com/huggingface/datasets/pull/1432
[ "@lhoestq Thanks a lot for checking! I hope I addressed all your comments. ", "merging since the CI is fixed on master" ]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1432", "html_url": "https://github.com/huggingface/datasets/pull/1432", "diff_url": "https://github.com/huggingface/datasets/pull/1432.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1432.patch", "merged_at": "2020-12-14T13:51:04" }
1,432
true
Ar cov19
Adding the ArCOV-19 dataset. ArCOV-19 is an Arabic COVID-19 Twitter dataset that covers the period from the 27th of January till the 30th of April 2020. ArCOV-19 is the first publicly available Arabic Twitter dataset covering the COVID-19 pandemic that includes over 1M tweets alongside the propagation networks of the most popular subset of them (i.e., most retweeted and liked). The propagation networks include both retweets and conversational threads (i.e., threads of replies). ArCOV-19 is designed to enable research under several domains including natural language processing, information retrieval, and social computing, among others.
https://github.com/huggingface/datasets/pull/1431
[ "merging since the CI is fixed on master" ]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1431", "html_url": "https://github.com/huggingface/datasets/pull/1431", "diff_url": "https://github.com/huggingface/datasets/pull/1431.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1431.patch", "merged_at": "2020-12-11T15:01:23" }
1,431
true
Add 1.5 billion words Arabic corpus
Needs https://github.com/huggingface/datasets/pull/1429 to work.
https://github.com/huggingface/datasets/pull/1430
[ "Can't pass dummy data tests. For the instructions, it asks me to generate the following file `dummy_data/Youm7_XML_utf_8.rar/Youm7_utf_8.xml` which is strange, any ideas @lhoestq ?\r\n\r\ncc: I tested the data locally and it works, maybe the dummy tests doesn't support `rar` ? ", "In the dummy_data.zip files you...
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1430", "html_url": "https://github.com/huggingface/datasets/pull/1430", "diff_url": "https://github.com/huggingface/datasets/pull/1430.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1430.patch", "merged_at": "2020-12-22T10:03:59" }
1,430
true
extract rar files
Unfortunately, I didn't find any native Python libraries for extracting rar files, so the user has to manually install `unrar` (`sudo apt-get install unrar`). Discussion with @yjernite is in the Slack channel. (A sketch of the approach follows this record.)
https://github.com/huggingface/datasets/pull/1429
[]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1429", "html_url": "https://github.com/huggingface/datasets/pull/1429", "diff_url": "https://github.com/huggingface/datasets/pull/1429.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1429.patch", "merged_at": "2020-12-18T15:03:37" }
1,429
true
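A minimal sketch of the approach described in #1429 above, shelling out to the system `unrar` binary; this is an illustration of the technique, not the code merged in the PR:

```python
import os
import subprocess

def extract_rar(rar_path, dest_dir):
    os.makedirs(dest_dir, exist_ok=True)
    # `x` extracts with full paths; `-o+` overwrites existing files.
    # unrar treats its last argument as a destination directory only
    # when it ends with a path separator.
    subprocess.run(["unrar", "x", "-o+", rar_path, dest_dir + os.sep], check=True)
```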
Add twi wordsim353
Add twi WordSim 353
https://github.com/huggingface/datasets/pull/1428
[]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1428", "html_url": "https://github.com/huggingface/datasets/pull/1428", "diff_url": "https://github.com/huggingface/datasets/pull/1428.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1428.patch", "merged_at": "2020-12-11T13:57:32" }
1,428
true
Hebrew project BenYehuda
Added Hebrew corpus from https://github.com/projectbenyehuda/public_domain_dump
https://github.com/huggingface/datasets/pull/1427
[ "merging since the CI is fixed on master" ]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1427", "html_url": "https://github.com/huggingface/datasets/pull/1427", "diff_url": "https://github.com/huggingface/datasets/pull/1427.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1427.patch", "merged_at": "2020-12-11T17:39:23" }
1,427
true
init commit for MultiReQA for third PR with all issues fixed
3rd PR w.r.t. PR #1349, with all the issues fixed, as #1349 had uploaded other files along with the multi_re_qa dataset.
https://github.com/huggingface/datasets/pull/1426
[ "good dataset card as well :) ", "@lhoestq Thank you :) " ]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1426", "html_url": "https://github.com/huggingface/datasets/pull/1426", "diff_url": "https://github.com/huggingface/datasets/pull/1426.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1426.patch", "merged_at": "2020-12-11T13:37:08" }
1,426
true
Add german common crawl dataset
Adding a subpart of the Common Crawl which was extracted with this repo https://github.com/facebookresearch/cc_net and additionally filtered for duplicates
https://github.com/huggingface/datasets/pull/1425
[ "Hi @Phil1108 !\r\nHave you had a chance to take a look at my suggestions ?\r\nFeel free to ping me if you have questions or if you're ready for a review\r\n\r\nThanks again for adding this dataset, this one is very useful !", "> \r\n> \r\n> Hi @Phil1108 !\r\n> Have you had a chance to take a look at my suggestio...
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1425", "html_url": "https://github.com/huggingface/datasets/pull/1425", "diff_url": "https://github.com/huggingface/datasets/pull/1425.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1425.patch", "merged_at": null }
1,425
true
Add yoruba wordsim353
Added WordSim-353 evaluation dataset for Yoruba
https://github.com/huggingface/datasets/pull/1424
[]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1424", "html_url": "https://github.com/huggingface/datasets/pull/1424", "diff_url": "https://github.com/huggingface/datasets/pull/1424.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1424.patch", "merged_at": null }
1,424
true
Imppres
2nd PR ever! Hopefully I'm starting to get the hang of this. This is for the IMPPRES dataset. Please let me know of any corrections or changes that need to be made.
https://github.com/huggingface/datasets/pull/1423
[ "Feel free to ping me once you're ready for another review :) ", "For sure! Gonna work on this now!", "I incorporated all the changes but when I go to rebase I get the following error:\r\n```python\r\naclifton@pop-os:~/hf_datasets_sprint/datasets$ git rebase upstream/master\r\nerror: cannot rebase: You have uns...
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1423", "html_url": "https://github.com/huggingface/datasets/pull/1423", "diff_url": "https://github.com/huggingface/datasets/pull/1423.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1423.patch", "merged_at": "2020-12-17T18:27:14" }
1,423
true
Can't map dataset (loaded from csv)
Hello! I am trying to load a single csv file with two columns: ('label': str, 'text': str), where label is a str taking one of two possible classes. The steps below are similar to [this notebook](https://colab.research.google.com/drive/1-JIJlao4dI-Ilww_NnTc0rxtp-ymgDgM?usp=sharing), where a bert model and tokenizer are used to classify the loaded imdb dataset. The only difference is that the dataset is loaded from a .csv file. Here is how I load it:

```python
data_path = 'data.csv'
data = pd.read_csv(data_path)

# process class name to indices
classes = ['neg', 'pos']
class_to_idx = { cl: i for i, cl in enumerate(classes) }

# now data is like {'label': int, 'text': str}
data['label'] = data['label'].apply(lambda x: class_to_idx[x])

# load dataset and map it with the defined `tokenize` function
features = Features({
    target: ClassLabel(num_classes=2, names=['neg', 'pos'], names_file=None, id=None),
    feature: Value(dtype='string', id=None),
})
dataset = Dataset.from_pandas(data, features=features)
dataset.map(tokenize, batched=True, batch_size=len(dataset))
```

It fails on the last line with the following error:

```
---------------------------------------------------------------------------
AssertionError                            Traceback (most recent call last)
<ipython-input-112-32b6275ce418> in <module>()
      9 })
     10 dataset = Dataset.from_pandas(data, features=features)
---> 11 dataset.map(tokenizer, batched=True, batch_size=len(dataset))

2 frames
/usr/local/lib/python3.6/dist-packages/datasets/arrow_dataset.py in map(self, function, with_indices, input_columns, batched, batch_size, drop_last_batch, remove_columns, keep_in_memory, load_from_cache_file, cache_file_name, writer_batch_size, features, disable_nullable, fn_kwargs, num_proc, suffix_template, new_fingerprint)
   1237         test_inputs = self[:2] if batched else self[0]
   1238         test_indices = [0, 1] if batched else 0
-> 1239         update_data = does_function_return_dict(test_inputs, test_indices)
   1240         logger.info("Testing finished, running the mapping function on the dataset")
   1241

/usr/local/lib/python3.6/dist-packages/datasets/arrow_dataset.py in does_function_return_dict(inputs, indices)
   1208             fn_args = [inputs] if input_columns is None else [inputs[col] for col in input_columns]
   1209             processed_inputs = (
-> 1210                 function(*fn_args, indices, **fn_kwargs) if with_indices else function(*fn_args, **fn_kwargs)
   1211             )
   1212             does_return_dict = isinstance(processed_inputs, Mapping)

/usr/local/lib/python3.6/dist-packages/transformers/tokenization_utils_base.py in __call__(self, text, text_pair, add_special_tokens, padding, truncation, max_length, stride, is_split_into_words, pad_to_multiple_of, return_tensors, return_token_type_ids, return_attention_mask, return_overflowing_tokens, return_special_tokens_mask, return_offsets_mapping, return_length, verbose, **kwargs)
   2281             )
   2282         ), (
-> 2283             "text input must of type `str` (single example), `List[str]` (batch or single pretokenized example) "
   2284             "or `List[List[str]]` (batch of pretokenized examples)."
   2285         )

AssertionError: text input must of type `str` (single example), `List[str]` (batch or single pretokenized example) or `List[List[str]]` (batch of pretokenized examples).
```

which I think is not expected. I also tried the same steps using `Dataset.from_csv`, which resulted in the same error. To reproduce this, I used [this dataset from kaggle](https://www.kaggle.com/team-ai/spam-text-message-classification).
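For reference (the discussion below notes the problem turned out to be how `tokenize` was defined), here is a minimal sketch of a map-compatible tokenization function. The `text` column name comes from the report above; the model name and padding options are illustrative assumptions, not taken from the thread.

```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")  # illustrative model choice

def tokenize(batch):
    # With batched=True, `batch` is a dict of lists. Pass the list of strings
    # (not the whole dict) to the tokenizer and return a dict of new columns,
    # which `map` merges back into the dataset.
    return tokenizer(batch["text"], padding="max_length", truncation=True)

dataset = dataset.map(tokenize, batched=True, batch_size=len(dataset))
```

Passing the bare `tokenizer` object to `map`, as the traceback shows, hands it the whole example dict, which is what trips the assertion above.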
https://github.com/huggingface/datasets/issues/1422
[ "Please could you post the whole script? I can't reproduce your issue. After updating the feature names/labels to match with the data, everything works fine for me. Try to update datasets/transformers to the newest version.", "Actually, the problem was how `tokenize` function was defined. This was completely my s...
null
1,422
false
adding fake-news-english-2
https://github.com/huggingface/datasets/pull/1421
[]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1421", "html_url": "https://github.com/huggingface/datasets/pull/1421", "diff_url": "https://github.com/huggingface/datasets/pull/1421.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1421.patch", "merged_at": null }
1,421
true
Add dataset yoruba_wordsim353
Contains the loading script as well as the dataset card, including YAML tags.
https://github.com/huggingface/datasets/pull/1420
[ "merging since the CI is fixed on master" ]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1420", "html_url": "https://github.com/huggingface/datasets/pull/1420", "diff_url": "https://github.com/huggingface/datasets/pull/1420.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1420.patch", "merged_at": "2020-12-11T13:34:04" }
1,420
true
Add Turkish News Category Dataset (270K)
This PR adds the Turkish News Category Dataset (270K), a text classification dataset by me and @yavuzKomecoglu. It consists of **273,601 news articles** in **17 categories**, compiled from printed media and news websites between 2010 and 2017 by the [Interpress](https://www.interpress.com/) media monitoring company.
https://github.com/huggingface/datasets/pull/1419
[ "@lhoestq, can you please review this PR?\r\n", "@SBrandeis,\r\nSorry. All of the latest version came to my branch. You can find final version. \r\nResubmitted as a clean final version of #1466\r\nI have completed all the review comments.", "Closing this as PR is now https://github.com/huggingface/datasets/pull...
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1419", "html_url": "https://github.com/huggingface/datasets/pull/1419", "diff_url": "https://github.com/huggingface/datasets/pull/1419.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1419.patch", "merged_at": null }
1,419
true
Add arabic dialects
Data loading script and dataset card for the Dialectal Arabic Resources dataset. Fixes git issues from PR #976.
https://github.com/huggingface/datasets/pull/1418
[ "merging since the CI is fixed on master" ]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1418", "html_url": "https://github.com/huggingface/datasets/pull/1418", "diff_url": "https://github.com/huggingface/datasets/pull/1418.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1418.patch", "merged_at": "2020-12-17T09:40:56" }
1,418
true
WIP: Vinay/add peer read dataset
https://github.com/huggingface/datasets/pull/1417
[]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1417", "html_url": "https://github.com/huggingface/datasets/pull/1417", "diff_url": "https://github.com/huggingface/datasets/pull/1417.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1417.patch", "merged_at": null }
1,417
true
Add Shrinked Turkish NER from Kaggle.
Add Shrinked Turkish NER from [Kaggle](https://www.kaggle.com/behcetsenturk/shrinked-twnertc-turkish-ner-data-by-kuzgunlar).
https://github.com/huggingface/datasets/pull/1416
[]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1416", "html_url": "https://github.com/huggingface/datasets/pull/1416", "diff_url": "https://github.com/huggingface/datasets/pull/1416.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1416.patch", "merged_at": "2020-12-11T11:23:31" }
1,416
true
Add Hate Speech and Offensive Language Detection dataset
Add [Hate Speech and Offensive Language Detection dataset](https://github.com/t-davidson/hate-speech-and-offensive-language) from [this paper](https://arxiv.org/abs/1703.04009).
https://github.com/huggingface/datasets/pull/1415
[ "@lhoestq done! The failing testes don't seem to be related, it seems to be a connection issue, if I understand it correctly.", "@lhoestq done!", "merging since the CI is fixed on master" ]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1415", "html_url": "https://github.com/huggingface/datasets/pull/1415", "diff_url": "https://github.com/huggingface/datasets/pull/1415.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1415.patch", "merged_at": "2020-12-14T16:25:31" }
1,415
true
Adding BioCreative II Gene Mention corpus
Adding BioCreative II Gene Mention corpus
https://github.com/huggingface/datasets/pull/1414
[]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1414", "html_url": "https://github.com/huggingface/datasets/pull/1414", "diff_url": "https://github.com/huggingface/datasets/pull/1414.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1414.patch", "merged_at": "2020-12-11T11:17:40" }
1,414
true
Add OffComBR
Add [OffComBR](https://github.com/rogersdepelle/OffComBR) from the [Offensive Comments in the Brazilian Web: a dataset and baseline results](https://sol.sbc.org.br/index.php/brasnam/article/view/3260/3222) paper. However, I'm having a hard time generating dummy data, since the original dataset extension is `.arff` and the [_create_dummy_data function](https://github.com/huggingface/datasets/blob/a4aeaf911240057286a01bff1b1d75a89aedd57b/src/datasets/commands/dummy_data.py#L185) doesn't support it.
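Since the tooling can't produce `.arff` dummy files automatically, the loading script itself has to parse the format. Purely as a hedged sketch (not necessarily the parsing code this PR uses), a hand-rolled ARFF reader for a simple two-column file could look like this, assuming `label,'text'` data rows and ignoring full quote/escape handling:

```python
def iter_arff_rows(path):
    # Skip the ARFF header (attribute declarations) and yield the data rows.
    # Assumes simple "label,'text'" rows; a real parser must handle quoting
    # and escaping per the ARFF spec.
    with open(path, encoding="utf-8") as f:
        in_data = False
        for line in f:
            line = line.strip()
            if not in_data:
                in_data = line.lower().startswith("@data")
                continue
            if line and not line.startswith("%"):  # '%' starts an ARFF comment
                label, text = line.split(",", 1)
                yield label, text.strip("'\"")
```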
https://github.com/huggingface/datasets/pull/1413
[ "Hello @hugoabonizio, thanks for the contribution.\r\nRegarding the fake data, you can generate it manually.\r\nRunning the `python datasets-cli dummy_data datasets/offcombr` should give you instructions on how to manually create the dummy data.\r\nFor reference, here is a spec for `.arff` files : https://www.cs.wa...
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1413", "html_url": "https://github.com/huggingface/datasets/pull/1413", "diff_url": "https://github.com/huggingface/datasets/pull/1413.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1413.patch", "merged_at": "2020-12-14T16:51:10" }
1,413
true
Adding the ASSIN dataset
Adding the ASSIN dataset, a Portuguese language dataset for Natural Language Inference and Semantic Similarity Scoring
https://github.com/huggingface/datasets/pull/1412
[]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1412", "html_url": "https://github.com/huggingface/datasets/pull/1412", "diff_url": "https://github.com/huggingface/datasets/pull/1412.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1412.patch", "merged_at": "2020-12-11T10:41:10" }
1,412
true
2 typos
Corrected 2 typos
https://github.com/huggingface/datasets/pull/1411
[]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1411", "html_url": "https://github.com/huggingface/datasets/pull/1411", "diff_url": "https://github.com/huggingface/datasets/pull/1411.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1411.patch", "merged_at": "2020-12-11T10:39:05" }
1,411
true
Add penn treebank dataset
https://github.com/huggingface/datasets/pull/1410
[ "@yjernite I have updated the PR to be language modeling task specific. Please review!\r\n", "Yes a line corresponds to a sentence in this data." ]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1410", "html_url": "https://github.com/huggingface/datasets/pull/1410", "diff_url": "https://github.com/huggingface/datasets/pull/1410.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1410.patch", "merged_at": "2020-12-16T09:38:23" }
1,410
true
Adding the ASSIN dataset
Adding the ASSIN dataset, a Portuguese language dataset for Natural Language Inference and Semantic Similarity Scoring
https://github.com/huggingface/datasets/pull/1409
[ "I wrongly commited data from another branch in this PR, I'll close this a reopen another PR with the fixed branch" ]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1409", "html_url": "https://github.com/huggingface/datasets/pull/1409", "diff_url": "https://github.com/huggingface/datasets/pull/1409.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1409.patch", "merged_at": null }
1,409
true
adding fake-news-english
https://github.com/huggingface/datasets/pull/1408
[ "also don't forget to format your code using `make style` to fix the CI" ]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1408", "html_url": "https://github.com/huggingface/datasets/pull/1408", "diff_url": "https://github.com/huggingface/datasets/pull/1408.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1408.patch", "merged_at": null }
1,408
true