title | body | html_url | comments | pull_request | number | is_pull_request |
|---|---|---|---|---|---|---|
Add Tweet Eval Dataset | https://github.com/huggingface/datasets/pull/1407 | [
"Hi @lhoestq,\r\n\r\nSeeing that it has been almost two months to this draft, I'm willing to take this forward if you and @abhishekkrthakur don't mind. :)",
"Hi @gchhablani !\r\nSure if @abhishekkrthakur doesn't mind\r\nThanks for your help :)",
"Please feel free :) ",
"Hi @lhoestq, @abhishekkrthakur \r\n\r\n... | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1407",
"html_url": "https://github.com/huggingface/datasets/pull/1407",
"diff_url": "https://github.com/huggingface/datasets/pull/1407.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/1407.patch",
"merged_at": null
} | 1,407 | true | |
Add Portuguese Hate Speech dataset | Binary Portuguese Hate Speech dataset from [this paper](https://www.aclweb.org/anthology/W19-3510/). | https://github.com/huggingface/datasets/pull/1406 | [
"@lhoestq done! (The failing tests don't seem to be related)",
"merging since the CI is fixed on master"
] | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1406",
"html_url": "https://github.com/huggingface/datasets/pull/1406",
"diff_url": "https://github.com/huggingface/datasets/pull/1406.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/1406.patch",
"merged_at": "2020-12-14T16:22:20"
} | 1,406 | true |
Adding TaPaCo Dataset with README.md | https://github.com/huggingface/datasets/pull/1405 | [
"We want to keep the repo as light as possible so that it doesn't take ages to clone, that's why we ask for small dummy data files (especially when there are many of them). Let me know if you have questions or if we can help you on this",
"Hello @lhoestq , made the changes as you suggested and pushed, please revi... | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1405",
"html_url": "https://github.com/huggingface/datasets/pull/1405",
"diff_url": "https://github.com/huggingface/datasets/pull/1405.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/1405.patch",
"merged_at": "2020-12-13T19:11:18"
} | 1,405 | true | |
Add Acronym Identification Dataset | https://github.com/huggingface/datasets/pull/1404 | [
"fixed @lhoestq "
] | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1404",
"html_url": "https://github.com/huggingface/datasets/pull/1404",
"diff_url": "https://github.com/huggingface/datasets/pull/1404.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/1404.patch",
"merged_at": "2020-12-14T13:12:00"
} | 1,404 | true | |
Add dataset clickbait_news_bg | Adding a new dataset - clickbait_news_bg | https://github.com/huggingface/datasets/pull/1403 | [
"Closing this pull request, will submit a new one for this dataset."
] | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1403",
"html_url": "https://github.com/huggingface/datasets/pull/1403",
"diff_url": "https://github.com/huggingface/datasets/pull/1403.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/1403.patch",
"merged_at": null
} | 1,403 | true |
adding covid-tweets-japanese (again) | I mistakenly used git rebase in a hurry to fix things. However, I didn't fully consider the use of git reset, so I unintentionally closed PR (#1367) altogether. Sorry about that.
I'll make a new PR. | https://github.com/huggingface/datasets/pull/1402 | [
"README.md is not created yet. I'll add it soon.",
"Thank you for your detailed code review! It's so helpful.\r\nI'll reflect them to the code in 24 hours.\r\n\r\nYou may have told me in Slack (I cannot find the conversation log though I've looked through threads), but I'm sorry it seems I'm still misunderstandin... | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1402",
"html_url": "https://github.com/huggingface/datasets/pull/1402",
"diff_url": "https://github.com/huggingface/datasets/pull/1402.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/1402.patch",
"merged_at": "2020-12-13T17:47:36"
} | 1,402 | true |
Add reasoning_bg | Adding reading comprehension dataset for Bulgarian language | https://github.com/huggingface/datasets/pull/1401 | [
"Hi @saradhix have you had the chance to reduce the size of the dummy data ?\r\n\r\nFeel free to ping me when it's done so we can merge :) ",
"@lhoestq I have reduced the size of the dummy data manually and pushed the changes.",
"The CI errors are not related to your dataset.\r\nThey're fixed on master, you can... | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1401",
"html_url": "https://github.com/huggingface/datasets/pull/1401",
"diff_url": "https://github.com/huggingface/datasets/pull/1401.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/1401.patch",
"merged_at": "2020-12-17T16:50:42"
} | 1,401 | true |
Add European Union Education and Culture Translation Memory (EAC-TM) dataset | Adding the EAC Translation Memory dataset : https://ec.europa.eu/jrc/en/language-technologies/eac-translation-memory | https://github.com/huggingface/datasets/pull/1400 | [] | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1400",
"html_url": "https://github.com/huggingface/datasets/pull/1400",
"diff_url": "https://github.com/huggingface/datasets/pull/1400.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/1400.patch",
"merged_at": "2020-12-14T13:06:47"
} | 1,400 | true |
Add HoVer Dataset | HoVer: A Dataset for Many-Hop Fact Extraction And Claim Verification
https://arxiv.org/abs/2011.03088 | https://github.com/huggingface/datasets/pull/1399 | [
"@lhoestq all comments addressed :) ",
"merging since the CI is fixed on master"
] | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1399",
"html_url": "https://github.com/huggingface/datasets/pull/1399",
"diff_url": "https://github.com/huggingface/datasets/pull/1399.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/1399.patch",
"merged_at": "2020-12-14T10:57:22"
} | 1,399 | true |
Add Neural Code Search Dataset | https://github.com/huggingface/datasets/pull/1398 | [
"@lhoestq Refactored into new branch, please review :) ",
"The `RemoteDatasetTest ` errors in the CI are fixed on master so it's fine",
"merging since the CI is fixed on master"
] | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1398",
"html_url": "https://github.com/huggingface/datasets/pull/1398",
"diff_url": "https://github.com/huggingface/datasets/pull/1398.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/1398.patch",
"merged_at": "2020-12-09T18:02:27"
} | 1,398 | true | |
datasets card-creator link added | dataset card creator link has been added
link: https://huggingface.co/datasets/card-creator/ | https://github.com/huggingface/datasets/pull/1397 | [] | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1397",
"html_url": "https://github.com/huggingface/datasets/pull/1397",
"diff_url": "https://github.com/huggingface/datasets/pull/1397.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/1397.patch",
"merged_at": null
} | 1,397 | true |
initial commit for MultiReQA for second PR | Since the last PR (#1349) had some issues passing the tests, a new PR was created. | https://github.com/huggingface/datasets/pull/1396 | [
"Subsequent [PR #1426 ](https://github.com/huggingface/datasets/pull/1426) since this PR has uploaded other files along with the MultiReQA dataset.",
"closing this one since a new PR has been created"
] | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1396",
"html_url": "https://github.com/huggingface/datasets/pull/1396",
"diff_url": "https://github.com/huggingface/datasets/pull/1396.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/1396.patch",
"merged_at": null
} | 1,396 | true |
Add WikiSource Dataset | https://github.com/huggingface/datasets/pull/1395 | [
"@lhoestq fixed :) "
] | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1395",
"html_url": "https://github.com/huggingface/datasets/pull/1395",
"diff_url": "https://github.com/huggingface/datasets/pull/1395.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/1395.patch",
"merged_at": "2020-12-14T10:24:13"
} | 1,395 | true | |
Add OfisPublik Dataset | https://github.com/huggingface/datasets/pull/1394 | [
"@lhoestq fixed :) "
] | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1394",
"html_url": "https://github.com/huggingface/datasets/pull/1394",
"diff_url": "https://github.com/huggingface/datasets/pull/1394.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/1394.patch",
"merged_at": "2020-12-14T10:23:29"
} | 1,394 | true | |
Add script_version suggestion when dataset/metric not found | Adds a helpful prompt to the error message when a dataset/metric is not found, suggesting the user might need to pass `script_version="master"` if the dataset was added recently (a usage sketch follows this record). The whole error looks like:
> Couldn't find file locally at blah/blah.py, or remotely at https://raw.githubusercontent.com/huggingface/datasets/1.1/metrics/blah/blah.py or https://s3.amazonaws.com/datasets.huggingface.co/datasets/metrics/blah/blah.py.
If the dataset was added recently, you may need to pass script_version="master" to find the loading script on the master branch. | https://github.com/huggingface/datasets/pull/1393 | [] | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1393",
"html_url": "https://github.com/huggingface/datasets/pull/1393",
"diff_url": "https://github.com/huggingface/datasets/pull/1393.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/1393.patch",
"merged_at": "2020-12-10T18:17:05"
} | 1,393 | true |
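A minimal usage sketch of the suggestion in the new error message, assuming a recently added dataset that is not yet in the installed release (`"blah"` is the placeholder name from the error above):

```python
from datasets import load_dataset

# Load the loading script from the master branch instead of the pinned release
# (datasets 1.x API; this keyword was later renamed to `revision`).
dataset = load_dataset("blah", script_version="master")
```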
Add KDE4 Dataset | https://github.com/huggingface/datasets/pull/1392 | [
"@lhoestq fixed :) "
] | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1392",
"html_url": "https://github.com/huggingface/datasets/pull/1392",
"diff_url": "https://github.com/huggingface/datasets/pull/1392.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/1392.patch",
"merged_at": "2020-12-14T10:22:32"
} | 1,392 | true | |
Add MultiParaCrawl Dataset | https://github.com/huggingface/datasets/pull/1391 | [] | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1391",
"html_url": "https://github.com/huggingface/datasets/pull/1391",
"diff_url": "https://github.com/huggingface/datasets/pull/1391.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/1391.patch",
"merged_at": "2020-12-10T18:39:44"
} | 1,391 | true | |
Add SPC Dataset | https://github.com/huggingface/datasets/pull/1390 | [] | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1390",
"html_url": "https://github.com/huggingface/datasets/pull/1390",
"diff_url": "https://github.com/huggingface/datasets/pull/1390.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/1390.patch",
"merged_at": "2020-12-14T11:13:52"
} | 1,390 | true | |
add amazon polarity dataset | This corresponds to the amazon (binary dataset) requested in https://github.com/huggingface/datasets/issues/353 | https://github.com/huggingface/datasets/pull/1389 | [
"`amazon_polarity` is probably a subset of `amazon_us_reviews` but I am not entirely sure about that.\r\nI guess `amazon_polarity` will help in reproducing results of papers using this dataset since even if it is a subset from `amazon_us_reviews`, it is not trivial how to extract `amazon_polarity` from `amazon_us_r... | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1389",
"html_url": "https://github.com/huggingface/datasets/pull/1389",
"diff_url": "https://github.com/huggingface/datasets/pull/1389.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/1389.patch",
"merged_at": "2020-12-11T11:41:01"
} | 1,389 | true |
hind_encorp | resubmit of hind_encorp file changes | https://github.com/huggingface/datasets/pull/1388 | [] | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1388",
"html_url": "https://github.com/huggingface/datasets/pull/1388",
"diff_url": "https://github.com/huggingface/datasets/pull/1388.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/1388.patch",
"merged_at": null
} | 1,388 | true |
Add LIAR dataset | Add LIAR dataset from [“Liar, Liar Pants on Fire”: A New Benchmark Dataset for Fake News Detection](https://www.aclweb.org/anthology/P17-2067/). | https://github.com/huggingface/datasets/pull/1387 | [
"@lhoestq done! The failing testes don't seem to be related, it seems to be a connection issue, if I understand it correctly.",
"merging since the CI is fixed on master"
] | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1387",
"html_url": "https://github.com/huggingface/datasets/pull/1387",
"diff_url": "https://github.com/huggingface/datasets/pull/1387.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/1387.patch",
"merged_at": "2020-12-14T16:23:59"
} | 1,387 | true |
Add RecipeNLG Dataset (manual download) | https://github.com/huggingface/datasets/pull/1386 | [
"@lhoestq yes. I asked the authors for direct link but unfortunately we need to fill a form (captcha)"
] | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1386",
"html_url": "https://github.com/huggingface/datasets/pull/1386",
"diff_url": "https://github.com/huggingface/datasets/pull/1386.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/1386.patch",
"merged_at": "2020-12-10T16:58:21"
} | 1,386 | true | |
add best2009 | `best2009` is a Thai word-tokenization dataset from encyclopedia, novels, news and articles by [NECTEC](https://www.nectec.or.th/) (148,995/2,252 lines of train/test). It was created for [BEST 2010: Word Tokenization Competition](https://thailang.nectec.or.th/archive/indexa290.html?q=node/10). The test set answers are not provided publicly. | https://github.com/huggingface/datasets/pull/1385 | [] | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1385",
"html_url": "https://github.com/huggingface/datasets/pull/1385",
"diff_url": "https://github.com/huggingface/datasets/pull/1385.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/1385.patch",
"merged_at": "2020-12-14T10:59:08"
} | 1,385 | true |
Add News Commentary Dataset | https://github.com/huggingface/datasets/pull/1384 | [] | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1384",
"html_url": "https://github.com/huggingface/datasets/pull/1384",
"diff_url": "https://github.com/huggingface/datasets/pull/1384.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/1384.patch",
"merged_at": "2020-12-10T16:54:07"
} | 1,384 | true | |
added conv ai 2 | Dataset : https://github.com/DeepPavlov/convai/tree/master/2018 | https://github.com/huggingface/datasets/pull/1383 | [
"@lhoestq Thank you for the suggestions. I added the changes to the branch and seems after rebasing it to master, all the commits previous commits got added. Should I create a new PR or should I keep this one only ? ",
"closing this one in favor of #1527 "
] | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1383",
"html_url": "https://github.com/huggingface/datasets/pull/1383",
"diff_url": "https://github.com/huggingface/datasets/pull/1383.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/1383.patch",
"merged_at": null
} | 1,383 | true |
adding UNPC | Adding United Nations Parallel Corpus
http://opus.nlpl.eu/UNPC.php | https://github.com/huggingface/datasets/pull/1382 | [
"merging since the CI just had a connection error"
] | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1382",
"html_url": "https://github.com/huggingface/datasets/pull/1382",
"diff_url": "https://github.com/huggingface/datasets/pull/1382.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/1382.patch",
"merged_at": "2020-12-09T17:53:06"
} | 1,382 | true |
Add twi text c3 | Added Twi texts for training embeddings and language models based on the paper https://www.aclweb.org/anthology/2020.lrec-1.335/ | https://github.com/huggingface/datasets/pull/1381 | [
"looks like this PR includes changes about other datasets\r\n\r\nCan you only include the changes related to twi text c3 please ?",
"Hi @lhoestq , I have removed the unnecessary files. Can you please confirm?",
"You might need to either find a way to go back to the commit before it changes 389 files or create a... | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1381",
"html_url": "https://github.com/huggingface/datasets/pull/1381",
"diff_url": "https://github.com/huggingface/datasets/pull/1381.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/1381.patch",
"merged_at": null
} | 1,381 | true |
Add Tatoeba Dataset | https://github.com/huggingface/datasets/pull/1380 | [] | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1380",
"html_url": "https://github.com/huggingface/datasets/pull/1380",
"diff_url": "https://github.com/huggingface/datasets/pull/1380.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/1380.patch",
"merged_at": "2020-12-10T16:54:27"
} | 1,380 | true | |
Add yoruba text c3 | Added Yoruba texts for training embeddings and language models based on the paper https://www.aclweb.org/anthology/2020.lrec-1.335/ | https://github.com/huggingface/datasets/pull/1379 | [
"looks like this PR includes changes about other datasets\r\n",
"Thanks for the review. I'm a bit confused how to remove the files. Every time I add a new branch name using the following commands:\r\n\r\ngit fetch upstream\r\ngit rebase upstream/master\r\ngit checkout -b a-descriptive-name-for-my-changes\r\n\r\na... | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1379",
"html_url": "https://github.com/huggingface/datasets/pull/1379",
"diff_url": "https://github.com/huggingface/datasets/pull/1379.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/1379.patch",
"merged_at": "2020-12-13T18:37:32"
} | 1,379 | true |
Add FACTCK.BR dataset | This PR adds [FACTCK.BR](https://github.com/jghm-f/FACTCK.BR) dataset from [FACTCK.BR: a new dataset to study fake news](https://dl.acm.org/doi/10.1145/3323503.3361698). | https://github.com/huggingface/datasets/pull/1378 | [
"@lhoestq done!",
"merging since the CI is fixed on master"
] | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1378",
"html_url": "https://github.com/huggingface/datasets/pull/1378",
"diff_url": "https://github.com/huggingface/datasets/pull/1378.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/1378.patch",
"merged_at": "2020-12-15T15:34:11"
} | 1,378 | true |
adding marathi-wiki dataset | Adding marathi-wiki-articles dataset. | https://github.com/huggingface/datasets/pull/1377 | [
"Can you make it a draft PR until you've added the dataset please ? @ekdnam ",
"Done",
"Thanks for your contribution, @ekdnam. Are you still interested in adding this dataset?\r\n\r\nWe are removing the dataset scripts from this GitHub repo and moving them to the Hugging Face Hub: https://huggingface.co/dataset... | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1377",
"html_url": "https://github.com/huggingface/datasets/pull/1377",
"diff_url": "https://github.com/huggingface/datasets/pull/1377.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/1377.patch",
"merged_at": null
} | 1,377 | true |
Add SETimes Dataset | https://github.com/huggingface/datasets/pull/1376 | [
"merging since the CI is fixed on master"
] | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1376",
"html_url": "https://github.com/huggingface/datasets/pull/1376",
"diff_url": "https://github.com/huggingface/datasets/pull/1376.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/1376.patch",
"merged_at": "2020-12-10T16:11:56"
} | 1,376 | true | |
Add OPUS EMEA Dataset | https://github.com/huggingface/datasets/pull/1375 | [] | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1375",
"html_url": "https://github.com/huggingface/datasets/pull/1375",
"diff_url": "https://github.com/huggingface/datasets/pull/1375.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/1375.patch",
"merged_at": "2020-12-10T16:11:08"
} | 1,375 | true | |
Add OPUS Tilde Model Dataset | https://github.com/huggingface/datasets/pull/1374 | [
"merging since the CI is fixed on master"
] | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1374",
"html_url": "https://github.com/huggingface/datasets/pull/1374",
"diff_url": "https://github.com/huggingface/datasets/pull/1374.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/1374.patch",
"merged_at": "2020-12-10T16:11:28"
} | 1,374 | true | |
Add OPUS ECB Dataset | https://github.com/huggingface/datasets/pull/1373 | [] | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1373",
"html_url": "https://github.com/huggingface/datasets/pull/1373",
"diff_url": "https://github.com/huggingface/datasets/pull/1373.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/1373.patch",
"merged_at": "2020-12-10T15:25:54"
} | 1,373 | true | |
Add OPUS Books Dataset | https://github.com/huggingface/datasets/pull/1372 | [
"@lhoestq done"
] | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1372",
"html_url": "https://github.com/huggingface/datasets/pull/1372",
"diff_url": "https://github.com/huggingface/datasets/pull/1372.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/1372.patch",
"merged_at": "2020-12-14T09:56:27"
} | 1,372 | true | |
Adding Scielo | Adding Scielo: Parallel corpus of full-text articles in Portuguese, English and Spanish from SciELO
https://sites.google.com/view/felipe-soares/datasets#h.p_92uSCyAjWSRB | https://github.com/huggingface/datasets/pull/1371 | [] | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1371",
"html_url": "https://github.com/huggingface/datasets/pull/1371",
"diff_url": "https://github.com/huggingface/datasets/pull/1371.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/1371.patch",
"merged_at": "2020-12-09T17:53:37"
} | 1,371 | true |
Add OPUS PHP Dataset | https://github.com/huggingface/datasets/pull/1370 | [] | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1370",
"html_url": "https://github.com/huggingface/datasets/pull/1370",
"diff_url": "https://github.com/huggingface/datasets/pull/1370.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/1370.patch",
"merged_at": "2020-12-10T15:37:24"
} | 1,370 | true | |
Use passed --cache_dir for modules cache | When the `--cache_dir` arg is passed:
```shell
python datasets-cli test datasets/<my-dataset-folder> --save_infos --all_configs --cache_dir <my-cache-dir>
```
it is not used for caching the modules, which are cached in the default location at `.cache/huggingface/modules`.
With this fix, the modules will be cached at `<my-cache-dir>/modules` (a sketch follows this record). | https://github.com/huggingface/datasets/pull/1369 | [
"I have a question: why not using a tmp dir instead, like the DummyDataGeneratorDownloadManager does?",
"Hi @lhoestq, I am trying to understand better the logic...\r\n\r\nWhy do we have a `dynamic_module_path` besides the modules cache path?\r\n```python\r\nDYNAMIC_MODULES_PATH = os.path.join(HF_MODULES_CACHE, \"... | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1369",
"html_url": "https://github.com/huggingface/datasets/pull/1369",
"diff_url": "https://github.com/huggingface/datasets/pull/1369.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/1369.patch",
"merged_at": null
} | 1,369 | true |
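A minimal sketch of the path behaviour described above (the directory layout is illustrative; the library's actual constants may differ):

```python
import os
from typing import Optional

def modules_cache_path(cache_dir: Optional[str]) -> str:
    # With --cache_dir: modules are cached under <my-cache-dir>/modules.
    if cache_dir is not None:
        return os.path.join(cache_dir, "modules")
    # Without --cache_dir: the default ~/.cache/huggingface/modules location.
    return os.path.join(os.path.expanduser("~"), ".cache", "huggingface", "modules")
```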
Re-adding narrativeqa dataset | An update of #309. | https://github.com/huggingface/datasets/pull/1368 | [
"@lhoestq I think I've fixed the dummy data - it finally passes! I'll add the model card now.",
"@lhoestq - pretty happy with it now",
"> Awesome thank you !\r\n> \r\n> Could you try to reduce the size of the dummy_data.zip file before we merge ? (it's 300KB right now)\r\n> \r\n> To do so feel free to take a lo... | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1368",
"html_url": "https://github.com/huggingface/datasets/pull/1368",
"diff_url": "https://github.com/huggingface/datasets/pull/1368.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/1368.patch",
"merged_at": null
} | 1,368 | true |
adding covid-tweets-japanese | Adding COVID-19 Japanese Tweets Dataset as part of the sprint.
Testing with dummy data is not working (the file is said to not exist). Sorry for the incomplete PR. | https://github.com/huggingface/datasets/pull/1367 | [
"I think it's because the file you download uncompresses into a file and not a folder so `--autogenerate` couldn't create dummy data for you. See in your dummy_data.zip if there is a file there. If not, manually create your dummy data and compress them to dummy_data.zip.",
"@cstorm125 Thank you for the comment! \... | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1367",
"html_url": "https://github.com/huggingface/datasets/pull/1367",
"diff_url": "https://github.com/huggingface/datasets/pull/1367.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/1367.patch",
"merged_at": null
} | 1,367 | true |
Adding Hope EDI dataset | https://github.com/huggingface/datasets/pull/1366 | [
"@lhoestq Have addressed your comments. Please review. Thanks."
] | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1366",
"html_url": "https://github.com/huggingface/datasets/pull/1366",
"diff_url": "https://github.com/huggingface/datasets/pull/1366.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/1366.patch",
"merged_at": "2020-12-14T14:27:57"
} | 1,366 | true | |
Add Mkqa dataset | # MKQA: Multilingual Knowledge Questions & Answers Dataset
Adding the [MKQA](https://github.com/apple/ml-mkqa) dataset as part of the sprint 🎉
There are no official data splits, so I added just a `train` split.
Differences from the original (a schema sketch follows this record):
- answer:type field is a ClassLabel (I thought it might be possible to train on this as a label for categorizing questions)
- answer:entity field has a default value of empty string '' (since this key is not available for all entries in the original)
- answer:alias has a default value of []
- [x] All tests passed
- [x] Added dummy data
- [x] Added data card (as much as I could)
| https://github.com/huggingface/datasets/pull/1365 | [
"the `RemoteDatasetTest ` error pf the CI is fixed on master so it's fine",
"merging since the CI is fixed on master"
] | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1365",
"html_url": "https://github.com/huggingface/datasets/pull/1365",
"diff_url": "https://github.com/huggingface/datasets/pull/1365.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/1365.patch",
"merged_at": "2020-12-10T15:37:56"
} | 1,365 | true |
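A minimal sketch of the kind of feature declaration the notes above describe; the label names and any fields beyond those mentioned are placeholders, not the dataset's exact schema:

```python
from datasets import ClassLabel, Features, Sequence, Value

features = Features({
    "query": Value("string"),
    "answers": Sequence({
        # answer:type as a ClassLabel so it can double as a classification target
        "type": ClassLabel(names=["entity", "date", "number", "long_answer"]),  # placeholder names
        "entity": Value("string"),           # defaults to '' when absent in the original
        "alias": Sequence(Value("string")),  # defaults to []
        "text": Value("string"),
    }),
})
```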
Narrative QA (Manual Download Stories) Dataset | Narrative QA with manual download for stories. | https://github.com/huggingface/datasets/pull/1364 | [
"Hi ! Maybe we can rename it `narrativeqa_manual` to make it explicit that this one requires manual download contrary to `narrativeqa` ?\r\nIt's important to have this one as well, in case the `narrativeqa` one suffers from download issues (checksums or dead links for example).\r\n\r\nYou can also copy the dataset ... | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1364",
"html_url": "https://github.com/huggingface/datasets/pull/1364",
"diff_url": "https://github.com/huggingface/datasets/pull/1364.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/1364.patch",
"merged_at": null
} | 1,364 | true |
Adding OPUS MultiUN | Adding UnMulti
http://www.euromatrixplus.net/multi-un/ | https://github.com/huggingface/datasets/pull/1363 | [] | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1363",
"html_url": "https://github.com/huggingface/datasets/pull/1363",
"diff_url": "https://github.com/huggingface/datasets/pull/1363.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/1363.patch",
"merged_at": "2020-12-09T17:54:19"
} | 1,363 | true |
adding opus_infopankki | Adding opus_infopankki
http://opus.nlpl.eu/infopankki-v1.php | https://github.com/huggingface/datasets/pull/1362 | [
"Thanks Quentin !"
] | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1362",
"html_url": "https://github.com/huggingface/datasets/pull/1362",
"diff_url": "https://github.com/huggingface/datasets/pull/1362.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/1362.patch",
"merged_at": "2020-12-09T18:13:48"
} | 1,362 | true |
adding bprec | Brand-Product Relation Extraction Corpora in Polish | https://github.com/huggingface/datasets/pull/1361 | [
"@lhoestq I think this is ready for review, I assume the errors (connection) are unrelated to the PR :) ",
"merging since the CI is fixed on master"
] | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1361",
"html_url": "https://github.com/huggingface/datasets/pull/1361",
"diff_url": "https://github.com/huggingface/datasets/pull/1361.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/1361.patch",
"merged_at": "2020-12-16T17:04:44"
} | 1,361 | true |
add wisesight1000 | `wisesight1000` contains Thai social media texts randomly drawn from the full `wisesight-sentiment`, tokenized by human annotators. Out of the labels `neg` (negative), `neu` (neutral), `pos` (positive), `q` (question), 250 samples each. Some texts are removed because they look like spam. Because these samples are representative of real-world content, we believe having these annotated samples will allow the community to robustly evaluate tokenization algorithms. | https://github.com/huggingface/datasets/pull/1360 | [] | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1360",
"html_url": "https://github.com/huggingface/datasets/pull/1360",
"diff_url": "https://github.com/huggingface/datasets/pull/1360.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/1360.patch",
"merged_at": "2020-12-10T14:28:41"
} | 1,360 | true |
Add JNLPBA | https://github.com/huggingface/datasets/pull/1359 | [] | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1359",
"html_url": "https://github.com/huggingface/datasets/pull/1359",
"diff_url": "https://github.com/huggingface/datasets/pull/1359.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/1359.patch",
"merged_at": "2020-12-10T14:24:36"
} | 1,359 | true | |
Add spider dataset | This PR adds the Spider dataset, a large-scale complex and cross-domain semantic parsing and text-to-SQL dataset annotated by 11 Yale students. The goal of the Spider challenge is to develop natural language interfaces to cross-domain databases.
Dataset website: https://yale-lily.github.io/spider
Paper link: https://www.aclweb.org/anthology/D18-1425/ | https://github.com/huggingface/datasets/pull/1358 | [] | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1358",
"html_url": "https://github.com/huggingface/datasets/pull/1358",
"diff_url": "https://github.com/huggingface/datasets/pull/1358.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/1358.patch",
"merged_at": "2020-12-10T15:12:31"
} | 1,358 | true |
Youtube caption corrections | This PR adds a new dataset of YouTube caption errors and corrections. This dataset was created in just the last week, as inspired by this sprint! | https://github.com/huggingface/datasets/pull/1357 | [
"Sorry about forgetting flake8.\r\nRather than use up the circleci resources on a new push with only formatting changes, I will wait to push until the results from all tests finish and/or any feedback comes in... probably tomorrow for me.",
"\r\nSo... my normal work is with mercurial and seem to have clearly fork... | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1357",
"html_url": "https://github.com/huggingface/datasets/pull/1357",
"diff_url": "https://github.com/huggingface/datasets/pull/1357.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/1357.patch",
"merged_at": "2020-12-15T18:12:56"
} | 1,357 | true |
Add StackOverflow StackSample dataset | This PR adds the StackOverflow StackSample dataset from Kaggle: https://www.kaggle.com/stackoverflow/stacksample
Ran through all of the steps. However, since my dataset requires manually downloading the data, I was unable to run the pytest on the real dataset (the dummy data pytest passed). | https://github.com/huggingface/datasets/pull/1356 | [
"@lhoestq Thanks for the review and suggestions! I've added your comments and pushed the changes. I'm having issues with the dummy data still. When I run the dummy data test\r\n\r\n```bash\r\nRUN_SLOW=1 pytest tests/test_dataset_common.py::LocalDatasetTest::test_load_dataset_all_configs_so_stacksample\r\n```\r\nI g... | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1356",
"html_url": "https://github.com/huggingface/datasets/pull/1356",
"diff_url": "https://github.com/huggingface/datasets/pull/1356.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/1356.patch",
"merged_at": "2020-12-21T14:48:21"
} | 1,356 | true |
Addition of py_ast dataset | @lhoestq as discussed in PR #1195 | https://github.com/huggingface/datasets/pull/1355 | [] | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1355",
"html_url": "https://github.com/huggingface/datasets/pull/1355",
"diff_url": "https://github.com/huggingface/datasets/pull/1355.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/1355.patch",
"merged_at": "2020-12-09T16:19:48"
} | 1,355 | true |
Add TweetQA dataset | This PR adds the TweetQA dataset, the first dataset for QA on social media data by leveraging news media and crowdsourcing.
Paper: https://arxiv.org/abs/1907.06292
Repository: https://tweetqa.github.io/ | https://github.com/huggingface/datasets/pull/1354 | [] | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1354",
"html_url": "https://github.com/huggingface/datasets/pull/1354",
"diff_url": "https://github.com/huggingface/datasets/pull/1354.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/1354.patch",
"merged_at": "2020-12-10T15:10:30"
} | 1,354 | true |
New instruction for how to generate dataset_infos.json | Add additional instructions for how to generate dataset_infos.json for manual download datasets. Information courtesy of `Taimur Ibrahim` from the slack channel | https://github.com/huggingface/datasets/pull/1353 | [] | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1353",
"html_url": "https://github.com/huggingface/datasets/pull/1353",
"diff_url": "https://github.com/huggingface/datasets/pull/1353.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/1353.patch",
"merged_at": "2020-12-10T13:45:15"
} | 1,353 | true |
change url for prachathai67k to internet archive | `prachathai67k` is currently downloaded from git-lfs of PyThaiNLP github. Since the size is quite large (~250MB), I moved the URL to archive.org in order to prevent rate limit issues. | https://github.com/huggingface/datasets/pull/1352 | [] | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1352",
"html_url": "https://github.com/huggingface/datasets/pull/1352",
"diff_url": "https://github.com/huggingface/datasets/pull/1352.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/1352.patch",
"merged_at": "2020-12-10T13:42:17"
} | 1,352 | true |
added craigslist_bargains | `craigslist_bargains` dataset from [here](https://worksheets.codalab.org/worksheets/0x453913e76b65495d8b9730d41c7e0a0c/)
(Cleaned up version of #1278) | https://github.com/huggingface/datasets/pull/1351 | [] | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1351",
"html_url": "https://github.com/huggingface/datasets/pull/1351",
"diff_url": "https://github.com/huggingface/datasets/pull/1351.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/1351.patch",
"merged_at": "2020-12-10T14:14:34"
} | 1,351 | true |
add LeNER-Br dataset | Adding the LeNER-Br dataset, a Portuguese language dataset for named entity recognition | https://github.com/huggingface/datasets/pull/1350 | [
"I don't know what happened, my first commit passed on all checks, but after just a README.md update one of the scripts failed, is it normal? 😕 ",
"Looks like a flaky connection error, I've launched a re-run, it should be fine :)",
"The RemoteDatasetTest error in the CI is just a connection error, we can ignor... | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1350",
"html_url": "https://github.com/huggingface/datasets/pull/1350",
"diff_url": "https://github.com/huggingface/datasets/pull/1350.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/1350.patch",
"merged_at": "2020-12-10T14:11:33"
} | 1,350 | true |
initial commit for MultiReQA | Added MultiReQA, which is a dataset containing the sentence boundary annotation from eight publicly available QA datasets including SearchQA, TriviaQA, HotpotQA, NaturalQuestions, SQuAD, BioASQ, RelationExtraction, and TextbookQA. | https://github.com/huggingface/datasets/pull/1349 | [
"looks like this dataset includes changes about many other files than the ones for multi_re_qa\r\n\r\nCan you create another branch and another PR please ?",
"> looks like this dataset includes changes about many other files than the ones for multi_re_qa\r\n> \r\n> Can you create another branch and another PR ple... | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1349",
"html_url": "https://github.com/huggingface/datasets/pull/1349",
"diff_url": "https://github.com/huggingface/datasets/pull/1349.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/1349.patch",
"merged_at": null
} | 1,349 | true |
add Yoruba NER dataset | Added Yoruba GV dataset based on this paper | https://github.com/huggingface/datasets/pull/1348 | [
"Thank you. Okay, other pull requests only have one dataset",
"The `RemoteDatasetTest` error in the CI is just a connection error, we can ignore it",
"merging since the CI is fixed on master",
"Thank you very much"
] | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1348",
"html_url": "https://github.com/huggingface/datasets/pull/1348",
"diff_url": "https://github.com/huggingface/datasets/pull/1348.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/1348.patch",
"merged_at": "2020-12-10T14:09:43"
} | 1,348 | true |
Add spanish billion words corpus | Add an unannotated Spanish corpus of nearly 1.5 billion words, compiled from different resources from the web. | https://github.com/huggingface/datasets/pull/1347 | [
"Thank you for your feedback! I've reduced the dummy data size to 2KB.\r\n\r\nI had to rebase to fix `RemoteDatasetTest` fails, sorry about the 80 commits. \r\nI could create a new clean PR if you prefer.",
"I have seen that in similar cases you have suggested to other contributors to create another branch and an... | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1347",
"html_url": "https://github.com/huggingface/datasets/pull/1347",
"diff_url": "https://github.com/huggingface/datasets/pull/1347.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/1347.patch",
"merged_at": null
} | 1,347 | true |
Add MultiBooked dataset | Add dataset. | https://github.com/huggingface/datasets/pull/1346 | [
"There' still an issue with the dummy data, let me take a look"
] | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1346",
"html_url": "https://github.com/huggingface/datasets/pull/1346",
"diff_url": "https://github.com/huggingface/datasets/pull/1346.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/1346.patch",
"merged_at": "2020-12-15T17:02:08"
} | 1,346 | true |
First commit of NarrativeQA Dataset | Added NarrativeQA dataset and included a manual downloading option to download scripts from the original scripts provided by the authors. | https://github.com/huggingface/datasets/pull/1345 | [] | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1345",
"html_url": "https://github.com/huggingface/datasets/pull/1345",
"diff_url": "https://github.com/huggingface/datasets/pull/1345.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/1345.patch",
"merged_at": null
} | 1,345 | true |
Add hausa ner corpus | Added Hausa VOA NER data | https://github.com/huggingface/datasets/pull/1344 | [] | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1344",
"html_url": "https://github.com/huggingface/datasets/pull/1344",
"diff_url": "https://github.com/huggingface/datasets/pull/1344.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/1344.patch",
"merged_at": null
} | 1,344 | true |
Add LiveQA | This PR adds LiveQA, the Chinese real-time/timeline-based QA task by [Liu et al., 2020](https://arxiv.org/pdf/2010.00526.pdf). | https://github.com/huggingface/datasets/pull/1343 | [] | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1343",
"html_url": "https://github.com/huggingface/datasets/pull/1343",
"diff_url": "https://github.com/huggingface/datasets/pull/1343.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/1343.patch",
"merged_at": "2020-12-14T09:40:28"
} | 1,343 | true |
[yaml] Fix metadata according to pre-specified scheme | @lhoestq @yjernite | https://github.com/huggingface/datasets/pull/1342 | [] | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1342",
"html_url": "https://github.com/huggingface/datasets/pull/1342",
"diff_url": "https://github.com/huggingface/datasets/pull/1342.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/1342.patch",
"merged_at": "2020-12-09T15:37:26"
} | 1,342 | true |
added references to only data card creator to all guides | We can now use the wonderful online form for dataset cards created by @evrardts | https://github.com/huggingface/datasets/pull/1341 | [] | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1341",
"html_url": "https://github.com/huggingface/datasets/pull/1341",
"diff_url": "https://github.com/huggingface/datasets/pull/1341.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/1341.patch",
"merged_at": "2020-12-08T21:36:11"
} | 1,341 | true |
:fist: ¡Viva la Independencia! | Adds the Catalonia Independence Corpus for stance-detection of Tweets.
Ready for review! | https://github.com/huggingface/datasets/pull/1340 | [
"I've added the changes / fixes - ready for a second pass :)"
] | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1340",
"html_url": "https://github.com/huggingface/datasets/pull/1340",
"diff_url": "https://github.com/huggingface/datasets/pull/1340.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/1340.patch",
"merged_at": "2020-12-14T10:36:01"
} | 1,340 | true |
hate_speech_18 initial commit | https://github.com/huggingface/datasets/pull/1339 | [
"> Nice thanks !\r\n> \r\n> Can you rename the dataset folder and the dataset script name `hate_speech18` instead of `hate_speech_18` to follow the snake case convention we're using ?\r\n> \r\n> Also it looks like the dummy_data.zip file is quite big (almost 4MB).\r\n> Can you try to reduce its size ?\r\n> \r\n> To... | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1339",
"html_url": "https://github.com/huggingface/datasets/pull/1339",
"diff_url": "https://github.com/huggingface/datasets/pull/1339.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/1339.patch",
"merged_at": null
} | 1,339 | true | |
Add GigaFren Dataset | https://github.com/huggingface/datasets/pull/1338 | [
"@lhoestq fixed"
] | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1338",
"html_url": "https://github.com/huggingface/datasets/pull/1338",
"diff_url": "https://github.com/huggingface/datasets/pull/1338.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/1338.patch",
"merged_at": "2020-12-14T10:03:46"
} | 1,338 | true | |
Add spanish billion words | Add an unannotated corpus of the Spanish language of nearly 1.5 billion words, compiled from different resources from the web.
The dataset needs 10 GB (download: 1.89 GiB, generated: 8.34 GiB, post-processed: Unknown size, total: 10.22 GiB); the test using dummy data passes, but my laptop isn't able to run it on the real data (I left it running for over 8 hours and it didn't finish). | https://github.com/huggingface/datasets/pull/1337 | [
"The tests failed because of ```RemoteDatasetTest``` so I tried ```git rebase``` and messed everything up. I've made a new clean PR (#1347)."
] | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1337",
"html_url": "https://github.com/huggingface/datasets/pull/1337",
"diff_url": "https://github.com/huggingface/datasets/pull/1337.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/1337.patch",
"merged_at": null
} | 1,337 | true |
Add dataset Yoruba BBC Topic Classification | Added new dataset Yoruba BBC Topic Classification
Contains loading script as well as dataset card including YAML tags. | https://github.com/huggingface/datasets/pull/1336 | [] | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1336",
"html_url": "https://github.com/huggingface/datasets/pull/1336",
"diff_url": "https://github.com/huggingface/datasets/pull/1336.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/1336.patch",
"merged_at": "2020-12-10T11:27:41"
} | 1,336 | true |
Added Bianet dataset | Hi :hugs:, This is a PR for [Bianet: A parallel news corpus in Turkish, Kurdish and English; Source](http://opus.nlpl.eu/Bianet.php) dataset | https://github.com/huggingface/datasets/pull/1335 | [
"merging since the Ci is fixed on master"
] | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1335",
"html_url": "https://github.com/huggingface/datasets/pull/1335",
"diff_url": "https://github.com/huggingface/datasets/pull/1335.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/1335.patch",
"merged_at": "2020-12-14T10:00:55"
} | 1,335 | true |
Add QED Amara Dataset | https://github.com/huggingface/datasets/pull/1334 | [] | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1334",
"html_url": "https://github.com/huggingface/datasets/pull/1334",
"diff_url": "https://github.com/huggingface/datasets/pull/1334.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/1334.patch",
"merged_at": "2020-12-10T11:15:57"
} | 1,334 | true | |
Add Tanzil Dataset | https://github.com/huggingface/datasets/pull/1333 | [] | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1333",
"html_url": "https://github.com/huggingface/datasets/pull/1333",
"diff_url": "https://github.com/huggingface/datasets/pull/1333.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/1333.patch",
"merged_at": "2020-12-10T11:14:43"
} | 1,333 | true | |
Add Open Subtitles Dataset | https://github.com/huggingface/datasets/pull/1332 | [] | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1332",
"html_url": "https://github.com/huggingface/datasets/pull/1332",
"diff_url": "https://github.com/huggingface/datasets/pull/1332.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/1332.patch",
"merged_at": "2020-12-10T11:13:18"
} | 1,332 | true | |
First version of the new dataset hausa_voa_topics | Contains loading script as well as dataset card including YAML tags.
| https://github.com/huggingface/datasets/pull/1331 | [] | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1331",
"html_url": "https://github.com/huggingface/datasets/pull/1331",
"diff_url": "https://github.com/huggingface/datasets/pull/1331.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/1331.patch",
"merged_at": "2020-12-10T11:09:53"
} | 1,331 | true |
added un_ga dataset | Hi :hugs:, This is a PR for [United nations general assembly resolutions: A six-language parallel corpus](http://opus.nlpl.eu/UN.php) dataset | https://github.com/huggingface/datasets/pull/1330 | [
"Looks like this PR includes changes about many other files than the ones for un_ga\r\n\r\nCan you create another branch an another PR please ?",
"@lhoestq, Thank you for suggestions. I have made the changes and raised the new PR https://github.com/huggingface/datasets/pull/1569. "
] | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1330",
"html_url": "https://github.com/huggingface/datasets/pull/1330",
"diff_url": "https://github.com/huggingface/datasets/pull/1330.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/1330.patch",
"merged_at": null
} | 1,330 | true |
Add yoruba ner corpus | https://github.com/huggingface/datasets/pull/1329 | [] | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1329",
"html_url": "https://github.com/huggingface/datasets/pull/1329",
"diff_url": "https://github.com/huggingface/datasets/pull/1329.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/1329.patch",
"merged_at": null
} | 1,329 | true | |
Added the NewsPH Raw dataset and corresponding dataset card | This PR adds the original NewsPH dataset which is used to autogenerate the NewsPH-NLI dataset. Reopened a new PR as the previous one had problems.
Paper: https://arxiv.org/abs/2010.11574
Repo: https://github.com/jcblaisecruz02/Filipino-Text-Benchmarks | https://github.com/huggingface/datasets/pull/1328 | [] | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1328",
"html_url": "https://github.com/huggingface/datasets/pull/1328",
"diff_url": "https://github.com/huggingface/datasets/pull/1328.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/1328.patch",
"merged_at": "2020-12-10T11:04:34"
} | 1,328 | true |
Add msr_genomics_kbcomp dataset | https://github.com/huggingface/datasets/pull/1327 | [] | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1327",
"html_url": "https://github.com/huggingface/datasets/pull/1327",
"diff_url": "https://github.com/huggingface/datasets/pull/1327.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/1327.patch",
"merged_at": "2020-12-08T18:18:06"
} | 1,327 | true | |
TEP: Tehran English-Persian parallel corpus | TEP: Tehran English-Persian parallel corpus
more info : http://opus.nlpl.eu/TEP.php | https://github.com/huggingface/datasets/pull/1326 | [] | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1326",
"html_url": "https://github.com/huggingface/datasets/pull/1326",
"diff_url": "https://github.com/huggingface/datasets/pull/1326.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/1326.patch",
"merged_at": "2020-12-10T11:25:17"
} | 1,326 | true |
Add humicroedit dataset | Pull request for adding humicroedit dataset | https://github.com/huggingface/datasets/pull/1325 | [
"Updated the commit with the generated yaml tags",
"merging since the CI is fixed on master"
] | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1325",
"html_url": "https://github.com/huggingface/datasets/pull/1325",
"diff_url": "https://github.com/huggingface/datasets/pull/1325.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/1325.patch",
"merged_at": "2020-12-17T17:59:09"
} | 1,325 | true |
❓ Sharing ElasticSearch indexed dataset | Hi there,
First of all, thank you very much for this amazing library. Datasets have become my preferred data structure for basically everything I am currently doing.
**Question:** I'm working with a dataset and I have an elasticsearch container running at localhost:9200. I added an elasticsearch index and I was wondering
- how can I know where it has been saved?
- how can I share the indexed dataset with others?
I tried to dig into the docs, but could not find anything about that (a usage sketch follows this record).
Thank you very much for your help.
Best,
Pietro
Edit: apologies for the wrong label | https://github.com/huggingface/datasets/issues/1324 | [
"Hello @pietrolesci , I am not sure to understand what you are trying to do here.\r\n\r\nIf you're looking for ways to save a dataset on disk, you can you the `save_to_disk` method:\r\n```python\r\n>>> import datasets\r\n>>> loaded_dataset = datasets.load(\"dataset_name\")\r\n>>> loaded_dataset.save_to_disk(\"/path... | null | 1,324 | false |
Add CC-News dataset of English language articles | Adds the [CC-News](https://commoncrawl.org/2016/10/news-dataset-available/) dataset. It contains 708,241 English-language news articles. Although each article has a language field, these tags are not reliable. I've used the Spacy language detection [pipeline](https://spacy.io/universe/project/spacy-langdetect) to confirm that the article language is indeed English (a sketch follows this record).
The prepared dataset is temporarily hosted on my private Google Storage [bucket](https://storage.googleapis.com/hf_datasets/cc_news.tar.gz). We can move it to HF storage and update this PR before merging. | https://github.com/huggingface/datasets/pull/1323 | [
"@vblagoje nice work, please add the README.md file and it would be ready",
"@lhoestq @tanmoyio @yjernite please have a look at the dataset card. Don't forget that the dataset is still hosted on my private gs bucket and should eventually be moved to the HF bucket",
"I will move the files soon and ping you when ... | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1323",
"html_url": "https://github.com/huggingface/datasets/pull/1323",
"diff_url": "https://github.com/huggingface/datasets/pull/1323.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/1323.patch",
"merged_at": "2021-02-01T16:55:49"
} | 1,323 | true |
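A minimal sketch of the language check described above, using the linked spacy-langdetect extension (spaCy 2.x-style pipeline API; exact usage may differ across versions):

```python
import spacy
from spacy_langdetect import LanguageDetector

nlp = spacy.load("en_core_web_sm")
nlp.add_pipe(LanguageDetector(), name="language_detector", last=True)

doc = nlp("This is an English news article.")
print(doc._.language)  # e.g. {'language': 'en', 'score': 0.99...}
```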
add indonlu benchmark datasets | The IndoNLU benchmark is a collection of resources for training, evaluating, and analyzing natural language understanding systems for the Indonesian language. There are 12 datasets in IndoNLU. | https://github.com/huggingface/datasets/pull/1322 | [] | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1322",
"html_url": "https://github.com/huggingface/datasets/pull/1322",
"diff_url": "https://github.com/huggingface/datasets/pull/1322.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/1322.patch",
"merged_at": null
} | 1,322 | true |
added dutch_social | The Dutch social media tweets dataset, which has a total of more than 210k tweets in the Dutch language. These tweets have been machine-annotated with sentiment scores (`label` feature) and `industry` and `hisco_codes`.
It can be used for sentiment analysis, multi-label classification and entity tagging | https://github.com/huggingface/datasets/pull/1321 | [
"@lhoestq \r\nUpdated the `dummy_data.zip `(<10kb)I had to reduce it to just a few samples. \r\nTrain-Test-Dev (20-5-5 samples) \r\n\r\nBut the push also added changes from other PRs (probably because of a rebase!) So the files changed tab shows 466 files were changed! \r\n",
"Thanks ! The dummy data are all go... | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1321",
"html_url": "https://github.com/huggingface/datasets/pull/1321",
"diff_url": "https://github.com/huggingface/datasets/pull/1321.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/1321.patch",
"merged_at": "2020-12-16T10:14:17"
} | 1,321 | true |
Added the WikiText-TL39 dataset and corresponding card | This PR adds the WikiText-TL-39 Filipino Language Modeling dataset. Opened a new pull request since there were problems with the earlier one.
Paper: https://arxiv.org/abs/1907.00409
Repo: https://github.com/jcblaisecruz02/Filipino-Text-Benchmarks | https://github.com/huggingface/datasets/pull/1320 | [] | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1320",
"html_url": "https://github.com/huggingface/datasets/pull/1320",
"diff_url": "https://github.com/huggingface/datasets/pull/1320.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/1320.patch",
"merged_at": "2020-12-10T11:24:52"
} | 1,320 | true |
adding wili-2018 language identification dataset | https://github.com/huggingface/datasets/pull/1319 | [
"@lhoestq Not sure what happened, I just changed the py file but it is showing some TensorFlow error now.",
"You can ignore it.\r\nIt's caused by the Tensorflow update that happened 30min ago. They added breaking changes.\r\nI'm working on a fix on the master branch right now\r\n",
"oh okay, btw I have made the... | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1319",
"html_url": "https://github.com/huggingface/datasets/pull/1319",
"diff_url": "https://github.com/huggingface/datasets/pull/1319.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/1319.patch",
"merged_at": "2020-12-14T21:20:32"
} | 1,319 | true | |
ethos first commit | Ethos passed all the tests except this one:
```
RUN_SLOW=1 pytest tests/test_dataset_common.py::LocalDatasetTest::test_load_dataset_all_configs_<your-dataset-name>
```
with this error:
```
E OSError: Cannot find data file.
E Original error:
E [Errno 2] No such file or directory:
```
| https://github.com/huggingface/datasets/pull/1318 | [
"> Nice thanks !\r\n> \r\n> I left a few comments\r\n> \r\n> Also it looks like this PR includes changes about other files than the ones for ethos\r\n> \r\n> Can you create another branch and another PR please ?\r\n\r\n@lhoestq Should I close this PR? The new one is the: #1453",
"You can create another PR and clo... | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1318",
"html_url": "https://github.com/huggingface/datasets/pull/1318",
"diff_url": "https://github.com/huggingface/datasets/pull/1318.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/1318.patch",
"merged_at": null
} | 1,318 | true |
add 10k German News Article Dataset | https://github.com/huggingface/datasets/pull/1317 | [
"You can just create another branch from master on your fork and create another PR:\r\n\r\nfirst update your master branch\r\n```\r\ngit checkout master\r\ngit fetch upstream\r\ngit rebase upstream/master\r\ngit push\r\n```\r\n\r\nthen create a new branch\r\n```\r\ngit checkout -b my-new-branch-name\r\n```\r\n\r\nT... | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1317",
"html_url": "https://github.com/huggingface/datasets/pull/1317",
"diff_url": "https://github.com/huggingface/datasets/pull/1317.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/1317.patch",
"merged_at": null
} | 1,317 | true | |
Allow GitHub releases as dataset source | # Summary
Providing a GitHub release URL to `DownloadManager.download()` currently throws a `ConnectionError: Couldn't reach [DOWNLOAD_URL]`. This PR fixes this problem by adding an exception for GitHub releases in `datasets.utils.file_utils.get_from_cache()`.
# Reproduce
```
import datasets
url = 'http://github.com/benjaminvdb/DBRD/releases/download/v3.0/DBRD_v3.tgz'
result = datasets.utils.file_utils.get_from_cache(url)
# Returns: ConnectionError: Couldn't reach http://github.com/benjaminvdb/DBRD/releases/download/v3.0/DBRD_v3.tgz
```
# Cause
GitHub releases return an HTTP status 302 (Found), indicating that the request is being redirected (to AWS S3, in this case). `get_from_cache()` checks whether the status is 200 (OK) or whether the URL falls under one of two exceptions (Google Drive or Firebase); otherwise the mentioned error is thrown.
# Solution
Just like the exceptions for Google Drive and Firebase, add a condition for GitHub release URLs that return HTTP status 302. If this is the case, continue normally (a minimal sketch follows this entry). | https://github.com/huggingface/datasets/pull/1316 | [] | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1316",
"html_url": "https://github.com/huggingface/datasets/pull/1316",
"diff_url": "https://github.com/huggingface/datasets/pull/1316.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/1316.patch",
"merged_at": "2020-12-10T10:12:00"
} | 1,316 | true |
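A minimal sketch of the kind of check described in the entry above; the helper name and the exact URL condition are illustrative assumptions, and the actual logic lives in `datasets.utils.file_utils.get_from_cache()`:
```
import requests

def is_github_release_redirect(url: str, response: requests.Response) -> bool:
    # GitHub release assets answer with a 302 (Found) redirect to cloud storage,
    # so that status is treated as success, like the Google Drive and Firebase cases.
    return (
        response.status_code == 302
        and "github.com" in url
        and "/releases/download/" in url
    )

url = "http://github.com/benjaminvdb/DBRD/releases/download/v3.0/DBRD_v3.tgz"
response = requests.head(url, allow_redirects=False)
if response.status_code == 200 or is_github_release_redirect(url, response):
    print("URL considered reachable; proceed instead of raising ConnectionError")
```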
add yelp_review_full | This corresponds to the Yelp-5 requested in https://github.com/huggingface/datasets/issues/353
I included the dataset card. | https://github.com/huggingface/datasets/pull/1315 | [] | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1315",
"html_url": "https://github.com/huggingface/datasets/pull/1315",
"diff_url": "https://github.com/huggingface/datasets/pull/1315.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/1315.patch",
"merged_at": "2020-12-09T15:55:48"
} | 1,315 | true |
Add snips built in intents 2016 12 | This PR proposes to add the Snips.ai built-in intents dataset. The first configuration added covers the intent labels only, but the dataset includes entity slots that may in the future be added as alternate configurations. | https://github.com/huggingface/datasets/pull/1314 | [
"It is not clear how to automatically add the dummy data if the source data is a more complex json format. Should I manually take a fraction of the source data and include it as dummy data?\r\n",
"Added a fraction of the real data as dummy data.",
"merging since the CI is fixed on master"
] | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1314",
"html_url": "https://github.com/huggingface/datasets/pull/1314",
"diff_url": "https://github.com/huggingface/datasets/pull/1314.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/1314.patch",
"merged_at": "2020-12-14T09:59:06"
} | 1,314 | true |
Add HateSpeech Corpus for Polish | This PR adds a HateSpeech Corpus for Polish, containing offensive language examples.
- **Homepage:** http://zil.ipipan.waw.pl/HateSpeech
- **Paper:** http://www.qualitativesociologyreview.org/PL/Volume38/PSJ_13_2_Troszynski_Wawer.pdf | https://github.com/huggingface/datasets/pull/1313 | [
"@lhoestq Do you think using the ClassLabel is correct if we don't know the meaning of them?",
"Once we find out the meanings we can still add them to the dataset card",
"Feel free to ping me when the PR is ready for the final review"
] | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1313",
"html_url": "https://github.com/huggingface/datasets/pull/1313",
"diff_url": "https://github.com/huggingface/datasets/pull/1313.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/1313.patch",
"merged_at": "2020-12-16T16:48:45"
} | 1,313 | true |
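Regarding the ClassLabel question in the comments above, a minimal sketch; the label names below are placeholders to be replaced once the class meanings are documented:
```
from datasets import ClassLabel, Features, Value

# Integer labels can be declared with placeholder names; the names can be
# updated later (e.g., in the dataset card) without changing the stored data.
features = Features(
    {
        "text": Value("string"),
        "label": ClassLabel(names=["class_0", "class_1"]),
    }
)
print(features["label"].int2str(1))  # prints "class_1"
```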
Jigsaw toxicity pred | Requires manually downloading data from Kaggle (a loading sketch follows this entry). | https://github.com/huggingface/datasets/pull/1312 | [] | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1312",
"html_url": "https://github.com/huggingface/datasets/pull/1312",
"diff_url": "https://github.com/huggingface/datasets/pull/1312.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/1312.patch",
"merged_at": null
} | 1,312 | true |
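A sketch of what loading a manual-download dataset like this typically looks like once merged; the dataset name and local path are assumptions here:
```
from datasets import load_dataset

# The files must first be downloaded manually from Kaggle; the directory
# that contains them is then passed to load_dataset via data_dir.
dataset = load_dataset("jigsaw_toxicity_pred", data_dir="/path/to/kaggle/files")
print(dataset["train"][0])
```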
Add OPUS Bible Corpus (102 Languages) | https://github.com/huggingface/datasets/pull/1311 | [
"@lhoestq done"
] | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1311",
"html_url": "https://github.com/huggingface/datasets/pull/1311",
"diff_url": "https://github.com/huggingface/datasets/pull/1311.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/1311.patch",
"merged_at": "2020-12-09T15:30:56"
} | 1,311 | true | |
Add OffensEval-TR 2020 Dataset | This PR adds the OffensEval-TR 2020 dataset, which is a Turkish offensive language corpus by me and @basakbuluz. The corpus consists of randomly sampled tweets annotated in a similar way to [OffensEval](https://sites.google.com/site/offensevalsharedtask/) and [GermEval](https://projects.fzai.h-da.de/iggsa/).
- **Homepage:** [offensive-turkish](https://coltekin.github.io/offensive-turkish/)
- **Paper:** [A Corpus of Turkish Offensive Language on Social Media](https://coltekin.github.io/offensive-turkish/troff.pdf)
- **Point of Contact:** [Çağrı Çöltekin](ccoltekin@sfs.uni-tuebingen.de) | https://github.com/huggingface/datasets/pull/1310 | [
"@lhoestq, can you please review this PR? ",
"> Awesome thank you !\r\n\r\nThanks for the small fixes @lhoestq ",
"@coltekin, we have added the data set that you created an article that says \"Turkish Attack Language Community in Social Media\", HuggingFace dataset update sprint for you. We added Sprint quickly... | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1310",
"html_url": "https://github.com/huggingface/datasets/pull/1310",
"diff_url": "https://github.com/huggingface/datasets/pull/1310.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/1310.patch",
"merged_at": "2020-12-09T16:02:06"
} | 1,310 | true |
Add SAMSum Corpus dataset | Did not spend much time writing the README; might update later.
Copied the description and some content from tensorflow_datasets:
https://github.com/tensorflow/datasets/blob/master/tensorflow_datasets/summarization/samsum.py | https://github.com/huggingface/datasets/pull/1309 | [
"also to fix the check_code_quality CI you have to remove the imports of the unused `csv` and `os`",
"@lhoestq Thanks for the review! I have done what you asked, README is also updated. 🤗 \r\nThe CI fails because of the added dependency. I have never used circleCI before, so I am curious how will you solve that?... | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1309",
"html_url": "https://github.com/huggingface/datasets/pull/1309",
"diff_url": "https://github.com/huggingface/datasets/pull/1309.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/1309.patch",
"merged_at": "2020-12-14T10:20:55"
} | 1,309 | true |
Add Wiki Lingua Dataset | Hello,
This is my first PR.
I have added the Wiki Lingua dataset along with a dataset card to the best of my knowledge.
There was one hiccup, though: I was unable to create dummy data because the data is in pkl format (a possible workaround is sketched after this entry).
From the documentation, I see that:
```At the moment it supports data files in the following format: txt, csv, tsv, jsonl, json, xml```
| https://github.com/huggingface/datasets/pull/1308 | [
"I am done adding the dataset. Requesting to review and advise.",
"looks like this PR has changes about many other files than the ones for WIki Lingua \r\n\r\nCan you create another branch and another PR please ?",
"Any reason to have english as the default config over the other languages ?",
"> looks like th... | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1308",
"html_url": "https://github.com/huggingface/datasets/pull/1308",
"diff_url": "https://github.com/huggingface/datasets/pull/1308.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/1308.patch",
"merged_at": null
} | 1,308 | true |
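A possible workaround for the pkl hiccup mentioned in the entry above, sketched under the assumption that the pickle holds a list of records (file names are illustrative): convert a small slice to one of the supported formats and use it as dummy data.
```
import json
import pickle

# Load the original pickled data and keep only a handful of examples.
with open("wikilingua_data.pkl", "rb") as f:  # illustrative file name
    records = pickle.load(f)

# Write the slice as JSON, one of the formats the dummy-data tool supports.
with open("dummy_data.json", "w", encoding="utf-8") as f:
    json.dump(records[:5], f, ensure_ascii=False, indent=2)
```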