| column | dtype | values |
|---|---|---|
| url | string | lengths 58–61 |
| repository_url | string | 1 value |
| labels_url | string | lengths 72–75 |
| comments_url | string | lengths 67–70 |
| events_url | string | lengths 65–68 |
| html_url | string | lengths 46–51 |
| id | int64 | 599M–1.15B |
| node_id | string | lengths 18–32 |
| number | int64 | 1–3.78k |
| title | string | lengths 1–276 |
| user | dict | |
| labels | list | lengths 0–3 |
| state | string | 2 values |
| locked | bool | 1 class |
| assignee | dict | |
| assignees | list | lengths 0–4 |
| milestone | dict | |
| comments | list | lengths 0–30 |
| created_at | int64 | 1,587B–1,646B |
| updated_at | int64 | 1,587B–1,646B |
| closed_at | int64 | 1,587B–1,646B |
| author_association | string | 3 values |
| active_lock_reason | null | |
| body | string | lengths 0–228k |
| reactions | dict | |
| timeline_url | string | lengths 67–70 |
| performed_via_github_app | null | |
| draft | bool | 2 classes |
| pull_request | dict | |
| is_pull_request | bool | 2 classes |
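The three `*_at` columns are epoch timestamps in milliseconds, not ISO strings. A minimal sketch for converting them to datetimes (the helper name is my own; the sample value is the `created_at` of the first row below):

```python
from datetime import datetime, timezone

def ms_to_datetime(ms: int) -> datetime:
    """Convert an epoch timestamp in milliseconds to an aware UTC datetime."""
    return datetime.fromtimestamp(ms / 1000, tz=timezone.utc)

# created_at of issue #237 below
print(ms_to_datetime(1_591_311_921_000).isoformat())  # 2020-06-04T23:05:21+00:00
```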

url: https://api.github.com/repos/huggingface/datasets/issues/237
repository_url: https://api.github.com/repos/huggingface/datasets
labels_url: https://api.github.com/repos/huggingface/datasets/issues/237/labels{/name}
comments_url: https://api.github.com/repos/huggingface/datasets/issues/237/comments
events_url: https://api.github.com/repos/huggingface/datasets/issues/237/events
html_url: https://github.com/huggingface/datasets/issues/237
id: 631199940
node_id: MDU6SXNzdWU2MzExOTk5NDA=
number: 237
title: Can't download MultiNLI
user: { "login": "patpizio", "id": 15801338, "node_id": "MDQ6VXNlcjE1ODAxMzM4", "avatar_url": "https://avatars.githubusercontent.com/u/15801338?v=4", "gravatar_id": "", "url": "https://api.github.com/users/patpizio", "html_url": "https://github.com/patpizio", "followers_url": "https://api.github.com/users/pat...
labels: []
state: closed
locked: false
assignee: { "login": null, "id": null, "node_id": null, "avatar_url": null, "gravatar_id": null, "url": null, "html_url": null, "followers_url": null, "following_url": null, "gists_url": null, "starred_url": null, "subscriptions_url": null, "organizations_url": null, "repos_url": null, "events_url":...
assignees: []
milestone: null
comments: [ "You should use `load_dataset('glue', 'mnli')`", "Thanks! I thought I had to use the same code displayed in the live viewer:\r\n```python\r\n!pip install nlp\r\nfrom nlp import load_dataset\r\ndataset = load_dataset('multi_nli', 'plain_text')\r\n```\r\nYour suggestion works, even if then I got a different issue (...
created_at: 1591311921000
updated_at: 1591440694000
closed_at: 1591440694000
author_association: CONTRIBUTOR
active_lock_reason: null
body: When I try to download MultiNLI with ```python dataset = load_dataset('multi_nli') ``` I get this long error: ```python --------------------------------------------------------------------------- OSError Traceback (most recent call last) <ipython-input-13-3b11f6be4cb9> in <m...
reactions: { "url": "https://api.github.com/repos/huggingface/datasets/issues/237/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
timeline_url: https://api.github.com/repos/huggingface/datasets/issues/237/timeline
performed_via_github_app: null
draft: null
pull_request: { "url": null, "html_url": null, "diff_url": null, "patch_url": null, "merged_at": null }
is_pull_request: true
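The `node_id` values in these rows are GitHub's legacy GraphQL IDs: base64 of the object type plus its numeric database `id`. A small decoding sketch (the function name is my own):

```python
import base64

def decode_node_id(node_id: str) -> str:
    """Decode a legacy GitHub GraphQL node ID into its readable form."""
    return base64.b64decode(node_id).decode("utf-8")

# node_id of issue #237 above
print(decode_node_id("MDU6SXNzdWU2MzExOTk5NDA="))  # 05:Issue631199940
```

Note how the decoded suffix matches the row's `id` column (631199940).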

url: https://api.github.com/repos/huggingface/datasets/issues/236
repository_url: https://api.github.com/repos/huggingface/datasets
labels_url: https://api.github.com/repos/huggingface/datasets/issues/236/labels{/name}
comments_url: https://api.github.com/repos/huggingface/datasets/issues/236/comments
events_url: https://api.github.com/repos/huggingface/datasets/issues/236/events
html_url: https://github.com/huggingface/datasets/pull/236
id: 631099875
node_id: MDExOlB1bGxSZXF1ZXN0NDI4MDUwNzI4
number: 236
title: CompGuessWhat?! dataset
user: { "login": "aleSuglia", "id": 1479733, "node_id": "MDQ6VXNlcjE0Nzk3MzM=", "avatar_url": "https://avatars.githubusercontent.com/u/1479733?v=4", "gravatar_id": "", "url": "https://api.github.com/users/aleSuglia", "html_url": "https://github.com/aleSuglia", "followers_url": "https://api.github.com/users/al...
labels: []
state: closed
locked: false
assignee: { "login": null, "id": null, "node_id": null, "avatar_url": null, "gravatar_id": null, "url": null, "html_url": null, "followers_url": null, "following_url": null, "gists_url": null, "starred_url": null, "subscriptions_url": null, "organizations_url": null, "repos_url": null, "events_url":...
assignees: []
milestone: null
comments: [ "Hi @aleSuglia, thanks for this great PR. Indeed you can have both datasets in one file. You need to add a config class which will allows you to specify the different subdataset names and then you will be able to load them as follow.\r\nnlp.load_dataset(\"compguesswhat\", \"compguesswhat-gameplay\") \r\nnlp.load_d...
created_at: 1591299950000
updated_at: 1591868622000
closed_at: 1591861521000
author_association: CONTRIBUTOR
active_lock_reason: null
body: Hello, Thanks for the amazing library that you put together. I'm Alessandro Suglia, the first author of CompGuessWhat?!, a recently released dataset for grounded language learning accepted to ACL 2020 ([https://compguesswhat.github.io](https://compguesswhat.github.io)). This pull-request adds the CompGuessWhat?! ...
reactions: { "url": "https://api.github.com/repos/huggingface/datasets/issues/236/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
timeline_url: https://api.github.com/repos/huggingface/datasets/issues/236/timeline
performed_via_github_app: null
draft: false
pull_request: { "url": "https://api.github.com/repos/huggingface/datasets/pulls/236", "html_url": "https://github.com/huggingface/datasets/pull/236", "diff_url": "https://github.com/huggingface/datasets/pull/236.diff", "patch_url": "https://github.com/huggingface/datasets/pull/236.patch", "merged_at": 1591861521000 }
is_pull_request: true

url: https://api.github.com/repos/huggingface/datasets/issues/235
repository_url: https://api.github.com/repos/huggingface/datasets
labels_url: https://api.github.com/repos/huggingface/datasets/issues/235/labels{/name}
comments_url: https://api.github.com/repos/huggingface/datasets/issues/235/comments
events_url: https://api.github.com/repos/huggingface/datasets/issues/235/events
html_url: https://github.com/huggingface/datasets/pull/235
id: 630952297
node_id: MDExOlB1bGxSZXF1ZXN0NDI3OTM1MjQ0
number: 235
title: Add experimental datasets
user: { "login": "yjernite", "id": 10469459, "node_id": "MDQ6VXNlcjEwNDY5NDU5", "avatar_url": "https://avatars.githubusercontent.com/u/10469459?v=4", "gravatar_id": "", "url": "https://api.github.com/users/yjernite", "html_url": "https://github.com/yjernite", "followers_url": "https://api.github.com/users/yje...
labels: []
state: closed
locked: false
assignee: { "login": null, "id": null, "node_id": null, "avatar_url": null, "gravatar_id": null, "url": null, "html_url": null, "followers_url": null, "following_url": null, "gists_url": null, "starred_url": null, "subscriptions_url": null, "organizations_url": null, "repos_url": null, "events_url":...
assignees: []
milestone: null
comments: [ "I think it would be nicer to not create a new folder `datasets_experimental` , but just put your datasets also into the folder `datasets` for the following reasons:\r\n\r\n- From my point of view, the datasets are not very different from the other datasets (assuming that we soon have C4, and the beam datasets) so ...
created_at: 1591286096000
updated_at: 1591976335000
closed_at: 1591976335000
author_association: MEMBER
active_lock_reason: null
body: ## Adding an *experimental datasets* folder After using the 🤗nlp library for some time, I find that while it makes it super easy to create new memory-mapped datasets with lots of cool utilities, a lot of what I want to do doesn't work well with the current `MockDownloader` based testing paradigm, making it hard to ...
reactions: { "url": "https://api.github.com/repos/huggingface/datasets/issues/235/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
timeline_url: https://api.github.com/repos/huggingface/datasets/issues/235/timeline
performed_via_github_app: null
draft: false
pull_request: { "url": "https://api.github.com/repos/huggingface/datasets/pulls/235", "html_url": "https://github.com/huggingface/datasets/pull/235", "diff_url": "https://github.com/huggingface/datasets/pull/235.diff", "patch_url": "https://github.com/huggingface/datasets/pull/235.patch", "merged_at": 1591976335000 }
is_pull_request: true

url: https://api.github.com/repos/huggingface/datasets/issues/234
repository_url: https://api.github.com/repos/huggingface/datasets
labels_url: https://api.github.com/repos/huggingface/datasets/issues/234/labels{/name}
comments_url: https://api.github.com/repos/huggingface/datasets/issues/234/comments
events_url: https://api.github.com/repos/huggingface/datasets/issues/234/events
html_url: https://github.com/huggingface/datasets/issues/234
id: 630534427
node_id: MDU6SXNzdWU2MzA1MzQ0Mjc=
number: 234
title: Huggingface NLP, Uploading custom dataset
user: { "login": "Nouman97", "id": 42269506, "node_id": "MDQ6VXNlcjQyMjY5NTA2", "avatar_url": "https://avatars.githubusercontent.com/u/42269506?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Nouman97", "html_url": "https://github.com/Nouman97", "followers_url": "https://api.github.com/users/Nou...
labels: []
state: closed
locked: false
assignee: { "login": null, "id": null, "node_id": null, "avatar_url": null, "gravatar_id": null, "url": null, "html_url": null, "followers_url": null, "following_url": null, "gists_url": null, "starred_url": null, "subscriptions_url": null, "organizations_url": null, "repos_url": null, "events_url":...
assignees: []
milestone: null
comments: [ "What do you mean 'custom' ? You may want to elaborate on it when ask a question.\r\n\r\nAnyway, there are two things you may interested\r\n`nlp.Dataset.from_file` and `load_dataset(..., cache_dir=)`", "To load a dataset you need to have a script that defines the format of the examples, the splits and the way to ...
created_at: 1591250346000
updated_at: 1594028006000
closed_at: 1594028006000
author_association: NONE
active_lock_reason: null
body: Hello, Does anyone know how we can call our custom dataset using the nlp.load command? Let's say that I have a dataset based on the same format as that of squad-v1.1, how am I supposed to load it using huggingface nlp. Thank you!
reactions: { "url": "https://api.github.com/repos/huggingface/datasets/issues/234/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
timeline_url: https://api.github.com/repos/huggingface/datasets/issues/234/timeline
performed_via_github_app: null
draft: null
pull_request: { "url": null, "html_url": null, "diff_url": null, "patch_url": null, "merged_at": null }
is_pull_request: true

url: https://api.github.com/repos/huggingface/datasets/issues/233
repository_url: https://api.github.com/repos/huggingface/datasets
labels_url: https://api.github.com/repos/huggingface/datasets/issues/233/labels{/name}
comments_url: https://api.github.com/repos/huggingface/datasets/issues/233/comments
events_url: https://api.github.com/repos/huggingface/datasets/issues/233/events
html_url: https://github.com/huggingface/datasets/issues/233
id: 630432132
node_id: MDU6SXNzdWU2MzA0MzIxMzI=
number: 233
title: Fail to download c4 english corpus
user: { "login": "donggyukimc", "id": 16605764, "node_id": "MDQ6VXNlcjE2NjA1NzY0", "avatar_url": "https://avatars.githubusercontent.com/u/16605764?v=4", "gravatar_id": "", "url": "https://api.github.com/users/donggyukimc", "html_url": "https://github.com/donggyukimc", "followers_url": "https://api.github.com/...
labels: []
state: closed
locked: false
assignee: { "login": null, "id": null, "node_id": null, "avatar_url": null, "gravatar_id": null, "url": null, "html_url": null, "followers_url": null, "following_url": null, "gists_url": null, "starred_url": null, "subscriptions_url": null, "organizations_url": null, "repos_url": null, "events_url":...
assignees: []
milestone: null
comments: [ "Hello ! Thanks for noticing this bug, let me fix that.\r\n\r\nAlso for information, as specified in the changelog of the latest release, C4 currently needs to have a runtime for apache beam to work on. Apache beam is used to process this very big dataset and it can work on dataflow, spark, flink, apex, etc. You ca...
created_at: 1591232798000
updated_at: 1610090252000
closed_at: 1591607819000
author_association: NONE
active_lock_reason: null
body: i run following code to download c4 English corpus. ``` dataset = nlp.load_dataset('c4', 'en', beam_runner='DirectRunner' , data_dir='/mypath') ``` and i met failure as follows ``` Downloading and preparing dataset c4/en (download: Unknown size, generated: Unknown size, total: Unknown size) to /home/adam/....
reactions: { "url": "https://api.github.com/repos/huggingface/datasets/issues/233/reactions", "total_count": 3, "+1": 3, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
timeline_url: https://api.github.com/repos/huggingface/datasets/issues/233/timeline
performed_via_github_app: null
draft: null
pull_request: { "url": null, "html_url": null, "diff_url": null, "patch_url": null, "merged_at": null }
is_pull_request: true
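In the `reactions` dicts, `total_count` is the sum of the individual per-emoji counts. A quick consistency check, with the reactions dict of issue #233 retyped here by hand:

```python
# reactions of issue #233, retyped as a Python dict
reactions = {
    "url": "https://api.github.com/repos/huggingface/datasets/issues/233/reactions",
    "total_count": 3, "+1": 3, "-1": 0, "laugh": 0, "hooray": 0,
    "confused": 0, "heart": 0, "rocket": 0, "eyes": 0,
}

# Sum every per-emoji count (everything except the url and the total itself)
emoji_total = sum(v for k, v in reactions.items() if k not in ("url", "total_count"))
print(emoji_total == reactions["total_count"])  # True
```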

url: https://api.github.com/repos/huggingface/datasets/issues/232
repository_url: https://api.github.com/repos/huggingface/datasets
labels_url: https://api.github.com/repos/huggingface/datasets/issues/232/labels{/name}
comments_url: https://api.github.com/repos/huggingface/datasets/issues/232/comments
events_url: https://api.github.com/repos/huggingface/datasets/issues/232/events
html_url: https://github.com/huggingface/datasets/pull/232
id: 630029568
node_id: MDExOlB1bGxSZXF1ZXN0NDI3MjI5NDcy
number: 232
title: Nlp cli fix endpoints
user: { "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoest...
labels: []
state: closed
locked: false
assignee: { "login": null, "id": null, "node_id": null, "avatar_url": null, "gravatar_id": null, "url": null, "html_url": null, "followers_url": null, "following_url": null, "gists_url": null, "starred_url": null, "subscriptions_url": null, "organizations_url": null, "repos_url": null, "events_url":...
assignees: []
milestone: null
comments: [ "LGTM 👍 " ]
created_at: 1591193439000
updated_at: 1591606978000
closed_at: 1591606977000
author_association: MEMBER
active_lock_reason: null
body: With this PR users will be able to upload their own datasets and metrics. As mentioned in #181, I had to use the new endpoints and revert the use of dataclasses (just in case we have changes in the API in the future). We now distinguish commands for datasets and commands for metrics: ```bash nlp-cli upload_data...
reactions: { "url": "https://api.github.com/repos/huggingface/datasets/issues/232/reactions", "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
timeline_url: https://api.github.com/repos/huggingface/datasets/issues/232/timeline
performed_via_github_app: null
draft: false
pull_request: { "url": "https://api.github.com/repos/huggingface/datasets/pulls/232", "html_url": "https://github.com/huggingface/datasets/pull/232", "diff_url": "https://github.com/huggingface/datasets/pull/232.diff", "patch_url": "https://github.com/huggingface/datasets/pull/232.patch", "merged_at": 1591606977000 }
is_pull_request: true

url: https://api.github.com/repos/huggingface/datasets/issues/231
repository_url: https://api.github.com/repos/huggingface/datasets
labels_url: https://api.github.com/repos/huggingface/datasets/issues/231/labels{/name}
comments_url: https://api.github.com/repos/huggingface/datasets/issues/231/comments
events_url: https://api.github.com/repos/huggingface/datasets/issues/231/events
html_url: https://github.com/huggingface/datasets/pull/231
id: 629988694
node_id: MDExOlB1bGxSZXF1ZXN0NDI3MTk3MTcz
number: 231
title: Add .download to MockDownloadManager
user: { "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoest...
labels: []
state: closed
locked: false
assignee: { "login": null, "id": null, "node_id": null, "avatar_url": null, "gravatar_id": null, "url": null, "html_url": null, "followers_url": null, "following_url": null, "gists_url": null, "starred_url": null, "subscriptions_url": null, "organizations_url": null, "repos_url": null, "events_url":...
assignees: []
milestone: null
comments: []
created_at: 1591190400000
updated_at: 1591194356000
closed_at: 1591194355000
author_association: MEMBER
active_lock_reason: null
body: One method from the DownloadManager was missing and some users couldn't run the tests because of that. @yjernite
reactions: { "url": "https://api.github.com/repos/huggingface/datasets/issues/231/reactions", "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
timeline_url: https://api.github.com/repos/huggingface/datasets/issues/231/timeline
performed_via_github_app: null
draft: false
pull_request: { "url": "https://api.github.com/repos/huggingface/datasets/pulls/231", "html_url": "https://github.com/huggingface/datasets/pull/231", "diff_url": "https://github.com/huggingface/datasets/pull/231.diff", "patch_url": "https://github.com/huggingface/datasets/pull/231.patch", "merged_at": 1591194354000 }
is_pull_request: true

url: https://api.github.com/repos/huggingface/datasets/issues/230
repository_url: https://api.github.com/repos/huggingface/datasets
labels_url: https://api.github.com/repos/huggingface/datasets/issues/230/labels{/name}
comments_url: https://api.github.com/repos/huggingface/datasets/issues/230/comments
events_url: https://api.github.com/repos/huggingface/datasets/issues/230/events
html_url: https://github.com/huggingface/datasets/pull/230
id: 629983684
node_id: MDExOlB1bGxSZXF1ZXN0NDI3MTkzMTQ0
number: 230
title: Don't force to install apache beam for wikipedia dataset
user: { "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoest...
labels: []
state: closed
locked: false
assignee: { "login": null, "id": null, "node_id": null, "avatar_url": null, "gravatar_id": null, "url": null, "html_url": null, "followers_url": null, "following_url": null, "gists_url": null, "starred_url": null, "subscriptions_url": null, "organizations_url": null, "repos_url": null, "events_url":...
assignees: []
milestone: null
comments: []
created_at: 1591189987000
updated_at: 1591194849000
closed_at: 1591194847000
author_association: MEMBER
active_lock_reason: null
body: As pointed out in #227, we shouldn't force users to install apache beam if the processed dataset can be downloaded. I moved the imports of some datasets to avoid this problem
reactions: { "url": "https://api.github.com/repos/huggingface/datasets/issues/230/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
timeline_url: https://api.github.com/repos/huggingface/datasets/issues/230/timeline
performed_via_github_app: null
draft: false
pull_request: { "url": "https://api.github.com/repos/huggingface/datasets/pulls/230", "html_url": "https://github.com/huggingface/datasets/pull/230", "diff_url": "https://github.com/huggingface/datasets/pull/230.diff", "patch_url": "https://github.com/huggingface/datasets/pull/230.patch", "merged_at": 1591194847000 }
is_pull_request: true

url: https://api.github.com/repos/huggingface/datasets/issues/229
repository_url: https://api.github.com/repos/huggingface/datasets
labels_url: https://api.github.com/repos/huggingface/datasets/issues/229/labels{/name}
comments_url: https://api.github.com/repos/huggingface/datasets/issues/229/comments
events_url: https://api.github.com/repos/huggingface/datasets/issues/229/events
html_url: https://github.com/huggingface/datasets/pull/229
id: 629956490
node_id: MDExOlB1bGxSZXF1ZXN0NDI3MTcxMzc5
number: 229
title: Rename dataset_infos.json to dataset_info.json
user: { "login": "aswin-giridhar", "id": 11817160, "node_id": "MDQ6VXNlcjExODE3MTYw", "avatar_url": "https://avatars.githubusercontent.com/u/11817160?v=4", "gravatar_id": "", "url": "https://api.github.com/users/aswin-giridhar", "html_url": "https://github.com/aswin-giridhar", "followers_url": "https://api.gi...
labels: []
state: closed
locked: false
assignee: { "login": null, "id": null, "node_id": null, "avatar_url": null, "gravatar_id": null, "url": null, "html_url": null, "followers_url": null, "following_url": null, "gists_url": null, "starred_url": null, "subscriptions_url": null, "organizations_url": null, "repos_url": null, "events_url":...
assignees: []
milestone: null
comments: [ "\r\nThis was actually the right name. `dataset_infos.json` is used to have the infos of all the dataset configurations.\r\n\r\nOn the other hand `dataset_info.json` (without 's') is a cache file with the info of one specific configuration.\r\n\r\nTo fix #228, we probably just have to clear and reload the nlp-viewe...
created_at: 1591187504000
updated_at: 1591188774000
closed_at: 1591188513000
author_association: NONE
active_lock_reason: null
body: As the file required for the viewing in the live nlp viewer is named as dataset_info.json
reactions: { "url": "https://api.github.com/repos/huggingface/datasets/issues/229/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
timeline_url: https://api.github.com/repos/huggingface/datasets/issues/229/timeline
performed_via_github_app: null
draft: false
pull_request: { "url": "https://api.github.com/repos/huggingface/datasets/pulls/229", "html_url": "https://github.com/huggingface/datasets/pull/229", "diff_url": "https://github.com/huggingface/datasets/pull/229.diff", "patch_url": "https://github.com/huggingface/datasets/pull/229.patch", "merged_at": null }
is_pull_request: true

url: https://api.github.com/repos/huggingface/datasets/issues/228
repository_url: https://api.github.com/repos/huggingface/datasets
labels_url: https://api.github.com/repos/huggingface/datasets/issues/228/labels{/name}
comments_url: https://api.github.com/repos/huggingface/datasets/issues/228/comments
events_url: https://api.github.com/repos/huggingface/datasets/issues/228/events
html_url: https://github.com/huggingface/datasets/issues/228
id: 629952402
node_id: MDU6SXNzdWU2Mjk5NTI0MDI=
number: 228
title: Not able to access the XNLI dataset
user: { "login": "aswin-giridhar", "id": 11817160, "node_id": "MDQ6VXNlcjExODE3MTYw", "avatar_url": "https://avatars.githubusercontent.com/u/11817160?v=4", "gravatar_id": "", "url": "https://api.github.com/users/aswin-giridhar", "html_url": "https://github.com/aswin-giridhar", "followers_url": "https://api.gi...
labels: [ { "id": 2107841032, "node_id": "MDU6TGFiZWwyMTA3ODQxMDMy", "url": "https://api.github.com/repos/huggingface/datasets/labels/nlp-viewer", "name": "nlp-viewer", "color": "94203D", "default": false, "description": "" } ]
state: closed
locked: false
assignee: { "login": "srush", "id": 35882, "node_id": "MDQ6VXNlcjM1ODgy", "avatar_url": "https://avatars.githubusercontent.com/u/35882?v=4", "gravatar_id": "", "url": "https://api.github.com/users/srush", "html_url": "https://github.com/srush", "followers_url": "https://api.github.com/users/srush/followers", "f...
assignees: [ { "login": "srush", "id": 35882, "node_id": "MDQ6VXNlcjM1ODgy", "avatar_url": "https://avatars.githubusercontent.com/u/35882?v=4", "gravatar_id": "", "url": "https://api.github.com/users/srush", "html_url": "https://github.com/srush", "followers_url": "https://api.github.com/users/sr...
milestone: null
comments: [ "Added pull request to change the name of the file from dataset_infos.json to dataset_info.json", "Thanks for reporting this bug !\r\nAs it seems to be just a cache problem, I closed your PR.\r\nI think we might just need to clear and reload the `xnli` cache @srush ? ", "Update: The dataset_info.json error is g...
created_at: 1591187114000
updated_at: 1595007862000
closed_at: 1595007862000
author_association: NONE
active_lock_reason: null
body: When I try to access the XNLI dataset, I get the following error. The option of plain_text get selected automatically and then I get the following error. ``` FileNotFoundError: [Errno 2] No such file or directory: '/home/sasha/.cache/huggingface/datasets/xnli/plain_text/1.0.0/dataset_info.json' Traceback: File "/...
reactions: { "url": "https://api.github.com/repos/huggingface/datasets/issues/228/reactions", "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
timeline_url: https://api.github.com/repos/huggingface/datasets/issues/228/timeline
performed_via_github_app: null
draft: null
pull_request: { "url": null, "html_url": null, "diff_url": null, "patch_url": null, "merged_at": null }
is_pull_request: true

url: https://api.github.com/repos/huggingface/datasets/issues/227
repository_url: https://api.github.com/repos/huggingface/datasets
labels_url: https://api.github.com/repos/huggingface/datasets/issues/227/labels{/name}
comments_url: https://api.github.com/repos/huggingface/datasets/issues/227/comments
events_url: https://api.github.com/repos/huggingface/datasets/issues/227/events
html_url: https://github.com/huggingface/datasets/issues/227
id: 629845704
node_id: MDU6SXNzdWU2Mjk4NDU3MDQ=
number: 227
title: Should we still have to force to install apache_beam to download wikipedia ?
user: { "login": "richarddwang", "id": 17963619, "node_id": "MDQ6VXNlcjE3OTYzNjE5", "avatar_url": "https://avatars.githubusercontent.com/u/17963619?v=4", "gravatar_id": "", "url": "https://api.github.com/users/richarddwang", "html_url": "https://github.com/richarddwang", "followers_url": "https://api.github.c...
labels: []
state: closed
locked: false
assignee: { "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoest...
assignees: [ { "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.git...
milestone: null
comments: [ "Thanks for your message 😊 \r\nIndeed users shouldn't have to install those dependencies", "Got it, feel free to close this issue when you think it’s resolved.", "It should be good now :)" ]
created_at: 1591176800000
updated_at: 1591197941000
closed_at: 1591197941000
author_association: CONTRIBUTOR
active_lock_reason: null
body: Hi, first thanks to @lhoestq 's revolutionary work, I successfully downloaded processed wikipedia according to the doc. 😍😍😍 But at the first try, it tell me to install `apache_beam` and `mwparserfromhell`, which I thought wouldn't be used according to #204 , it was kind of confusing me at that time. Maybe we s...
reactions: { "url": "https://api.github.com/repos/huggingface/datasets/issues/227/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
timeline_url: https://api.github.com/repos/huggingface/datasets/issues/227/timeline
performed_via_github_app: null
draft: null
pull_request: { "url": null, "html_url": null, "diff_url": null, "patch_url": null, "merged_at": null }
is_pull_request: true

url: https://api.github.com/repos/huggingface/datasets/issues/226
repository_url: https://api.github.com/repos/huggingface/datasets
labels_url: https://api.github.com/repos/huggingface/datasets/issues/226/labels{/name}
comments_url: https://api.github.com/repos/huggingface/datasets/issues/226/comments
events_url: https://api.github.com/repos/huggingface/datasets/issues/226/events
html_url: https://github.com/huggingface/datasets/pull/226
id: 628344520
node_id: MDExOlB1bGxSZXF1ZXN0NDI1OTA0MjEz
number: 226
title: add BlendedSkillTalk dataset
user: { "login": "mariamabarham", "id": 38249783, "node_id": "MDQ6VXNlcjM4MjQ5Nzgz", "avatar_url": "https://avatars.githubusercontent.com/u/38249783?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mariamabarham", "html_url": "https://github.com/mariamabarham", "followers_url": "https://api.githu...
labels: []
state: closed
locked: false
assignee: { "login": null, "id": null, "node_id": null, "avatar_url": null, "gravatar_id": null, "url": null, "html_url": null, "followers_url": null, "following_url": null, "gists_url": null, "starred_url": null, "subscriptions_url": null, "organizations_url": null, "repos_url": null, "events_url":...
assignees: []
milestone: null
comments: [ "Awesome :D" ]
created_at: 1591008885000
updated_at: 1591195043000
closed_at: 1591195042000
author_association: CONTRIBUTOR
active_lock_reason: null
body: This PR add the BlendedSkillTalk dataset, which is used to fine tune the blenderbot.
reactions: { "url": "https://api.github.com/repos/huggingface/datasets/issues/226/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
timeline_url: https://api.github.com/repos/huggingface/datasets/issues/226/timeline
performed_via_github_app: null
draft: false
pull_request: { "url": "https://api.github.com/repos/huggingface/datasets/pulls/226", "html_url": "https://github.com/huggingface/datasets/pull/226", "diff_url": "https://github.com/huggingface/datasets/pull/226.diff", "patch_url": "https://github.com/huggingface/datasets/pull/226.patch", "merged_at": 1591195042000 }
is_pull_request: true

url: https://api.github.com/repos/huggingface/datasets/issues/225
repository_url: https://api.github.com/repos/huggingface/datasets
labels_url: https://api.github.com/repos/huggingface/datasets/issues/225/labels{/name}
comments_url: https://api.github.com/repos/huggingface/datasets/issues/225/comments
events_url: https://api.github.com/repos/huggingface/datasets/issues/225/events
html_url: https://github.com/huggingface/datasets/issues/225
id: 628083366
node_id: MDU6SXNzdWU2MjgwODMzNjY=
number: 225
title: [ROUGE] Different scores with `files2rouge`
user: { "login": "astariul", "id": 43774355, "node_id": "MDQ6VXNlcjQzNzc0MzU1", "avatar_url": "https://avatars.githubusercontent.com/u/43774355?v=4", "gravatar_id": "", "url": "https://api.github.com/users/astariul", "html_url": "https://github.com/astariul", "followers_url": "https://api.github.com/users/ast...
labels: [ { "id": 2067400959, "node_id": "MDU6TGFiZWwyMDY3NDAwOTU5", "url": "https://api.github.com/repos/huggingface/datasets/labels/Metric%20discussion", "name": "Metric discussion", "color": "d722e8", "default": false, "description": "Discussions on the metrics" } ]
state: closed
locked: false
assignee: { "login": "yjernite", "id": 10469459, "node_id": "MDQ6VXNlcjEwNDY5NDU5", "avatar_url": "https://avatars.githubusercontent.com/u/10469459?v=4", "gravatar_id": "", "url": "https://api.github.com/users/yjernite", "html_url": "https://github.com/yjernite", "followers_url": "https://api.github.com/users/yje...
assignees: [ { "login": "yjernite", "id": 10469459, "node_id": "MDQ6VXNlcjEwNDY5NDU5", "avatar_url": "https://avatars.githubusercontent.com/u/10469459?v=4", "gravatar_id": "", "url": "https://api.github.com/users/yjernite", "html_url": "https://github.com/yjernite", "followers_url": "https://api....
milestone: null
comments: [ "@Colanim unfortunately there are different implementations of the ROUGE metric floating around online which yield different results, and we had to chose one for the package :) We ended up including the one from the google-research repository, which does minimal post-processing before computing the P/R/F scores. If...
created_at: 1590972636000
updated_at: 1591198038000
closed_at: 1591198038000
author_association: NONE
active_lock_reason: null
body: It seems that the ROUGE score of `nlp` is lower than the one of `files2rouge`. Here is a self-contained notebook to reproduce both scores : https://colab.research.google.com/drive/14EyAXValB6UzKY9x4rs_T3pyL7alpw_F?usp=sharing --- `nlp` : (Only mid F-scores) >rouge1 0.33508031962733364 rouge2 0.145743337761...
reactions: { "url": "https://api.github.com/repos/huggingface/datasets/issues/225/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
timeline_url: https://api.github.com/repos/huggingface/datasets/issues/225/timeline
performed_via_github_app: null
draft: null
pull_request: { "url": null, "html_url": null, "diff_url": null, "patch_url": null, "merged_at": null }
is_pull_request: true

url: https://api.github.com/repos/huggingface/datasets/issues/224
repository_url: https://api.github.com/repos/huggingface/datasets
labels_url: https://api.github.com/repos/huggingface/datasets/issues/224/labels{/name}
comments_url: https://api.github.com/repos/huggingface/datasets/issues/224/comments
events_url: https://api.github.com/repos/huggingface/datasets/issues/224/events
html_url: https://github.com/huggingface/datasets/issues/224
id: 627791693
node_id: MDU6SXNzdWU2Mjc3OTE2OTM=
number: 224
title: [Feature Request/Help] BLEURT model -> PyTorch
user: { "login": "adamwlev", "id": 6889910, "node_id": "MDQ6VXNlcjY4ODk5MTA=", "avatar_url": "https://avatars.githubusercontent.com/u/6889910?v=4", "gravatar_id": "", "url": "https://api.github.com/users/adamwlev", "html_url": "https://github.com/adamwlev", "followers_url": "https://api.github.com/users/adamw...
labels: [ { "id": 1935892871, "node_id": "MDU6TGFiZWwxOTM1ODkyODcx", "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement", "name": "enhancement", "color": "a2eeef", "default": true, "description": "New feature or request" } ]
state: closed
locked: false
assignee: { "login": "yjernite", "id": 10469459, "node_id": "MDQ6VXNlcjEwNDY5NDU5", "avatar_url": "https://avatars.githubusercontent.com/u/10469459?v=4", "gravatar_id": "", "url": "https://api.github.com/users/yjernite", "html_url": "https://github.com/yjernite", "followers_url": "https://api.github.com/users/yje...
assignees: [ { "login": "yjernite", "id": 10469459, "node_id": "MDQ6VXNlcjEwNDY5NDU5", "avatar_url": "https://avatars.githubusercontent.com/u/10469459?v=4", "gravatar_id": "", "url": "https://api.github.com/users/yjernite", "html_url": "https://github.com/yjernite", "followers_url": "https://api....
milestone: null
comments: [ "Is there any update on this? \r\n\r\nThanks!", "Hitting this error when using bleurt with PyTorch ...\r\n\r\n```\r\nUnrecognizedFlagError: Unknown command line flag 'f'\r\n```\r\n... and I'm assuming because it was built for TF specifically. Is there a way to use this metric in PyTorch?", "We currently provid...
created_at: 1590863440000
updated_at: 1630594937000
closed_at: 1609754012000
author_association: NONE
active_lock_reason: null
body: Hi, I am interested in porting google research's new BLEURT learned metric to PyTorch (because I wish to do something experimental with language generation and backpropping through BLEURT). I noticed that you guys don't have it yet so I am partly just asking if you plan to add it (@thomwolf said you want to do so on Tw...
reactions: { "url": "https://api.github.com/repos/huggingface/datasets/issues/224/reactions", "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
timeline_url: https://api.github.com/repos/huggingface/datasets/issues/224/timeline
performed_via_github_app: null
draft: null
pull_request: { "url": null, "html_url": null, "diff_url": null, "patch_url": null, "merged_at": null }
is_pull_request: true
https://api.github.com/repos/huggingface/datasets/issues/223
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/223/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/223/comments
https://api.github.com/repos/huggingface/datasets/issues/223/events
https://github.com/huggingface/datasets/issues/223
627,683,386
MDU6SXNzdWU2Mjc2ODMzODY=
223
[Feature request] Add FLUE dataset
{ "login": "lbourdois", "id": 58078086, "node_id": "MDQ6VXNlcjU4MDc4MDg2", "avatar_url": "https://avatars.githubusercontent.com/u/58078086?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lbourdois", "html_url": "https://github.com/lbourdois", "followers_url": "https://api.github.com/users/...
[ { "id": 2067376369, "node_id": "MDU6TGFiZWwyMDY3Mzc2MzY5", "url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20request", "name": "dataset request", "color": "e99695", "default": false, "description": "Requesting to add a new dataset" } ]
closed
false
{ "login": null, "id": null, "node_id": null, "avatar_url": null, "gravatar_id": null, "url": null, "html_url": null, "followers_url": null, "following_url": null, "gists_url": null, "starred_url": null, "subscriptions_url": null, "organizations_url": null, "repos_url": null, "events_url":...
[]
null
[ "Hi @lbourdois, yes please share it with us", "@mariamabarham \r\nI put all the datasets on this drive: https://1drv.ms/u/s!Ao2Rcpiny7RFinDypq7w-LbXcsx9?e=iVsEDh\r\n\r\n\r\nSome information : \r\n• For FLUE, the quote used is\r\n\r\n> @misc{le2019flaubert,\r\n> title={FlauBERT: Unsupervised Language Model Pre...
1,590,828,735,000
1,607,002,773,000
1,607,002,773,000
NONE
null
Hi, I think it would be interesting to add the FLUE dataset for francophones or anyone wishing to work on French. In other requests, I read that you are already working on some datasets, and I was wondering if FLUE was planned. If it is not the case, I can provide each of the cleaned FLUE datasets (in the form...
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/223/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/223/timeline
null
null
{ "url": null, "html_url": null, "diff_url": null, "patch_url": null, "merged_at": null }
true
https://api.github.com/repos/huggingface/datasets/issues/222
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/222/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/222/comments
https://api.github.com/repos/huggingface/datasets/issues/222/events
https://github.com/huggingface/datasets/issues/222
627,586,690
MDU6SXNzdWU2Mjc1ODY2OTA=
222
Colab Notebook breaks when downloading the squad dataset
{ "login": "carlos-aguayo", "id": 338917, "node_id": "MDQ6VXNlcjMzODkxNw==", "avatar_url": "https://avatars.githubusercontent.com/u/338917?v=4", "gravatar_id": "", "url": "https://api.github.com/users/carlos-aguayo", "html_url": "https://github.com/carlos-aguayo", "followers_url": "https://api.github.co...
[]
closed
false
{ "login": null, "id": null, "node_id": null, "avatar_url": null, "gravatar_id": null, "url": null, "html_url": null, "followers_url": null, "following_url": null, "gists_url": null, "starred_url": null, "subscriptions_url": null, "organizations_url": null, "repos_url": null, "events_url":...
[]
null
[ "The notebook forces version 0.1.0. If I use the latest, things work, I'll run the whole notebook and create a PR.\r\n\r\nBut in the meantime, this issue gets fixed by changing:\r\n`!pip install nlp==0.1.0`\r\nto\r\n`!pip install nlp`", "It still breaks very near the end\r\n\r\n![image](https://user-images.github...
1,590,792,959,000
1,591,230,065,000
1,591,230,065,000
NONE
null
When I run the notebook in Colab https://colab.research.google.com/github/huggingface/nlp/blob/master/notebooks/Overview.ipynb breaks when running this cell: ![image](https://user-images.githubusercontent.com/338917/83311709-ffd1b800-a1dd-11ea-8394-3a87df0d7f8b.png)
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/222/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/222/timeline
null
null
{ "url": null, "html_url": null, "diff_url": null, "patch_url": null, "merged_at": null }
true
https://api.github.com/repos/huggingface/datasets/issues/221
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/221/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/221/comments
https://api.github.com/repos/huggingface/datasets/issues/221/events
https://github.com/huggingface/datasets/pull/221
627,300,648
MDExOlB1bGxSZXF1ZXN0NDI1MTI5OTc0
221
Fix tests/test_dataset_common.py
{ "login": "tayciryahmed", "id": 13635495, "node_id": "MDQ6VXNlcjEzNjM1NDk1", "avatar_url": "https://avatars.githubusercontent.com/u/13635495?v=4", "gravatar_id": "", "url": "https://api.github.com/users/tayciryahmed", "html_url": "https://github.com/tayciryahmed", "followers_url": "https://api.github.c...
[]
closed
false
{ "login": null, "id": null, "node_id": null, "avatar_url": null, "gravatar_id": null, "url": null, "html_url": null, "followers_url": null, "following_url": null, "gists_url": null, "starred_url": null, "subscriptions_url": null, "organizations_url": null, "repos_url": null, "events_url":...
[]
null
[ "Thanks ! Good catch :)\r\n\r\nTo fix the CI you can do:\r\n1 - rebase from master\r\n2 - then run `make style` as specified in [CONTRIBUTING.md](https://github.com/huggingface/nlp/blob/master/CONTRIBUTING.md) ?" ]
1,590,761,535,000
1,591,014,042,000
1,590,764,543,000
CONTRIBUTOR
null
When I run the command `RUN_SLOW=1 pytest tests/test_dataset_common.py::LocalDatasetTest::test_load_real_dataset_arcd` while working on #220. I get the error ` unexpected keyword argument "'download_and_prepare_kwargs'"` at the level of `load_dataset`. Indeed, this [function](https://github.com/huggingface/nlp/blob/ma...
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/221/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/221/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/221", "html_url": "https://github.com/huggingface/datasets/pull/221", "diff_url": "https://github.com/huggingface/datasets/pull/221.diff", "patch_url": "https://github.com/huggingface/datasets/pull/221.patch", "merged_at": 1590764543000 }
true
https://api.github.com/repos/huggingface/datasets/issues/220
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/220/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/220/comments
https://api.github.com/repos/huggingface/datasets/issues/220/events
https://github.com/huggingface/datasets/pull/220
627,280,683
MDExOlB1bGxSZXF1ZXN0NDI1MTEzMzEy
220
dataset_arcd
{ "login": "tayciryahmed", "id": 13635495, "node_id": "MDQ6VXNlcjEzNjM1NDk1", "avatar_url": "https://avatars.githubusercontent.com/u/13635495?v=4", "gravatar_id": "", "url": "https://api.github.com/users/tayciryahmed", "html_url": "https://github.com/tayciryahmed", "followers_url": "https://api.github.c...
[]
closed
false
{ "login": null, "id": null, "node_id": null, "avatar_url": null, "gravatar_id": null, "url": null, "html_url": null, "followers_url": null, "following_url": null, "gists_url": null, "starred_url": null, "subscriptions_url": null, "organizations_url": null, "repos_url": null, "events_url":...
[]
null
[ "you can rebase from master to fix the CI error :)", "Awesome !" ]
1,590,760,010,000
1,590,764,320,000
1,590,764,241,000
CONTRIBUTOR
null
Added Arabic Reading Comprehension Dataset (ARCD): https://arxiv.org/abs/1906.05394
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/220/reactions", "total_count": 1, "+1": 0, "-1": 0, "laugh": 0, "hooray": 1, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/220/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/220", "html_url": "https://github.com/huggingface/datasets/pull/220", "diff_url": "https://github.com/huggingface/datasets/pull/220.diff", "patch_url": "https://github.com/huggingface/datasets/pull/220.patch", "merged_at": 1590764241000 }
true
https://api.github.com/repos/huggingface/datasets/issues/219
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/219/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/219/comments
https://api.github.com/repos/huggingface/datasets/issues/219/events
https://github.com/huggingface/datasets/pull/219
627,235,893
MDExOlB1bGxSZXF1ZXN0NDI1MDc2NjQx
219
force mwparserfromhell as third party
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoest...
[]
closed
false
{ "login": null, "id": null, "node_id": null, "avatar_url": null, "gravatar_id": null, "url": null, "html_url": null, "followers_url": null, "following_url": null, "gists_url": null, "starred_url": null, "subscriptions_url": null, "organizations_url": null, "repos_url": null, "events_url":...
[]
null
[]
1,590,755,597,000
1,590,759,013,000
1,590,759,012,000
MEMBER
null
This should fix your env because you had `mwparserfromhell` as a first party for `isort` @patrickvonplaten
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/219/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/219/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/219", "html_url": "https://github.com/huggingface/datasets/pull/219", "diff_url": "https://github.com/huggingface/datasets/pull/219.diff", "patch_url": "https://github.com/huggingface/datasets/pull/219.patch", "merged_at": 1590759012000 }
true
https://api.github.com/repos/huggingface/datasets/issues/218
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/218/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/218/comments
https://api.github.com/repos/huggingface/datasets/issues/218/events
https://github.com/huggingface/datasets/pull/218
627,173,407
MDExOlB1bGxSZXF1ZXN0NDI1MDI2NzEz
218
Add Natual Questions and C4 scripts
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoest...
[]
closed
false
{ "login": null, "id": null, "node_id": null, "avatar_url": null, "gravatar_id": null, "url": null, "html_url": null, "followers_url": null, "following_url": null, "gists_url": null, "starred_url": null, "subscriptions_url": null, "organizations_url": null, "repos_url": null, "events_url":...
[]
null
[]
1,590,748,830,000
1,590,755,461,000
1,590,755,460,000
MEMBER
null
Scripts are ready! However they are not processed nor directly available from gcp yet.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/218/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/218/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/218", "html_url": "https://github.com/huggingface/datasets/pull/218", "diff_url": "https://github.com/huggingface/datasets/pull/218.diff", "patch_url": "https://github.com/huggingface/datasets/pull/218.patch", "merged_at": 1590755460000 }
true
https://api.github.com/repos/huggingface/datasets/issues/217
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/217/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/217/comments
https://api.github.com/repos/huggingface/datasets/issues/217/events
https://github.com/huggingface/datasets/issues/217
627,128,403
MDU6SXNzdWU2MjcxMjg0MDM=
217
Multi-task dataset mixing
{ "login": "ghomasHudson", "id": 13795113, "node_id": "MDQ6VXNlcjEzNzk1MTEz", "avatar_url": "https://avatars.githubusercontent.com/u/13795113?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ghomasHudson", "html_url": "https://github.com/ghomasHudson", "followers_url": "https://api.github.c...
[ { "id": 1935892871, "node_id": "MDU6TGFiZWwxOTM1ODkyODcx", "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement", "name": "enhancement", "color": "a2eeef", "default": true, "description": "New feature or request" }, { "id": 2067400324, "node_id": "MDU6...
open
false
{ "login": null, "id": null, "node_id": null, "avatar_url": null, "gravatar_id": null, "url": null, "html_url": null, "followers_url": null, "following_url": null, "gists_url": null, "starred_url": null, "subscriptions_url": null, "organizations_url": null, "repos_url": null, "events_url":...
[]
null
[ "I like this feature! I think the first question we should decide on is how to convert all datasets into the same format. In T5, the authors decided to format every dataset into a text-to-text format. If the dataset had \"multiple\" inputs like MNLI, the inputs were concatenated. So in MNLI the input:\r\n\r\n> - **...
1,590,744,146,000
1,603,701,993,000
null
CONTRIBUTOR
null
It seems like many of the best performing models on the GLUE benchmark make some use of multitask learning (simultaneous training on multiple tasks). The [T5 paper](https://arxiv.org/pdf/1910.10683.pdf) highlights multiple ways of mixing the tasks together during finetuning: - **Examples-proportional mixing** - sam...
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/217/reactions", "total_count": 9, "+1": 9, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/217/timeline
null
null
{ "url": null, "html_url": null, "diff_url": null, "patch_url": null, "merged_at": null }
true
https://api.github.com/repos/huggingface/datasets/issues/216
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/216/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/216/comments
https://api.github.com/repos/huggingface/datasets/issues/216/events
https://github.com/huggingface/datasets/issues/216
626,896,890
MDU6SXNzdWU2MjY4OTY4OTA=
216
❓ How to get ROUGE-2 with the ROUGE metric?
{ "login": "astariul", "id": 43774355, "node_id": "MDQ6VXNlcjQzNzc0MzU1", "avatar_url": "https://avatars.githubusercontent.com/u/43774355?v=4", "gravatar_id": "", "url": "https://api.github.com/users/astariul", "html_url": "https://github.com/astariul", "followers_url": "https://api.github.com/users/ast...
[]
closed
false
{ "login": null, "id": null, "node_id": null, "avatar_url": null, "gravatar_id": null, "url": null, "html_url": null, "followers_url": null, "following_url": null, "gists_url": null, "starred_url": null, "subscriptions_url": null, "organizations_url": null, "repos_url": null, "events_url":...
[]
null
[ "ROUGE-1 and ROUGE-L shouldn't return the same thing. This is weird", "For the rouge2 metric you can do\r\n\r\n```python\r\nrouge = nlp.load_metric('rouge')\r\nwith open(\"pred.txt\") as p, open(\"ref.txt\") as g:\r\n for lp, lg in zip(p, g):\r\n rouge.add(lp, lg)\r\nscore = rouge.compute(rouge_types=[\...
1,590,709,652,000
1,590,969,875,000
1,590,969,875,000
NONE
null
I'm trying to use ROUGE metric, but I don't know how to get the ROUGE-2 metric. --- I compute scores with : ```python import nlp rouge = nlp.load_metric('rouge') with open("pred.txt") as p, open("ref.txt") as g: for lp, lg in zip(p, g): rouge.add([lp], [lg]) score = rouge.compute() ``` ...
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/216/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/216/timeline
null
null
{ "url": null, "html_url": null, "diff_url": null, "patch_url": null, "merged_at": null }
true
https://api.github.com/repos/huggingface/datasets/issues/215
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/215/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/215/comments
https://api.github.com/repos/huggingface/datasets/issues/215/events
https://github.com/huggingface/datasets/issues/215
626,867,879
MDU6SXNzdWU2MjY4Njc4Nzk=
215
NonMatchingSplitsSizesError when loading blog_authorship_corpus
{ "login": "cedricconol", "id": 52105365, "node_id": "MDQ6VXNlcjUyMTA1MzY1", "avatar_url": "https://avatars.githubusercontent.com/u/52105365?v=4", "gravatar_id": "", "url": "https://api.github.com/users/cedricconol", "html_url": "https://github.com/cedricconol", "followers_url": "https://api.github.com/...
[ { "id": 2067388877, "node_id": "MDU6TGFiZWwyMDY3Mzg4ODc3", "url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20bug", "name": "dataset bug", "color": "2edb81", "default": false, "description": "A bug in a dataset script provided in the library" } ]
closed
false
{ "login": null, "id": null, "node_id": null, "avatar_url": null, "gravatar_id": null, "url": null, "html_url": null, "followers_url": null, "following_url": null, "gists_url": null, "starred_url": null, "subscriptions_url": null, "organizations_url": null, "repos_url": null, "events_url":...
[]
null
[ "I just ran it on colab and got this\r\n```\r\n[{'expected': SplitInfo(name='train', num_bytes=610252351, num_examples=532812,\r\ndataset_name='blog_authorship_corpus'), 'recorded': SplitInfo(name='train',\r\nnum_bytes=611607465, num_examples=533285, dataset_name='blog_authorship_corpus')},\r\n{'expected': SplitInf...
1,590,706,519,000
1,644,498,345,000
1,644,498,345,000
NONE
null
Getting this error when i run `nlp.load_dataset('blog_authorship_corpus')`. ``` raise NonMatchingSplitsSizesError(str(bad_splits)) nlp.utils.info_utils.NonMatchingSplitsSizesError: [{'expected': SplitInfo(name='train', num_bytes=610252351, num_examples=532812, dataset_name='blog_authorship_corpus'), 'recorded...
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/215/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/215/timeline
null
null
{ "url": null, "html_url": null, "diff_url": null, "patch_url": null, "merged_at": null }
true
https://api.github.com/repos/huggingface/datasets/issues/214
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/214/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/214/comments
https://api.github.com/repos/huggingface/datasets/issues/214/events
https://github.com/huggingface/datasets/pull/214
626,641,549
MDExOlB1bGxSZXF1ZXN0NDI0NTk1NjIx
214
[arrow_dataset.py] add new filter function
{ "login": "patrickvonplaten", "id": 23423619, "node_id": "MDQ6VXNlcjIzNDIzNjE5", "avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4", "gravatar_id": "", "url": "https://api.github.com/users/patrickvonplaten", "html_url": "https://github.com/patrickvonplaten", "followers_url": "https://...
[]
closed
false
{ "login": null, "id": null, "node_id": null, "avatar_url": null, "gravatar_id": null, "url": null, "html_url": null, "followers_url": null, "following_url": null, "gists_url": null, "starred_url": null, "subscriptions_url": null, "organizations_url": null, "repos_url": null, "events_url":...
[]
null
[ "I agree that a `.filter` method would be VERY useful and appreciated. I'm not a big fan of using `flatten_nested` as it completely breaks down the structure of the example and it may create bugs. Right now I think it may not work for nested structures. Maybe there's a simpler way that we've not figured out yet.", ...
1,590,682,900,000
1,590,752,609,000
1,590,751,940,000
MEMBER
null
The `.map()` function is super useful, but can IMO be a bit tedious when filtering certain examples. I think filtering out examples is also a very common operation people would like to perform on datasets. This PR is a proposal to add a `.filter()` function in the same spirit as the `.map()` function. Here is a ...
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/214/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/214/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/214", "html_url": "https://github.com/huggingface/datasets/pull/214", "diff_url": "https://github.com/huggingface/datasets/pull/214.diff", "patch_url": "https://github.com/huggingface/datasets/pull/214.patch", "merged_at": 1590751940000 }
true
https://api.github.com/repos/huggingface/datasets/issues/213
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/213/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/213/comments
https://api.github.com/repos/huggingface/datasets/issues/213/events
https://github.com/huggingface/datasets/pull/213
626,587,995
MDExOlB1bGxSZXF1ZXN0NDI0NTUxODE3
213
better message if missing beam options
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoest...
[]
closed
false
{ "login": null, "id": null, "node_id": null, "avatar_url": null, "gravatar_id": null, "url": null, "html_url": null, "followers_url": null, "following_url": null, "gists_url": null, "starred_url": null, "subscriptions_url": null, "organizations_url": null, "repos_url": null, "events_url":...
[]
null
[]
1,590,678,417,000
1,590,745,877,000
1,590,745,876,000
MEMBER
null
WDYT @yjernite ? For example: ```python dataset = nlp.load_dataset('wikipedia', '20200501.aa') ``` Raises: ``` MissingBeamOptions: Trying to generate a dataset using Apache Beam, yet no Beam Runner or PipelineOptions() has been provided in `load_dataset` or in the builder arguments. For big datasets it has to ru...
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/213/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/213/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/213", "html_url": "https://github.com/huggingface/datasets/pull/213", "diff_url": "https://github.com/huggingface/datasets/pull/213.diff", "patch_url": "https://github.com/huggingface/datasets/pull/213.patch", "merged_at": 1590745876000 }
true
https://api.github.com/repos/huggingface/datasets/issues/212
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/212/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/212/comments
https://api.github.com/repos/huggingface/datasets/issues/212/events
https://github.com/huggingface/datasets/pull/212
626,580,198
MDExOlB1bGxSZXF1ZXN0NDI0NTQ1NjAy
212
have 'add' and 'add_batch' for metrics
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoest...
[]
closed
false
{ "login": null, "id": null, "node_id": null, "avatar_url": null, "gravatar_id": null, "url": null, "html_url": null, "followers_url": null, "following_url": null, "gists_url": null, "starred_url": null, "subscriptions_url": null, "organizations_url": null, "repos_url": null, "events_url":...
[]
null
[]
1,590,677,807,000
1,590,748,865,000
1,590,748,864,000
MEMBER
null
This should fix #116 Previously the `.add` method of metrics expected a batch of examples. Now `.add` expects one prediction/reference and `.add_batch` expects a batch. I think it is more coherent with the way the ArrowWriter works.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/212/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/212/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/212", "html_url": "https://github.com/huggingface/datasets/pull/212", "diff_url": "https://github.com/huggingface/datasets/pull/212.diff", "patch_url": "https://github.com/huggingface/datasets/pull/212.patch", "merged_at": 1590748864000 }
true
https://api.github.com/repos/huggingface/datasets/issues/211
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/211/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/211/comments
https://api.github.com/repos/huggingface/datasets/issues/211/events
https://github.com/huggingface/datasets/issues/211
626,565,994
MDU6SXNzdWU2MjY1NjU5OTQ=
211
[Arrow writer, Trivia_qa] Could not convert TagMe with type str: converting to null type
{ "login": "patrickvonplaten", "id": 23423619, "node_id": "MDQ6VXNlcjIzNDIzNjE5", "avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4", "gravatar_id": "", "url": "https://api.github.com/users/patrickvonplaten", "html_url": "https://github.com/patrickvonplaten", "followers_url": "https://...
[ { "id": 1935892871, "node_id": "MDU6TGFiZWwxOTM1ODkyODcx", "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement", "name": "enhancement", "color": "a2eeef", "default": true, "description": "New feature or request" } ]
closed
false
{ "login": "thomwolf", "id": 7353373, "node_id": "MDQ6VXNlcjczNTMzNzM=", "avatar_url": "https://avatars.githubusercontent.com/u/7353373?v=4", "gravatar_id": "", "url": "https://api.github.com/users/thomwolf", "html_url": "https://github.com/thomwolf", "followers_url": "https://api.github.com/users/thomw...
[ { "login": "thomwolf", "id": 7353373, "node_id": "MDQ6VXNlcjczNTMzNzM=", "avatar_url": "https://avatars.githubusercontent.com/u/7353373?v=4", "gravatar_id": "", "url": "https://api.github.com/users/thomwolf", "html_url": "https://github.com/thomwolf", "followers_url": "https://api.gi...
null
[ "Here the full error trace:\r\n\r\n```\r\nArrowInvalid Traceback (most recent call last)\r\n<ipython-input-1-7aaf3f011358> in <module>\r\n 1 import nlp\r\n 2 ds = nlp.load_dataset(\"trivia_qa\", \"rc\", split=\"validation[:1%]\") # this might take 2.3 min to download but it's...
1,590,676,694,000
1,595,499,316,000
1,595,499,316,000
MEMBER
null
Running the following code ``` import nlp ds = nlp.load_dataset("trivia_qa", "rc", split="validation[:1%]") # this might take 2.3 min to download but it's cached afterwards... ds.map(lambda x: x, load_from_cache_file=False) ``` triggers a `ArrowInvalid: Could not convert TagMe with type str: converting to n...
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/211/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/211/timeline
null
null
{ "url": null, "html_url": null, "diff_url": null, "patch_url": null, "merged_at": null }
true
https://api.github.com/repos/huggingface/datasets/issues/210
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/210/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/210/comments
https://api.github.com/repos/huggingface/datasets/issues/210/events
https://github.com/huggingface/datasets/pull/210
626,504,243
MDExOlB1bGxSZXF1ZXN0NDI0NDgyNDgz
210
fix xnli metric kwargs description
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoest...
[]
closed
false
{ "login": null, "id": null, "node_id": null, "avatar_url": null, "gravatar_id": null, "url": null, "html_url": null, "followers_url": null, "following_url": null, "gists_url": null, "starred_url": null, "subscriptions_url": null, "organizations_url": null, "repos_url": null, "events_url":...
[]
null
[]
1,590,672,104,000
1,590,672,131,000
1,590,672,130,000
MEMBER
null
The text was wrong as noticed in #202
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/210/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/210/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/210", "html_url": "https://github.com/huggingface/datasets/pull/210", "diff_url": "https://github.com/huggingface/datasets/pull/210.diff", "patch_url": "https://github.com/huggingface/datasets/pull/210.patch", "merged_at": 1590672130000 }
true
https://api.github.com/repos/huggingface/datasets/issues/209
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/209/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/209/comments
https://api.github.com/repos/huggingface/datasets/issues/209/events
https://github.com/huggingface/datasets/pull/209
626,405,849
MDExOlB1bGxSZXF1ZXN0NDI0NDAwOTc4
209
Add a Google Drive exception for small files
{ "login": "airKlizz", "id": 25703835, "node_id": "MDQ6VXNlcjI1NzAzODM1", "avatar_url": "https://avatars.githubusercontent.com/u/25703835?v=4", "gravatar_id": "", "url": "https://api.github.com/users/airKlizz", "html_url": "https://github.com/airKlizz", "followers_url": "https://api.github.com/users/air...
[]
closed
false
{ "login": null, "id": null, "node_id": null, "avatar_url": null, "gravatar_id": null, "url": null, "html_url": null, "followers_url": null, "following_url": null, "gists_url": null, "starred_url": null, "subscriptions_url": null, "organizations_url": null, "repos_url": null, "events_url":...
[]
null
[ "Can you run the style formatting tools to pass the code quality test?\r\n\r\nYou can find all the details in CONTRIBUTING.md: https://github.com/huggingface/nlp/blob/master/CONTRIBUTING.md#how-to-contribute-to-nlp", "Nice ! ", "``make style`` done! Thanks for the approvals." ]
1,590,662,417,000
1,590,678,904,000
1,590,678,904,000
CONTRIBUTOR
null
I tried to use the ``nlp`` library to load personal datasets. I mainly copy-pasted the code for the ``multi-news`` dataset because my files are stored on Google Drive. One of my datasets is small (< 25 MB), so it can be verified by Drive without asking the user for authorization. This makes the download start directly...
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/209/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/209/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/209", "html_url": "https://github.com/huggingface/datasets/pull/209", "diff_url": "https://github.com/huggingface/datasets/pull/209.diff", "patch_url": "https://github.com/huggingface/datasets/pull/209.patch", "merged_at": 1590678904000 }
true
https://api.github.com/repos/huggingface/datasets/issues/208
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/208/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/208/comments
https://api.github.com/repos/huggingface/datasets/issues/208/events
https://github.com/huggingface/datasets/pull/208
626,398,519
MDExOlB1bGxSZXF1ZXN0NDI0Mzk0ODIx
208
[Dummy data] insert config name instead of config
{ "login": "patrickvonplaten", "id": 23423619, "node_id": "MDQ6VXNlcjIzNDIzNjE5", "avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4", "gravatar_id": "", "url": "https://api.github.com/users/patrickvonplaten", "html_url": "https://github.com/patrickvonplaten", "followers_url": "https://...
[]
closed
false
{ "login": null, "id": null, "node_id": null, "avatar_url": null, "gravatar_id": null, "url": null, "html_url": null, "followers_url": null, "following_url": null, "gists_url": null, "starred_url": null, "subscriptions_url": null, "organizations_url": null, "repos_url": null, "events_url":...
[]
null
[]
1,590,661,699,000
1,590,670,081,000
1,590,670,080,000
MEMBER
null
Thanks @yjernite for letting me know. In the dummy data command, the config name should be passed to the dataset builder, not the config itself. Also, @lhoestq fixed a small import bug introduced by the beam command, I think.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/208/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/208/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/208", "html_url": "https://github.com/huggingface/datasets/pull/208", "diff_url": "https://github.com/huggingface/datasets/pull/208.diff", "patch_url": "https://github.com/huggingface/datasets/pull/208.patch", "merged_at": 1590670080000 }
true
https://api.github.com/repos/huggingface/datasets/issues/207
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/207/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/207/comments
https://api.github.com/repos/huggingface/datasets/issues/207/events
https://github.com/huggingface/datasets/issues/207
625,932,200
MDU6SXNzdWU2MjU5MzIyMDA=
207
Remove test set from NLP viewer
{ "login": "chrisdonahue", "id": 748399, "node_id": "MDQ6VXNlcjc0ODM5OQ==", "avatar_url": "https://avatars.githubusercontent.com/u/748399?v=4", "gravatar_id": "", "url": "https://api.github.com/users/chrisdonahue", "html_url": "https://github.com/chrisdonahue", "followers_url": "https://api.github.com/u...
[ { "id": 2107841032, "node_id": "MDU6TGFiZWwyMTA3ODQxMDMy", "url": "https://api.github.com/repos/huggingface/datasets/labels/nlp-viewer", "name": "nlp-viewer", "color": "94203D", "default": false, "description": "" } ]
closed
false
{ "login": null, "id": null, "node_id": null, "avatar_url": null, "gravatar_id": null, "url": null, "html_url": null, "followers_url": null, "following_url": null, "gists_url": null, "starred_url": null, "subscriptions_url": null, "organizations_url": null, "repos_url": null, "events_url":...
[]
null
[ "~is the viewer also open source?~\r\n[is a streamlit app!](https://docs.streamlit.io/en/latest/getting_started.html)", "Appears that [two thirds of those polled on Twitter](https://twitter.com/srush_nlp/status/1265734497632477185) are in favor of _some_ mechanism for averting eyeballs from the test data.", "We...
1,590,604,327,000
1,644,499,065,000
1,644,499,065,000
NONE
null
While the new [NLP viewer](https://huggingface.co/nlp/viewer/) is a great tool, I think it would be best to outright remove the option of looking at the test sets. At the very least, a warning should be displayed to users before showing the test set. Newcomers to the field might not be aware of best practices, and smal...
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/207/reactions", "total_count": 3, "+1": 3, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/207/timeline
null
null
{ "url": null, "html_url": null, "diff_url": null, "patch_url": null, "merged_at": null }
true
https://api.github.com/repos/huggingface/datasets/issues/206
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/206/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/206/comments
https://api.github.com/repos/huggingface/datasets/issues/206/events
https://github.com/huggingface/datasets/issues/206
625,842,989
MDU6SXNzdWU2MjU4NDI5ODk=
206
[Question] Combine 2 datasets which have the same columns
{ "login": "airKlizz", "id": 25703835, "node_id": "MDQ6VXNlcjI1NzAzODM1", "avatar_url": "https://avatars.githubusercontent.com/u/25703835?v=4", "gravatar_id": "", "url": "https://api.github.com/users/airKlizz", "html_url": "https://github.com/airKlizz", "followers_url": "https://api.github.com/users/air...
[]
closed
false
{ "login": null, "id": null, "node_id": null, "avatar_url": null, "gravatar_id": null, "url": null, "html_url": null, "followers_url": null, "following_url": null, "gists_url": null, "starred_url": null, "subscriptions_url": null, "organizations_url": null, "repos_url": null, "events_url":...
[]
null
[ "We are thinking about ways to combine datasets for T5 in #217, feel free to share your thoughts about this.", "Ok great! I will look at it. Thanks" ]
1,590,596,752,000
1,591,780,274,000
1,591,780,274,000
CONTRIBUTOR
null
Hi, I am using ``nlp`` to load personal datasets. I created summarization datasets in multiple languages based on wikinews. I have one dataset for English and one for German (French is getting ready as well). I want to keep these datasets independent because they need different pre-processing (add different task-...
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/206/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/206/timeline
null
null
{ "url": null, "html_url": null, "diff_url": null, "patch_url": null, "merged_at": null }
true
https://api.github.com/repos/huggingface/datasets/issues/205
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/205/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/205/comments
https://api.github.com/repos/huggingface/datasets/issues/205/events
https://github.com/huggingface/datasets/pull/205
625,839,335
MDExOlB1bGxSZXF1ZXN0NDIzOTY2ODE1
205
Better arrow dataset iter
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoest...
[]
closed
false
{ "login": null, "id": null, "node_id": null, "avatar_url": null, "gravatar_id": null, "url": null, "html_url": null, "followers_url": null, "following_url": null, "gists_url": null, "starred_url": null, "subscriptions_url": null, "organizations_url": null, "repos_url": null, "events_url":...
[]
null
[]
1,590,596,421,000
1,590,597,598,000
1,590,597,596,000
MEMBER
null
I tried to play around with `tf.data.Dataset.from_generator` and I found out that the `__iter__` that we have for `nlp.arrow_dataset.Dataset` ignores the format that has been set (torch or tensorflow). With these changes I should be able to come up with a `tf.data.Dataset` that uses lazy loading, as asked in #193.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/205/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/205/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/205", "html_url": "https://github.com/huggingface/datasets/pull/205", "diff_url": "https://github.com/huggingface/datasets/pull/205.diff", "patch_url": "https://github.com/huggingface/datasets/pull/205.patch", "merged_at": 1590597596000 }
true
https://api.github.com/repos/huggingface/datasets/issues/204
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/204/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/204/comments
https://api.github.com/repos/huggingface/datasets/issues/204/events
https://github.com/huggingface/datasets/pull/204
625,655,849
MDExOlB1bGxSZXF1ZXN0NDIzODE5MTQw
204
Add Dataflow support + Wikipedia + Wiki40b
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoest...
[]
closed
false
{ "login": null, "id": null, "node_id": null, "avatar_url": null, "gravatar_id": null, "url": null, "html_url": null, "followers_url": null, "following_url": null, "gists_url": null, "starred_url": null, "subscriptions_url": null, "organizations_url": null, "repos_url": null, "events_url":...
[]
null
[]
1,590,582,769,000
1,590,653,435,000
1,590,653,434,000
MEMBER
null
# Add Dataflow support + Wikipedia + Wiki40b ## Support datasets processing with Apache Beam Some datasets are too big to be processed on a single machine, for example: wikipedia, wiki40b, etc. Apache Beam allows processing datasets on many execution engines like Dataflow, Spark, Flink, etc. To process such da...
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/204/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/204/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/204", "html_url": "https://github.com/huggingface/datasets/pull/204", "diff_url": "https://github.com/huggingface/datasets/pull/204.diff", "patch_url": "https://github.com/huggingface/datasets/pull/204.patch", "merged_at": 1590653434000 }
true
https://api.github.com/repos/huggingface/datasets/issues/203
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/203/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/203/comments
https://api.github.com/repos/huggingface/datasets/issues/203/events
https://github.com/huggingface/datasets/pull/203
625,515,488
MDExOlB1bGxSZXF1ZXN0NDIzNzEyMTQ3
203
Raise an error if no config name for datasets like glue
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoest...
[]
closed
false
{ "login": null, "id": null, "node_id": null, "avatar_url": null, "gravatar_id": null, "url": null, "html_url": null, "followers_url": null, "following_url": null, "gists_url": null, "starred_url": null, "subscriptions_url": null, "organizations_url": null, "repos_url": null, "events_url":...
[]
null
[]
1,590,570,238,000
1,590,597,639,000
1,590,597,638,000
MEMBER
null
Some datasets like glue (see #130) and scientific_papers (see #197) have many configs. For example for glue there are cola, sst2, mrpc etc. Currently if a user does `load_dataset('glue')`, then Cola is loaded by default and it can be confusing. Instead, we should raise an error to let the user know that he has to p...
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/203/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/203/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/203", "html_url": "https://github.com/huggingface/datasets/pull/203", "diff_url": "https://github.com/huggingface/datasets/pull/203.diff", "patch_url": "https://github.com/huggingface/datasets/pull/203.patch", "merged_at": 1590597638000 }
true
https://api.github.com/repos/huggingface/datasets/issues/202
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/202/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/202/comments
https://api.github.com/repos/huggingface/datasets/issues/202/events
https://github.com/huggingface/datasets/issues/202
625,493,983
MDU6SXNzdWU2MjU0OTM5ODM=
202
Mistaken `_KWARGS_DESCRIPTION` for XNLI metric
{ "login": "phiyodr", "id": 33572125, "node_id": "MDQ6VXNlcjMzNTcyMTI1", "avatar_url": "https://avatars.githubusercontent.com/u/33572125?v=4", "gravatar_id": "", "url": "https://api.github.com/users/phiyodr", "html_url": "https://github.com/phiyodr", "followers_url": "https://api.github.com/users/phiyod...
[]
closed
false
{ "login": null, "id": null, "node_id": null, "avatar_url": null, "gravatar_id": null, "url": null, "html_url": null, "followers_url": null, "following_url": null, "gists_url": null, "starred_url": null, "subscriptions_url": null, "organizations_url": null, "repos_url": null, "events_url":...
[]
null
[ "Indeed, good catch ! thanks\r\nFixing it right now" ]
1,590,568,482,000
1,590,672,156,000
1,590,672,156,000
NONE
null
Hi! The [`_KWARGS_DESCRIPTION`](https://github.com/huggingface/nlp/blob/7d0fa58641f3f462fb2861dcdd6ce7f0da3f6a56/metrics/xnli/xnli.py#L45) for the XNLI metric uses `Args` and `Returns` text from [BLEU](https://github.com/huggingface/nlp/blob/7d0fa58641f3f462fb2861dcdd6ce7f0da3f6a56/metrics/bleu/bleu.py#L58) metric: ...
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/202/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/202/timeline
null
null
{ "url": null, "html_url": null, "diff_url": null, "patch_url": null, "merged_at": null }
true
https://api.github.com/repos/huggingface/datasets/issues/201
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/201/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/201/comments
https://api.github.com/repos/huggingface/datasets/issues/201/events
https://github.com/huggingface/datasets/pull/201
625,235,430
MDExOlB1bGxSZXF1ZXN0NDIzNDkzNTMw
201
Fix typo in README
{ "login": "LysandreJik", "id": 30755778, "node_id": "MDQ6VXNlcjMwNzU1Nzc4", "avatar_url": "https://avatars.githubusercontent.com/u/30755778?v=4", "gravatar_id": "", "url": "https://api.github.com/users/LysandreJik", "html_url": "https://github.com/LysandreJik", "followers_url": "https://api.github.com/...
[]
closed
false
{ "login": null, "id": null, "node_id": null, "avatar_url": null, "gravatar_id": null, "url": null, "html_url": null, "followers_url": null, "following_url": null, "gists_url": null, "starred_url": null, "subscriptions_url": null, "organizations_url": null, "repos_url": null, "events_url":...
[]
null
[ "Amazing, @LysandreJik!", "Really did my best!" ]
1,590,531,501,000
1,590,536,431,000
1,590,534,056,000
MEMBER
null
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/201/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/201/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/201", "html_url": "https://github.com/huggingface/datasets/pull/201", "diff_url": "https://github.com/huggingface/datasets/pull/201.diff", "patch_url": "https://github.com/huggingface/datasets/pull/201.patch", "merged_at": 1590534056000 }
true
https://api.github.com/repos/huggingface/datasets/issues/200
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/200/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/200/comments
https://api.github.com/repos/huggingface/datasets/issues/200/events
https://github.com/huggingface/datasets/pull/200
625,226,638
MDExOlB1bGxSZXF1ZXN0NDIzNDg2NTM0
200
[ArrowWriter] Set schema at first write example
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoest...
[]
closed
false
{ "login": null, "id": null, "node_id": null, "avatar_url": null, "gravatar_id": null, "url": null, "html_url": null, "followers_url": null, "following_url": null, "gists_url": null, "starred_url": null, "subscriptions_url": null, "organizations_url": null, "repos_url": null, "events_url":...
[]
null
[ "Good point!\r\n\r\nI guess we could add this to `write_batch` as well (before using `self._schema` in the first line of this method)?" ]
1,590,530,388,000
1,590,570,474,000
1,590,570,473,000
MEMBER
null
Right now if the schema was not specified when instantiating `ArrowWriter`, then it could be set with the first `write_table` for example (it calls `self._build_writer()` to do so). I noticed that it was not done if the first example is added via `.write`, so I added it for coherence.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/200/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/200/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/200", "html_url": "https://github.com/huggingface/datasets/pull/200", "diff_url": "https://github.com/huggingface/datasets/pull/200.diff", "patch_url": "https://github.com/huggingface/datasets/pull/200.patch", "merged_at": 1590570473000 }
true
https://api.github.com/repos/huggingface/datasets/issues/199
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/199/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/199/comments
https://api.github.com/repos/huggingface/datasets/issues/199/events
https://github.com/huggingface/datasets/pull/199
625,217,440
MDExOlB1bGxSZXF1ZXN0NDIzNDc4ODIx
199
Fix GermEval 2014 dataset infos
{ "login": "stefan-it", "id": 20651387, "node_id": "MDQ6VXNlcjIwNjUxMzg3", "avatar_url": "https://avatars.githubusercontent.com/u/20651387?v=4", "gravatar_id": "", "url": "https://api.github.com/users/stefan-it", "html_url": "https://github.com/stefan-it", "followers_url": "https://api.github.com/users/...
[]
closed
false
{ "login": null, "id": null, "node_id": null, "avatar_url": null, "gravatar_id": null, "url": null, "html_url": null, "followers_url": null, "following_url": null, "gists_url": null, "starred_url": null, "subscriptions_url": null, "organizations_url": null, "repos_url": null, "events_url":...
[]
null
[ "Hopefully. this also fixes the dataset view on https://huggingface.co/nlp/viewer/ :)", "Oh good catch ! This should fix it indeed" ]
1,590,529,304,000
1,590,529,824,000
1,590,529,824,000
CONTRIBUTOR
null
Hi, this PR just removes the `dataset_info.json` file and adds a newly generated `dataset_infos.json` file.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/199/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/199/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/199", "html_url": "https://github.com/huggingface/datasets/pull/199", "diff_url": "https://github.com/huggingface/datasets/pull/199.diff", "patch_url": "https://github.com/huggingface/datasets/pull/199.patch", "merged_at": 1590529824000 }
true
https://api.github.com/repos/huggingface/datasets/issues/198
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/198/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/198/comments
https://api.github.com/repos/huggingface/datasets/issues/198/events
https://github.com/huggingface/datasets/issues/198
625,200,627
MDU6SXNzdWU2MjUyMDA2Mjc=
198
Index outside of table length
{ "login": "casajarm", "id": 305717, "node_id": "MDQ6VXNlcjMwNTcxNw==", "avatar_url": "https://avatars.githubusercontent.com/u/305717?v=4", "gravatar_id": "", "url": "https://api.github.com/users/casajarm", "html_url": "https://github.com/casajarm", "followers_url": "https://api.github.com/users/casajar...
[]
closed
false
{ "login": null, "id": null, "node_id": null, "avatar_url": null, "gravatar_id": null, "url": null, "html_url": null, "followers_url": null, "following_url": null, "gists_url": null, "starred_url": null, "subscriptions_url": null, "organizations_url": null, "repos_url": null, "events_url":...
[]
null
[ "Sounds like something related to the nlp viewer @srush ", "Fixed. " ]
1,590,527,380,000
1,590,533,029,000
1,590,533,029,000
NONE
null
The offset input box warns of numbers larger than a limit (like 2000) but then the errors start at a smaller value than that limit (like 1955). > ValueError: Index (2000) outside of table length (2000). > Traceback: > File "/home/sasha/.local/lib/python3.7/site-packages/streamlit/ScriptRunner.py", line 322, in _ru...
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/198/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/198/timeline
null
null
{ "url": null, "html_url": null, "diff_url": null, "patch_url": null, "merged_at": null }
true
https://api.github.com/repos/huggingface/datasets/issues/197
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/197/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/197/comments
https://api.github.com/repos/huggingface/datasets/issues/197/events
https://github.com/huggingface/datasets/issues/197
624,966,904
MDU6SXNzdWU2MjQ5NjY5MDQ=
197
Scientific Papers only downloading Pubmed
{ "login": "antmarakis", "id": 17463361, "node_id": "MDQ6VXNlcjE3NDYzMzYx", "avatar_url": "https://avatars.githubusercontent.com/u/17463361?v=4", "gravatar_id": "", "url": "https://api.github.com/users/antmarakis", "html_url": "https://github.com/antmarakis", "followers_url": "https://api.github.com/use...
[]
closed
false
{ "login": null, "id": null, "node_id": null, "avatar_url": null, "gravatar_id": null, "url": null, "html_url": null, "followers_url": null, "following_url": null, "gists_url": null, "starred_url": null, "subscriptions_url": null, "organizations_url": null, "repos_url": null, "events_url":...
[]
null
[ "Hi so there are indeed two configurations in the datasets as you can see [here](https://github.com/huggingface/nlp/blob/master/datasets/scientific_papers/scientific_papers.py#L81-L82).\r\n\r\nYou can load either one with:\r\n```python\r\ndataset = nlp.load_dataset('scientific_papers', 'pubmed')\r\ndataset = nlp.lo...
1,590,506,327,000
1,590,653,968,000
1,590,653,968,000
NONE
null
Hi! I have been playing around with this module, and I am a bit confused about the `scientific_papers` dataset. I thought that it would download two separate datasets, arxiv and pubmed. But when I run the following: ``` dataset = nlp.load_dataset('scientific_papers', data_dir='.', cache_dir='.') Downloading: 10...
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/197/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/197/timeline
null
null
{ "url": null, "html_url": null, "diff_url": null, "patch_url": null, "merged_at": null }
true
https://api.github.com/repos/huggingface/datasets/issues/196
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/196/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/196/comments
https://api.github.com/repos/huggingface/datasets/issues/196/events
https://github.com/huggingface/datasets/pull/196
624,901,266
MDExOlB1bGxSZXF1ZXN0NDIzMjIwMjIw
196
Check invalid config name
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoest...
[]
closed
false
{ "login": null, "id": null, "node_id": null, "avatar_url": null, "gravatar_id": null, "url": null, "html_url": null, "followers_url": null, "following_url": null, "gists_url": null, "starred_url": null, "subscriptions_url": null, "organizations_url": null, "repos_url": null, "events_url":...
[]
null
[ "I think that's not related to the config name but the filenames in the dummy data. Mostly it occurs with files downloaded from drive. In that case the dummy file name is extracted from the google drive link and it corresponds to what comes after `https://drive.google.com/`\r\n\r\n", "> I think that's not related...
1,590,501,171,000
1,590,527,096,000
1,590,527,095,000
MEMBER
null
As said in #194, we should raise an error if the config name has bad characters. Bad characters are those that are not allowed for directory names on windows.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/196/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/196/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/196", "html_url": "https://github.com/huggingface/datasets/pull/196", "diff_url": "https://github.com/huggingface/datasets/pull/196.diff", "patch_url": "https://github.com/huggingface/datasets/pull/196.patch", "merged_at": 1590527095000 }
true
https://api.github.com/repos/huggingface/datasets/issues/195
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/195/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/195/comments
https://api.github.com/repos/huggingface/datasets/issues/195/events
https://github.com/huggingface/datasets/pull/195
624,858,686
MDExOlB1bGxSZXF1ZXN0NDIzMTg1NTAy
195
[Dummy data command] add new case to command
{ "login": "patrickvonplaten", "id": 23423619, "node_id": "MDQ6VXNlcjIzNDIzNjE5", "avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4", "gravatar_id": "", "url": "https://api.github.com/users/patrickvonplaten", "html_url": "https://github.com/patrickvonplaten", "followers_url": "https://...
[]
closed
false
{ "login": null, "id": null, "node_id": null, "avatar_url": null, "gravatar_id": null, "url": null, "html_url": null, "followers_url": null, "following_url": null, "gists_url": null, "starred_url": null, "subscriptions_url": null, "organizations_url": null, "repos_url": null, "events_url":...
[]
null
[ "@lhoestq - tiny change in the dummy data command, should be good to merge." ]
1,590,497,447,000
1,590,503,908,000
1,590,503,907,000
MEMBER
null
Qanta: #194 introduces a case that was not noticed before. This code change makes it easier for community users to create the dummy data.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/195/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/195/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/195", "html_url": "https://github.com/huggingface/datasets/pull/195", "diff_url": "https://github.com/huggingface/datasets/pull/195.diff", "patch_url": "https://github.com/huggingface/datasets/pull/195.patch", "merged_at": 1590503907000 }
true
https://api.github.com/repos/huggingface/datasets/issues/194
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/194/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/194/comments
https://api.github.com/repos/huggingface/datasets/issues/194/events
https://github.com/huggingface/datasets/pull/194
624,854,897
MDExOlB1bGxSZXF1ZXN0NDIzMTgyNDM5
194
Add Dataset: Qanta
{ "login": "patrickvonplaten", "id": 23423619, "node_id": "MDQ6VXNlcjIzNDIzNjE5", "avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4", "gravatar_id": "", "url": "https://api.github.com/users/patrickvonplaten", "html_url": "https://github.com/patrickvonplaten", "followers_url": "https://...
[]
closed
false
{ "login": null, "id": null, "node_id": null, "avatar_url": null, "gravatar_id": null, "url": null, "html_url": null, "followers_url": null, "following_url": null, "gists_url": null, "starred_url": null, "subscriptions_url": null, "organizations_url": null, "repos_url": null, "events_url":...
[]
null
[ "@lhoestq - the config name is rather special here: *E.g.* `mode=first,char_skip=25`. It includes `=` and `,` - will that be a problem for windows folders, you think? \r\n\r\nApart from that good to merge for me.", "It's ok to have `=` and `,`.\r\nWindows doesn't like things like `?`, `:`, `/` etc.\r\n\r\nI'll ad...
1,590,497,075,000
1,590,512,297,000
1,590,498,980,000
MEMBER
null
Fixes dummy data for #169 @EntilZha
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/194/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/194/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/194", "html_url": "https://github.com/huggingface/datasets/pull/194", "diff_url": "https://github.com/huggingface/datasets/pull/194.diff", "patch_url": "https://github.com/huggingface/datasets/pull/194.patch", "merged_at": 1590498980000 }
true
https://api.github.com/repos/huggingface/datasets/issues/193
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/193/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/193/comments
https://api.github.com/repos/huggingface/datasets/issues/193/events
https://github.com/huggingface/datasets/issues/193
624,655,558
MDU6SXNzdWU2MjQ2NTU1NTg=
193
[Tensorflow] Use something else than `from_tensor_slices()`
{ "login": "astariul", "id": 43774355, "node_id": "MDQ6VXNlcjQzNzc0MzU1", "avatar_url": "https://avatars.githubusercontent.com/u/43774355?v=4", "gravatar_id": "", "url": "https://api.github.com/users/astariul", "html_url": "https://github.com/astariul", "followers_url": "https://api.github.com/users/ast...
[]
closed
false
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoest...
[ { "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.git...
null
[ "I guess we can use `tf.data.Dataset.from_generator` instead. I'll give it a try.", "Is `tf.data.Dataset.from_generator` working on TPU ?", "`from_generator` is not working on TPU, I met the following error :\r\n\r\n```\r\nFile \"/usr/local/lib/python3.6/contextlib.py\", line 88, in __exit__\r\n next(self.ge...
1,590,477,554,000
1,603,812,491,000
1,603,812,491,000
NONE
null
In the example notebook, the TF Dataset is built using `from_tensor_slices()` : ```python columns = ['input_ids', 'token_type_ids', 'attention_mask', 'start_positions', 'end_positions'] train_tf_dataset.set_format(type='tensorflow', columns=columns) features = {x: train_tf_dataset[x] for x in columns[:3]} label...
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/193/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/193/timeline
null
null
{ "url": null, "html_url": null, "diff_url": null, "patch_url": null, "merged_at": null }
true
https://api.github.com/repos/huggingface/datasets/issues/192
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/192/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/192/comments
https://api.github.com/repos/huggingface/datasets/issues/192/events
https://github.com/huggingface/datasets/issues/192
624,397,592
MDU6SXNzdWU2MjQzOTc1OTI=
192
[Question] Create Apache Arrow dataset from raw text file
{ "login": "mrm8488", "id": 3653789, "node_id": "MDQ6VXNlcjM2NTM3ODk=", "avatar_url": "https://avatars.githubusercontent.com/u/3653789?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mrm8488", "html_url": "https://github.com/mrm8488", "followers_url": "https://api.github.com/users/mrm8488/...
[]
closed
false
{ "login": null, "id": null, "node_id": null, "avatar_url": null, "gravatar_id": null, "url": null, "html_url": null, "followers_url": null, "following_url": null, "gists_url": null, "starred_url": null, "subscriptions_url": null, "organizations_url": null, "repos_url": null, "events_url":...
[]
null
[ "We store every dataset in the Arrow format. This is convenient as it supports nested types and memory mapping. If you are curious feel free to check the [pyarrow documentation](https://arrow.apache.org/docs/python/)\r\n\r\nYou can use this library to load your covid papers by creating a dataset script. You can fin...
1,590,424,967,000
1,639,791,934,000
1,603,812,022,000
NONE
null
Hi guys, I have gathered and preprocessed about 2GB of COVID papers from CORD dataset @ Kggle. I have seen you have a text dataset as "Crime and punishment" in Apache arrow format. Do you have any script to do it from a raw txt file (preprocessed as for BERT like) or any guide? Is the worth of send it to you and add i...
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/192/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/192/timeline
null
null
{ "url": null, "html_url": null, "diff_url": null, "patch_url": null, "merged_at": null }
true
https://api.github.com/repos/huggingface/datasets/issues/191
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/191/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/191/comments
https://api.github.com/repos/huggingface/datasets/issues/191/events
https://github.com/huggingface/datasets/pull/191
624,394,936
MDExOlB1bGxSZXF1ZXN0NDIyODI3MDMy
191
[Squad es] add dataset_infos
{ "login": "patrickvonplaten", "id": 23423619, "node_id": "MDQ6VXNlcjIzNDIzNjE5", "avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4", "gravatar_id": "", "url": "https://api.github.com/users/patrickvonplaten", "html_url": "https://github.com/patrickvonplaten", "followers_url": "https://...
[]
closed
false
{ "login": null, "id": null, "node_id": null, "avatar_url": null, "gravatar_id": null, "url": null, "html_url": null, "followers_url": null, "following_url": null, "gists_url": null, "starred_url": null, "subscriptions_url": null, "organizations_url": null, "repos_url": null, "events_url":...
[]
null
[]
1,590,424,552,000
1,590,424,799,000
1,590,424,798,000
MEMBER
null
@mariamabarham - was still about to upload this. Should have waited with my comment a bit more :D
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/191/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/191/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/191", "html_url": "https://github.com/huggingface/datasets/pull/191", "diff_url": "https://github.com/huggingface/datasets/pull/191.diff", "patch_url": "https://github.com/huggingface/datasets/pull/191.patch", "merged_at": 1590424798000 }
true
https://api.github.com/repos/huggingface/datasets/issues/190
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/190/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/190/comments
https://api.github.com/repos/huggingface/datasets/issues/190/events
https://github.com/huggingface/datasets/pull/190
624,124,600
MDExOlB1bGxSZXF1ZXN0NDIyNjA4NzAw
190
add squad Spanish v1 and v2
{ "login": "mariamabarham", "id": 38249783, "node_id": "MDQ6VXNlcjM4MjQ5Nzgz", "avatar_url": "https://avatars.githubusercontent.com/u/38249783?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mariamabarham", "html_url": "https://github.com/mariamabarham", "followers_url": "https://api.githu...
[]
closed
false
{ "login": null, "id": null, "node_id": null, "avatar_url": null, "gravatar_id": null, "url": null, "html_url": null, "followers_url": null, "following_url": null, "gists_url": null, "starred_url": null, "subscriptions_url": null, "organizations_url": null, "repos_url": null, "events_url":...
[]
null
[ "Nice ! :) \r\nCan we group them into one dataset with two versions, instead of having two datasets ?", "Yes sure, I can use the version as config name", "@lhoestq can you check? I grouped them", "Awesome :) feel free to merge after fixing the test in the CI", "@mariamabarham - feel free to merge when you'r...
1,590,394,120,000
1,590,424,126,000
1,590,424,125,000
CONTRIBUTOR
null
This PR add the Spanish Squad versions 1 and 2 datasets. Fixes #164
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/190/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/190/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/190", "html_url": "https://github.com/huggingface/datasets/pull/190", "diff_url": "https://github.com/huggingface/datasets/pull/190.diff", "patch_url": "https://github.com/huggingface/datasets/pull/190.patch", "merged_at": 1590424125000 }
true
https://api.github.com/repos/huggingface/datasets/issues/189
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/189/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/189/comments
https://api.github.com/repos/huggingface/datasets/issues/189/events
https://github.com/huggingface/datasets/issues/189
624,048,881
MDU6SXNzdWU2MjQwNDg4ODE=
189
[Question] BERT-style multiple choice formatting
{ "login": "sarahwie", "id": 8027676, "node_id": "MDQ6VXNlcjgwMjc2NzY=", "avatar_url": "https://avatars.githubusercontent.com/u/8027676?v=4", "gravatar_id": "", "url": "https://api.github.com/users/sarahwie", "html_url": "https://github.com/sarahwie", "followers_url": "https://api.github.com/users/sarah...
[]
closed
false
{ "login": null, "id": null, "node_id": null, "avatar_url": null, "gravatar_id": null, "url": null, "html_url": null, "followers_url": null, "following_url": null, "gists_url": null, "starred_url": null, "subscriptions_url": null, "organizations_url": null, "repos_url": null, "events_url":...
[]
null
[ "Hi @sarahwie, can you details this a little more?\r\n\r\nI'm not sure I understand what you refer to and what you mean when you say \"Previously, this was done by passing a list of InputFeatures to the dataloader instead of a list of InputFeature\"", "I think I've resolved it. For others' reference: to convert f...
1,590,383,465,000
1,590,431,908,000
1,590,431,908,000
NONE
null
Hello, I am wondering what the equivalent formatting of a dataset should be to allow for multiple-choice answering prediction, BERT-style. Previously, this was done by passing a list of `InputFeatures` to the dataloader instead of a list of `InputFeature`, where `InputFeatures` contained lists of length equal to the nu...
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/189/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/189/timeline
null
null
{ "url": null, "html_url": null, "diff_url": null, "patch_url": null, "merged_at": null }
true
https://api.github.com/repos/huggingface/datasets/issues/188
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/188/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/188/comments
https://api.github.com/repos/huggingface/datasets/issues/188/events
https://github.com/huggingface/datasets/issues/188
623,890,430
MDU6SXNzdWU2MjM4OTA0MzA=
188
When will the remaining math_dataset modules be added as dataset objects
{ "login": "tylerroost", "id": 31251196, "node_id": "MDQ6VXNlcjMxMjUxMTk2", "avatar_url": "https://avatars.githubusercontent.com/u/31251196?v=4", "gravatar_id": "", "url": "https://api.github.com/users/tylerroost", "html_url": "https://github.com/tylerroost", "followers_url": "https://api.github.com/use...
[]
closed
false
{ "login": null, "id": null, "node_id": null, "avatar_url": null, "gravatar_id": null, "url": null, "html_url": null, "followers_url": null, "following_url": null, "gists_url": null, "starred_url": null, "subscriptions_url": null, "organizations_url": null, "repos_url": null, "events_url":...
[]
null
[ "On a similar note it would be nice to differentiate between train-easy, train-medium, and train-hard", "Hi @tylerroost, we don't have a timeline for this at the moment.\r\nIf you want to give it a look we would be happy to review a PR on it.\r\nAlso, the library is one week old so everything is quite barebones, ...
1,590,335,212,000
1,590,346,428,000
1,590,346,428,000
NONE
null
Currently only the algebra_linear_1d is supported. Is there a timeline for making the other modules supported. If no timeline is established, how can I help?
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/188/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/188/timeline
null
null
{ "url": null, "html_url": null, "diff_url": null, "patch_url": null, "merged_at": null }
true
https://api.github.com/repos/huggingface/datasets/issues/187
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/187/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/187/comments
https://api.github.com/repos/huggingface/datasets/issues/187/events
https://github.com/huggingface/datasets/issues/187
623,627,800
MDU6SXNzdWU2MjM2Mjc4MDA=
187
[Question] How to load wikipedia ? Beam runner ?
{ "login": "richarddwang", "id": 17963619, "node_id": "MDQ6VXNlcjE3OTYzNjE5", "avatar_url": "https://avatars.githubusercontent.com/u/17963619?v=4", "gravatar_id": "", "url": "https://api.github.com/users/richarddwang", "html_url": "https://github.com/richarddwang", "followers_url": "https://api.github.c...
[]
closed
false
{ "login": null, "id": null, "node_id": null, "avatar_url": null, "gravatar_id": null, "url": null, "html_url": null, "followers_url": null, "following_url": null, "gists_url": null, "starred_url": null, "subscriptions_url": null, "organizations_url": null, "repos_url": null, "events_url":...
[]
null
[ "I have seen that somebody is hard working on easierly loadable wikipedia. #129 \r\nMaybe I should wait a few days for that version ?", "Yes we (well @lhoestq) are very actively working on this." ]
1,590,229,132,000
1,590,365,522,000
1,590,365,522,000
CONTRIBUTOR
null
When `nlp.load_dataset('wikipedia')`, I got * `WARNING:nlp.builder:Trying to generate a dataset using Apache Beam, yet no Beam Runner or PipelineOptions() has been provided. Please pass a nlp.DownloadConfig(beam_runner=...) object to the builder.download_and_prepare(download_config=...) method. Default values will be ...
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/187/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/187/timeline
null
null
{ "url": null, "html_url": null, "diff_url": null, "patch_url": null, "merged_at": null }
true
https://api.github.com/repos/huggingface/datasets/issues/186
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/186/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/186/comments
https://api.github.com/repos/huggingface/datasets/issues/186/events
https://github.com/huggingface/datasets/issues/186
623,595,180
MDU6SXNzdWU2MjM1OTUxODA=
186
Weird-ish: Not creating unique caches for different phases
{ "login": "zphang", "id": 1668462, "node_id": "MDQ6VXNlcjE2Njg0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/1668462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/zphang", "html_url": "https://github.com/zphang", "followers_url": "https://api.github.com/users/zphang/foll...
[]
closed
false
{ "login": null, "id": null, "node_id": null, "avatar_url": null, "gravatar_id": null, "url": null, "html_url": null, "followers_url": null, "following_url": null, "gists_url": null, "starred_url": null, "subscriptions_url": null, "organizations_url": null, "repos_url": null, "events_url":...
[]
null
[ "Looks like a duplicate of #120.\r\nThis is already fixed on master. We'll do a new release on pypi soon", "Good catch, it looks fixed.\r\n" ]
1,590,216,058,000
1,590,265,338,000
1,590,265,337,000
NONE
null
Sample code: ```python import nlp dataset = nlp.load_dataset('boolq') def func1(x): return x def func2(x): return None train_output = dataset["train"].map(func1) valid_output = dataset["validation"].map(func1) print() print(len(train_output), len(valid_output)) # Output: 9427 9427 ``` Th...
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/186/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/186/timeline
null
null
{ "url": null, "html_url": null, "diff_url": null, "patch_url": null, "merged_at": null }
true
https://api.github.com/repos/huggingface/datasets/issues/185
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/185/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/185/comments
https://api.github.com/repos/huggingface/datasets/issues/185/events
https://github.com/huggingface/datasets/pull/185
623,172,484
MDExOlB1bGxSZXF1ZXN0NDIxODkxNjY2
185
[Commands] In-detail instructions to create dummy data folder
{ "login": "patrickvonplaten", "id": 23423619, "node_id": "MDQ6VXNlcjIzNDIzNjE5", "avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4", "gravatar_id": "", "url": "https://api.github.com/users/patrickvonplaten", "html_url": "https://github.com/patrickvonplaten", "followers_url": "https://...
[]
closed
false
{ "login": null, "id": null, "node_id": null, "avatar_url": null, "gravatar_id": null, "url": null, "html_url": null, "followers_url": null, "following_url": null, "gists_url": null, "starred_url": null, "subscriptions_url": null, "organizations_url": null, "repos_url": null, "events_url":...
[]
null
[ "awesome !" ]
1,590,150,385,000
1,590,156,395,000
1,590,156,394,000
MEMBER
null
### Dummy data command This PR adds a new command `python nlp-cli dummy_data <path_to_dataset_folder>` that gives in-detail instructions on how to add the dummy data files. It would be great if you can try it out by moving the current dummy_data folder of any dataset in `./datasets` with `mv datasets/<dataset_s...
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/185/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/185/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/185", "html_url": "https://github.com/huggingface/datasets/pull/185", "diff_url": "https://github.com/huggingface/datasets/pull/185.diff", "patch_url": "https://github.com/huggingface/datasets/pull/185.patch", "merged_at": 1590156394000 }
true
https://api.github.com/repos/huggingface/datasets/issues/184
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/184/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/184/comments
https://api.github.com/repos/huggingface/datasets/issues/184/events
https://github.com/huggingface/datasets/pull/184
623,120,929
MDExOlB1bGxSZXF1ZXN0NDIxODQ5MTQ3
184
Use IndexError instead of ValueError when index out of range
{ "login": "richarddwang", "id": 17963619, "node_id": "MDQ6VXNlcjE3OTYzNjE5", "avatar_url": "https://avatars.githubusercontent.com/u/17963619?v=4", "gravatar_id": "", "url": "https://api.github.com/users/richarddwang", "html_url": "https://github.com/richarddwang", "followers_url": "https://api.github.c...
[]
closed
false
{ "login": null, "id": null, "node_id": null, "avatar_url": null, "gravatar_id": null, "url": null, "html_url": null, "followers_url": null, "following_url": null, "gists_url": null, "starred_url": null, "subscriptions_url": null, "organizations_url": null, "repos_url": null, "events_url":...
[]
null
[]
1,590,144,222,000
1,590,654,678,000
1,590,654,678,000
CONTRIBUTOR
null
**`default __iter__ needs IndexError`**. When I want to create a wrapper of arrow dataset to adapt to fastai, I don't know how to initialize it, so I didn't use inheritance but use object composition. I wrote sth like this. ``` class HF_dataset(): def __init__(self, arrow_dataset): self.dset = arrow_datas...
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/184/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/184/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/184", "html_url": "https://github.com/huggingface/datasets/pull/184", "diff_url": "https://github.com/huggingface/datasets/pull/184.diff", "patch_url": "https://github.com/huggingface/datasets/pull/184.patch", "merged_at": 1590654678000 }
true
https://api.github.com/repos/huggingface/datasets/issues/183
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/183/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/183/comments
https://api.github.com/repos/huggingface/datasets/issues/183/events
https://github.com/huggingface/datasets/issues/183
623,054,270
MDU6SXNzdWU2MjMwNTQyNzA=
183
[Bug] labels of glue/ax are all -1
{ "login": "richarddwang", "id": 17963619, "node_id": "MDQ6VXNlcjE3OTYzNjE5", "avatar_url": "https://avatars.githubusercontent.com/u/17963619?v=4", "gravatar_id": "", "url": "https://api.github.com/users/richarddwang", "html_url": "https://github.com/richarddwang", "followers_url": "https://api.github.c...
[]
closed
false
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoest...
[ { "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.git...
null
[ "This is the test set given by the Glue benchmark. The labels are not provided, and therefore set to -1.", "Ah, yeah. Why it didn’t occur to me. 😂\nThank you for your comment." ]
1,590,137,016,000
1,590,185,645,000
1,590,185,645,000
CONTRIBUTOR
null
``` ax = nlp.load_dataset('glue', 'ax') for i in range(30): print(ax['test'][i]['label'], end=', ') ``` ``` -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, ```
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/183/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/183/timeline
null
null
{ "url": null, "html_url": null, "diff_url": null, "patch_url": null, "merged_at": null }
true
https://api.github.com/repos/huggingface/datasets/issues/182
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/182/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/182/comments
https://api.github.com/repos/huggingface/datasets/issues/182/events
https://github.com/huggingface/datasets/pull/182
622,646,770
MDExOlB1bGxSZXF1ZXN0NDIxNDcxMjg4
182
Update newsroom.py
{ "login": "yoavartzi", "id": 3289873, "node_id": "MDQ6VXNlcjMyODk4NzM=", "avatar_url": "https://avatars.githubusercontent.com/u/3289873?v=4", "gravatar_id": "", "url": "https://api.github.com/users/yoavartzi", "html_url": "https://github.com/yoavartzi", "followers_url": "https://api.github.com/users/yo...
[]
closed
false
{ "login": "patrickvonplaten", "id": 23423619, "node_id": "MDQ6VXNlcjIzNDIzNjE5", "avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4", "gravatar_id": "", "url": "https://api.github.com/users/patrickvonplaten", "html_url": "https://github.com/patrickvonplaten", "followers_url": "https://...
[ { "login": "patrickvonplaten", "id": 23423619, "node_id": "MDQ6VXNlcjIzNDIzNjE5", "avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4", "gravatar_id": "", "url": "https://api.github.com/users/patrickvonplaten", "html_url": "https://github.com/patrickvonplaten", "follo...
null
[]
1,590,080,863,000
1,590,165,503,000
1,590,165,503,000
CONTRIBUTOR
null
Updated the URL for Newsroom download so it's more robust to future changes.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/182/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/182/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/182", "html_url": "https://github.com/huggingface/datasets/pull/182", "diff_url": "https://github.com/huggingface/datasets/pull/182.diff", "patch_url": "https://github.com/huggingface/datasets/pull/182.patch", "merged_at": 1590165503000 }
true
https://api.github.com/repos/huggingface/datasets/issues/181
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/181/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/181/comments
https://api.github.com/repos/huggingface/datasets/issues/181/events
https://github.com/huggingface/datasets/issues/181
622,634,420
MDU6SXNzdWU2MjI2MzQ0MjA=
181
Cannot upload my own dataset
{ "login": "korakot", "id": 3155646, "node_id": "MDQ6VXNlcjMxNTU2NDY=", "avatar_url": "https://avatars.githubusercontent.com/u/3155646?v=4", "gravatar_id": "", "url": "https://api.github.com/users/korakot", "html_url": "https://github.com/korakot", "followers_url": "https://api.github.com/users/korakot/...
[]
closed
false
{ "login": null, "id": null, "node_id": null, "avatar_url": null, "gravatar_id": null, "url": null, "html_url": null, "followers_url": null, "following_url": null, "gists_url": null, "starred_url": null, "subscriptions_url": null, "organizations_url": null, "repos_url": null, "events_url":...
[]
null
[ "It's my misunderstanding. I cannot just upload a csv. I need to write a dataset loading script too.", "I now try with the sample `datasets/csv` folder. \r\n\r\n nlp-cli upload csv\r\n\r\nThe error is still the same\r\n\r\n```\r\n2020-05-21 17:20:56.394659: I tensorflow/stream_executor/platform/default/dso_loa...
1,590,079,552,000
1,592,518,482,000
1,592,518,482,000
NONE
null
I look into `nlp-cli` and `user.py` to learn how to upload my own data. It is supposed to work like this - Register to get username, password at huggingface.co - `nlp-cli login` and type username, passworld - I have a single file to upload at `./ttc/ttc_freq_extra.csv` - `nlp-cli upload ttc/ttc_freq_extra.csv` ...
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/181/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/181/timeline
null
null
{ "url": null, "html_url": null, "diff_url": null, "patch_url": null, "merged_at": null }
true
https://api.github.com/repos/huggingface/datasets/issues/180
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/180/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/180/comments
https://api.github.com/repos/huggingface/datasets/issues/180/events
https://github.com/huggingface/datasets/pull/180
622,556,861
MDExOlB1bGxSZXF1ZXN0NDIxMzk5Nzg2
180
Add hall of fame
{ "login": "clmnt", "id": 821155, "node_id": "MDQ6VXNlcjgyMTE1NQ==", "avatar_url": "https://avatars.githubusercontent.com/u/821155?v=4", "gravatar_id": "", "url": "https://api.github.com/users/clmnt", "html_url": "https://github.com/clmnt", "followers_url": "https://api.github.com/users/clmnt/followers"...
[]
closed
false
{ "login": null, "id": null, "node_id": null, "avatar_url": null, "gravatar_id": null, "url": null, "html_url": null, "followers_url": null, "following_url": null, "gists_url": null, "starred_url": null, "subscriptions_url": null, "organizations_url": null, "repos_url": null, "events_url":...
[]
null
[]
1,590,072,828,000
1,590,165,316,000
1,590,165,314,000
MEMBER
null
powered by https://github.com/sourcerer-io/hall-of-fame
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/180/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/180/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/180", "html_url": "https://github.com/huggingface/datasets/pull/180", "diff_url": "https://github.com/huggingface/datasets/pull/180.diff", "patch_url": "https://github.com/huggingface/datasets/pull/180.patch", "merged_at": 1590165314000 }
true
https://api.github.com/repos/huggingface/datasets/issues/179
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/179/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/179/comments
https://api.github.com/repos/huggingface/datasets/issues/179/events
https://github.com/huggingface/datasets/issues/179
622,525,410
MDU6SXNzdWU2MjI1MjU0MTA=
179
[Feature request] separate split name and split instructions
{ "login": "yjernite", "id": 10469459, "node_id": "MDQ6VXNlcjEwNDY5NDU5", "avatar_url": "https://avatars.githubusercontent.com/u/10469459?v=4", "gravatar_id": "", "url": "https://api.github.com/users/yjernite", "html_url": "https://github.com/yjernite", "followers_url": "https://api.github.com/users/yje...
[]
closed
false
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoest...
[ { "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.git...
null
[ "If your dataset is a collection of sub-datasets, you should probably consider having one config per sub-dataset. For example for Glue, we have sst2, mnli etc.\r\nIf you want to have multiple train sets (for example one per stage). The easiest solution would be to name them `nlp.Split(\"train_stage1\")`, `nlp.Split...
1,590,070,251,000
1,590,154,268,000
1,590,154,267,000
MEMBER
null
Currently, the name of an nlp.NamedSplit is parsed in arrow_reader.py and used as the instruction. This makes it impossible to have several training sets, which can occur when: - A dataset corresponds to a collection of sub-datasets - A dataset was built in stages, adding new examples at each stage Would it be ...
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/179/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/179/timeline
null
null
{ "url": null, "html_url": null, "diff_url": null, "patch_url": null, "merged_at": null }
true
https://api.github.com/repos/huggingface/datasets/issues/178
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/178/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/178/comments
https://api.github.com/repos/huggingface/datasets/issues/178/events
https://github.com/huggingface/datasets/pull/178
621,979,849
MDExOlB1bGxSZXF1ZXN0NDIwOTMyMDI5
178
[Manual data] improve error message for manual data in general
{ "login": "patrickvonplaten", "id": 23423619, "node_id": "MDQ6VXNlcjIzNDIzNjE5", "avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4", "gravatar_id": "", "url": "https://api.github.com/users/patrickvonplaten", "html_url": "https://github.com/patrickvonplaten", "followers_url": "https://...
[]
closed
false
{ "login": null, "id": null, "node_id": null, "avatar_url": null, "gravatar_id": null, "url": null, "html_url": null, "followers_url": null, "following_url": null, "gists_url": null, "starred_url": null, "subscriptions_url": null, "organizations_url": null, "repos_url": null, "events_url":...
[]
null
[]
1,589,998,245,000
1,589,998,732,000
1,589,998,730,000
MEMBER
null
`nlp.load("xsum")` now leads to the following error message: ![Screenshot from 2020-05-20 20-05-28](https://user-images.githubusercontent.com/23423619/82481825-3587ea00-9ad6-11ea-9ca2-5794252c6ac7.png) I guess the manual download instructions for `xsum` can also be improved.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/178/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/178/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/178", "html_url": "https://github.com/huggingface/datasets/pull/178", "diff_url": "https://github.com/huggingface/datasets/pull/178.diff", "patch_url": "https://github.com/huggingface/datasets/pull/178.patch", "merged_at": 1589998730000 }
true
https://api.github.com/repos/huggingface/datasets/issues/177
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/177/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/177/comments
https://api.github.com/repos/huggingface/datasets/issues/177/events
https://github.com/huggingface/datasets/pull/177
621,975,368
MDExOlB1bGxSZXF1ZXN0NDIwOTI4MzE0
177
Xsum manual download instruction
{ "login": "mariamabarham", "id": 38249783, "node_id": "MDQ6VXNlcjM4MjQ5Nzgz", "avatar_url": "https://avatars.githubusercontent.com/u/38249783?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mariamabarham", "html_url": "https://github.com/mariamabarham", "followers_url": "https://api.githu...
[]
closed
false
{ "login": null, "id": null, "node_id": null, "avatar_url": null, "gravatar_id": null, "url": null, "html_url": null, "followers_url": null, "following_url": null, "gists_url": null, "starred_url": null, "subscriptions_url": null, "organizations_url": null, "repos_url": null, "events_url":...
[]
null
[]
1,589,997,761,000
1,589,998,610,000
1,589,998,609,000
CONTRIBUTOR
null
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/177/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/177/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/177", "html_url": "https://github.com/huggingface/datasets/pull/177", "diff_url": "https://github.com/huggingface/datasets/pull/177.diff", "patch_url": "https://github.com/huggingface/datasets/pull/177.patch", "merged_at": 1589998609000 }
true
https://api.github.com/repos/huggingface/datasets/issues/176
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/176/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/176/comments
https://api.github.com/repos/huggingface/datasets/issues/176/events
https://github.com/huggingface/datasets/pull/176
621,934,638
MDExOlB1bGxSZXF1ZXN0NDIwODkzNDky
176
[Tests] Refactor MockDownloadManager
{ "login": "patrickvonplaten", "id": 23423619, "node_id": "MDQ6VXNlcjIzNDIzNjE5", "avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4", "gravatar_id": "", "url": "https://api.github.com/users/patrickvonplaten", "html_url": "https://github.com/patrickvonplaten", "followers_url": "https://...
[]
closed
false
{ "login": null, "id": null, "node_id": null, "avatar_url": null, "gravatar_id": null, "url": null, "html_url": null, "followers_url": null, "following_url": null, "gists_url": null, "starred_url": null, "subscriptions_url": null, "organizations_url": null, "repos_url": null, "events_url":...
[]
null
[]
1,589,994,456,000
1,589,998,639,000
1,589,998,638,000
MEMBER
null
Clean up the mock download manager class. The print function was not of much help, I think. We should think about adding a command that creates the dummy folder structure for the user.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/176/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/176/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/176", "html_url": "https://github.com/huggingface/datasets/pull/176", "diff_url": "https://github.com/huggingface/datasets/pull/176.diff", "patch_url": "https://github.com/huggingface/datasets/pull/176.patch", "merged_at": 1589998638000 }
true
https://api.github.com/repos/huggingface/datasets/issues/175
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/175/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/175/comments
https://api.github.com/repos/huggingface/datasets/issues/175/events
https://github.com/huggingface/datasets/issues/175
621,929,428
MDU6SXNzdWU2MjE5Mjk0Mjg=
175
[Manual data dir] Error message: nlp.load_dataset('xsum') -> TypeError
{ "login": "sshleifer", "id": 6045025, "node_id": "MDQ6VXNlcjYwNDUwMjU=", "avatar_url": "https://avatars.githubusercontent.com/u/6045025?v=4", "gravatar_id": "", "url": "https://api.github.com/users/sshleifer", "html_url": "https://github.com/sshleifer", "followers_url": "https://api.github.com/users/ss...
[]
closed
false
{ "login": null, "id": null, "node_id": null, "avatar_url": null, "gravatar_id": null, "url": null, "html_url": null, "followers_url": null, "following_url": null, "gists_url": null, "starred_url": null, "subscriptions_url": null, "organizations_url": null, "repos_url": null, "events_url":...
[]
null
[]
1,589,994,032,000
1,589,998,730,000
1,589,998,730,000
CONTRIBUTOR
null
v 0.1.0 from pip ```python import nlp xsum = nlp.load_dataset('xsum') ``` Issue is `dl_manager.manual_dir`is `None` ```python --------------------------------------------------------------------------- TypeError Traceback (most recent call last) <ipython-input-42-8a32f06...
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/175/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/175/timeline
null
null
{ "url": null, "html_url": null, "diff_url": null, "patch_url": null, "merged_at": null }
true
https://api.github.com/repos/huggingface/datasets/issues/174
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/174/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/174/comments
https://api.github.com/repos/huggingface/datasets/issues/174/events
https://github.com/huggingface/datasets/issues/174
621,928,403
MDU6SXNzdWU2MjE5Mjg0MDM=
174
nlp.load_dataset('xsum') -> TypeError
{ "login": "sshleifer", "id": 6045025, "node_id": "MDQ6VXNlcjYwNDUwMjU=", "avatar_url": "https://avatars.githubusercontent.com/u/6045025?v=4", "gravatar_id": "", "url": "https://api.github.com/users/sshleifer", "html_url": "https://github.com/sshleifer", "followers_url": "https://api.github.com/users/ss...
[]
closed
false
{ "login": null, "id": null, "node_id": null, "avatar_url": null, "gravatar_id": null, "url": null, "html_url": null, "followers_url": null, "following_url": null, "gists_url": null, "starred_url": null, "subscriptions_url": null, "organizations_url": null, "repos_url": null, "events_url":...
[]
null
[]
1,589,993,949,000
1,589,996,626,000
1,589,996,626,000
CONTRIBUTOR
null
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/174/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/174/timeline
null
null
{ "url": null, "html_url": null, "diff_url": null, "patch_url": null, "merged_at": null }
true
https://api.github.com/repos/huggingface/datasets/issues/173
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/173/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/173/comments
https://api.github.com/repos/huggingface/datasets/issues/173/events
https://github.com/huggingface/datasets/pull/173
621,764,932
MDExOlB1bGxSZXF1ZXN0NDIwNzUyNzQy
173
Rm extracted test dirs
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoest...
[]
closed
false
{ "login": null, "id": null, "node_id": null, "avatar_url": null, "gravatar_id": null, "url": null, "html_url": null, "followers_url": null, "following_url": null, "gists_url": null, "starred_url": null, "subscriptions_url": null, "organizations_url": null, "repos_url": null, "events_url":...
[]
null
[ "Thanks for cleaning up the extracted dummy data folders! Instead of changing the file_utils we could also just put these folders under `.gitignore` (or maybe already done?).", "Awesome! I guess you might have to add the changes for the MockDLManager now in a different file though because of my last PR - sorry!" ...
1,589,981,448,000
1,590,165,276,000
1,590,165,275,000
MEMBER
null
All the dummy data used for tests were duplicated. For each dataset, we had one zip file but also its extracted directory. I removed all these directories Furthermore instead of extracting next to the dummy_data.zip file, we extract in the temp `cached_dir` used for tests, so that all the extracted directories get r...
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/173/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/173/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/173", "html_url": "https://github.com/huggingface/datasets/pull/173", "diff_url": "https://github.com/huggingface/datasets/pull/173.diff", "patch_url": "https://github.com/huggingface/datasets/pull/173.patch", "merged_at": 1590165275000 }
true
https://api.github.com/repos/huggingface/datasets/issues/172
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/172/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/172/comments
https://api.github.com/repos/huggingface/datasets/issues/172/events
https://github.com/huggingface/datasets/issues/172
621,377,386
MDU6SXNzdWU2MjEzNzczODY=
172
Clone not working on Windows environment
{ "login": "codehunk628", "id": 51091425, "node_id": "MDQ6VXNlcjUxMDkxNDI1", "avatar_url": "https://avatars.githubusercontent.com/u/51091425?v=4", "gravatar_id": "", "url": "https://api.github.com/users/codehunk628", "html_url": "https://github.com/codehunk628", "followers_url": "https://api.github.com/...
[]
closed
false
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoest...
[ { "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.git...
null
[ "Should be fixed on master now :)", "Thanks @lhoestq 👍 Now I can uninstall WSL and get back to work with windows.🙂" ]
1,589,935,514,000
1,590,238,153,000
1,590,233,272,000
CONTRIBUTOR
null
Cloning in a windows environment is not working because of use of special character '?' in folder name .. Please consider changing the folder name .... Reference to folder - nlp/datasets/cnn_dailymail/dummy/3.0.0/3.0.0/dummy_data-zip-extracted/dummy_data/uc?export=download&id=0BwmD_VLjROrfM1BxdkxVaTY2bWs/dailymail/s...
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/172/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/172/timeline
null
null
{ "url": null, "html_url": null, "diff_url": null, "patch_url": null, "merged_at": null }
true
https://api.github.com/repos/huggingface/datasets/issues/171
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/171/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/171/comments
https://api.github.com/repos/huggingface/datasets/issues/171/events
https://github.com/huggingface/datasets/pull/171
621,199,128
MDExOlB1bGxSZXF1ZXN0NDIwMjk0ODM0
171
fix squad metric format
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoest...
[]
closed
false
{ "login": null, "id": null, "node_id": null, "avatar_url": null, "gravatar_id": null, "url": null, "html_url": null, "followers_url": null, "following_url": null, "gists_url": null, "starred_url": null, "subscriptions_url": null, "organizations_url": null, "repos_url": null, "events_url":...
[]
null
[ "One thing for SQuAD is that I wanted to be able to use the SQuAD dataset directly in the metrics and I'm not sure it will be possible with this format.\r\n\r\n(maybe it's not really possible in general though)", "This is kinda related to one thing I had in mind which is that we may want to be able to dump our mo...
1,589,913,456,000
1,590,154,610,000
1,590,154,608,000
MEMBER
null
The format of the squad metric was wrong. This should fix #143 I tested with ```python3 predictions = [ {'id': '56be4db0acb8001400a502ec', 'prediction_text': 'Denver Broncos'} ] references = [ {'answers': [{'text': 'Denver Broncos'}], 'id': '56be4db0acb8001400a502ec'} ] ```
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/171/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/171/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/171", "html_url": "https://github.com/huggingface/datasets/pull/171", "diff_url": "https://github.com/huggingface/datasets/pull/171.diff", "patch_url": "https://github.com/huggingface/datasets/pull/171.patch", "merged_at": 1590154608000 }
true
https://api.github.com/repos/huggingface/datasets/issues/170
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/170/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/170/comments
https://api.github.com/repos/huggingface/datasets/issues/170/events
https://github.com/huggingface/datasets/pull/170
621,119,747
MDExOlB1bGxSZXF1ZXN0NDIwMjMwMDIx
170
Rename anli dataset
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoest...
[]
closed
false
{ "login": null, "id": null, "node_id": null, "avatar_url": null, "gravatar_id": null, "url": null, "html_url": null, "followers_url": null, "following_url": null, "gists_url": null, "starred_url": null, "subscriptions_url": null, "organizations_url": null, "repos_url": null, "events_url":...
[]
null
[]
1,589,905,617,000
1,589,977,389,000
1,589,977,388,000
MEMBER
null
What we have now as the `anli` dataset is actually the αNLI dataset from the ART challenge dataset. This name is confusing because `anli` is also the name of adversarial NLI (see [https://github.com/facebookresearch/anli](https://github.com/facebookresearch/anli)). I renamed the current `anli` dataset to `art`.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/170/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/170/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/170", "html_url": "https://github.com/huggingface/datasets/pull/170", "diff_url": "https://github.com/huggingface/datasets/pull/170.diff", "patch_url": "https://github.com/huggingface/datasets/pull/170.patch", "merged_at": 1589977387000 }
true
https://api.github.com/repos/huggingface/datasets/issues/169
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/169/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/169/comments
https://api.github.com/repos/huggingface/datasets/issues/169/events
https://github.com/huggingface/datasets/pull/169
621,099,682
MDExOlB1bGxSZXF1ZXN0NDIwMjE1NDkw
169
Adding Qanta (Quizbowl) Dataset
{ "login": "EntilZha", "id": 1382460, "node_id": "MDQ6VXNlcjEzODI0NjA=", "avatar_url": "https://avatars.githubusercontent.com/u/1382460?v=4", "gravatar_id": "", "url": "https://api.github.com/users/EntilZha", "html_url": "https://github.com/EntilZha", "followers_url": "https://api.github.com/users/Entil...
[]
closed
false
{ "login": "patrickvonplaten", "id": 23423619, "node_id": "MDQ6VXNlcjIzNDIzNjE5", "avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4", "gravatar_id": "", "url": "https://api.github.com/users/patrickvonplaten", "html_url": "https://github.com/patrickvonplaten", "followers_url": "https://...
[ { "login": "patrickvonplaten", "id": 23423619, "node_id": "MDQ6VXNlcjIzNDIzNjE5", "avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4", "gravatar_id": "", "url": "https://api.github.com/users/patrickvonplaten", "html_url": "https://github.com/patrickvonplaten", "follo...
null
[ "Hi @EntilZha - sorry for waiting so long until taking action here. We created a new command and a new recipe of how to add dummy_data. Can you maybe rebase to `master` as explained in 7. of https://github.com/huggingface/nlp/blob/master/CONTRIBUTING.md#how-to-contribute-to-nlp and check that your dummy data is cor...
1,589,904,181,000
1,590,497,551,000
1,590,497,551,000
CONTRIBUTOR
null
This PR adds the qanta question answering datasets from [Quizbowl: The Case for Incremental Question Answering](https://arxiv.org/abs/1904.04792) and [Trick Me If You Can: Human-in-the-loop Generation of Adversarial Question Answering Examples](https://www.aclweb.org/anthology/Q19-1029/) (adversarial fold) This part...
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/169/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/169/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/169", "html_url": "https://github.com/huggingface/datasets/pull/169", "diff_url": "https://github.com/huggingface/datasets/pull/169.diff", "patch_url": "https://github.com/huggingface/datasets/pull/169.patch", "merged_at": null }
true
https://api.github.com/repos/huggingface/datasets/issues/168
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/168/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/168/comments
https://api.github.com/repos/huggingface/datasets/issues/168/events
https://github.com/huggingface/datasets/issues/168
620,959,819
MDU6SXNzdWU2MjA5NTk4MTk=
168
Loading 'wikitext' dataset fails
{ "login": "itay1itzhak", "id": 25987633, "node_id": "MDQ6VXNlcjI1OTg3NjMz", "avatar_url": "https://avatars.githubusercontent.com/u/25987633?v=4", "gravatar_id": "", "url": "https://api.github.com/users/itay1itzhak", "html_url": "https://github.com/itay1itzhak", "followers_url": "https://api.github.com/...
[]
closed
false
{ "login": null, "id": null, "node_id": null, "avatar_url": null, "gravatar_id": null, "url": null, "html_url": null, "followers_url": null, "following_url": null, "gists_url": null, "starred_url": null, "subscriptions_url": null, "organizations_url": null, "repos_url": null, "events_url":...
[]
null
[ "Hi, make sure you have a recent version of pyarrow.\r\n\r\nAre you using it in Google Colab? In this case, this error is probably the same as #128", "Thanks!\r\n\r\nYes I'm using Google Colab, it seems like a duplicate then.", "Closing as it is a duplicate", "Hi,\r\nThe squad bug seems to be fixed, but the l...
1,589,893,469,000
1,590,529,612,000
1,590,529,612,000
NONE
null
Loading the 'wikitext' dataset fails with Attribute error: Code to reproduce (From example notebook): import nlp wikitext_dataset = nlp.load_dataset('wikitext') Error: --------------------------------------------------------------------------- AttributeError Traceback (most rece...
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/168/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/168/timeline
null
null
{ "url": null, "html_url": null, "diff_url": null, "patch_url": null, "merged_at": null }
true
https://api.github.com/repos/huggingface/datasets/issues/167
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/167/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/167/comments
https://api.github.com/repos/huggingface/datasets/issues/167/events
https://github.com/huggingface/datasets/pull/167
620,908,786
MDExOlB1bGxSZXF1ZXN0NDIwMDY0NDMw
167
[Tests] refactor tests
{ "login": "patrickvonplaten", "id": 23423619, "node_id": "MDQ6VXNlcjIzNDIzNjE5", "avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4", "gravatar_id": "", "url": "https://api.github.com/users/patrickvonplaten", "html_url": "https://github.com/patrickvonplaten", "followers_url": "https://...
[]
closed
false
{ "login": null, "id": null, "node_id": null, "avatar_url": null, "gravatar_id": null, "url": null, "html_url": null, "followers_url": null, "following_url": null, "gists_url": null, "starred_url": null, "subscriptions_url": null, "organizations_url": null, "repos_url": null, "events_url":...
[]
null
[ "Nice !" ]
1,589,888,612,000
1,589,905,032,000
1,589,905,030,000
MEMBER
null
This PR separates AWS and Local tests to remove these ugly statements in the script: ```python if "/" not in dataset_name: logging.info("Skip {} because it is a canonical dataset") return ``` To run a `aws` test, one should now run the following command: ```python pytest -s...
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/167/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/167/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/167", "html_url": "https://github.com/huggingface/datasets/pull/167", "diff_url": "https://github.com/huggingface/datasets/pull/167.diff", "patch_url": "https://github.com/huggingface/datasets/pull/167.patch", "merged_at": 1589905030000 }
true
https://api.github.com/repos/huggingface/datasets/issues/166
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/166/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/166/comments
https://api.github.com/repos/huggingface/datasets/issues/166/events
https://github.com/huggingface/datasets/issues/166
620,850,218
MDU6SXNzdWU2MjA4NTAyMTg=
166
Add a method to shuffle a dataset
{ "login": "thomwolf", "id": 7353373, "node_id": "MDQ6VXNlcjczNTMzNzM=", "avatar_url": "https://avatars.githubusercontent.com/u/7353373?v=4", "gravatar_id": "", "url": "https://api.github.com/users/thomwolf", "html_url": "https://github.com/thomwolf", "followers_url": "https://api.github.com/users/thomw...
[ { "id": 2067400324, "node_id": "MDU6TGFiZWwyMDY3NDAwMzI0", "url": "https://api.github.com/repos/huggingface/datasets/labels/generic%20discussion", "name": "generic discussion", "color": "c5def5", "default": false, "description": "Generic discussion on the library" } ]
closed
false
{ "login": null, "id": null, "node_id": null, "avatar_url": null, "gravatar_id": null, "url": null, "html_url": null, "followers_url": null, "following_url": null, "gists_url": null, "starred_url": null, "subscriptions_url": null, "organizations_url": null, "repos_url": null, "events_url":...
[]
null
[ "+1 for the naming convention\r\n\r\nAbout the `shuffle` method, from my understanding it should be done in `Dataloader` (better separation between dataset processing - usage)", "+1 for shuffle in `Dataloader`. \r\nSome `Dataloader` just store idxs of dataset and just shuffle those idxs, which might(?) be faster ...
1,589,882,926,000
1,592,924,853,000
1,592,924,852,000
MEMBER
null
Could maybe be a `dataset.shuffle(generator=None, seed=None)` signature method. Also, we could maybe have a clear indication of which method modify in-place and which methods return/cache a modified dataset. I kinda like torch conversion of having an underscore suffix for all the methods which modify a dataset in-pl...
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/166/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/166/timeline
null
null
{ "url": null, "html_url": null, "diff_url": null, "patch_url": null, "merged_at": null }
true
https://api.github.com/repos/huggingface/datasets/issues/165
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/165/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/165/comments
https://api.github.com/repos/huggingface/datasets/issues/165/events
https://github.com/huggingface/datasets/issues/165
620,758,221
MDU6SXNzdWU2MjA3NTgyMjE=
165
ANLI
{ "login": "douwekiela", "id": 6024930, "node_id": "MDQ6VXNlcjYwMjQ5MzA=", "avatar_url": "https://avatars.githubusercontent.com/u/6024930?v=4", "gravatar_id": "", "url": "https://api.github.com/users/douwekiela", "html_url": "https://github.com/douwekiela", "followers_url": "https://api.github.com/users...
[]
closed
false
{ "login": null, "id": null, "node_id": null, "avatar_url": null, "gravatar_id": null, "url": null, "html_url": null, "followers_url": null, "following_url": null, "gists_url": null, "starred_url": null, "subscriptions_url": null, "organizations_url": null, "repos_url": null, "events_url":...
[]
null
[]
1,589,874,657,000
1,589,977,387,000
1,589,977,387,000
NONE
null
Can I recommend the following: For ANLI, use https://github.com/facebookresearch/anli. As that paper says, "Our dataset is not to be confused with abductive NLI (Bhagavatula et al., 2019), which calls itself αNLI, or ART.". Indeed, the paper cited under what is currently called anli says in the abstract "We int...
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/165/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/165/timeline
null
null
{ "url": null, "html_url": null, "diff_url": null, "patch_url": null, "merged_at": null }
true
https://api.github.com/repos/huggingface/datasets/issues/164
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/164/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/164/comments
https://api.github.com/repos/huggingface/datasets/issues/164/events
https://github.com/huggingface/datasets/issues/164
620,540,250
MDU6SXNzdWU2MjA1NDAyNTA=
164
Add Spanish POR and NER Datasets
{ "login": "mrm8488", "id": 3653789, "node_id": "MDQ6VXNlcjM2NTM3ODk=", "avatar_url": "https://avatars.githubusercontent.com/u/3653789?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mrm8488", "html_url": "https://github.com/mrm8488", "followers_url": "https://api.github.com/users/mrm8488/...
[ { "id": 2067376369, "node_id": "MDU6TGFiZWwyMDY3Mzc2MzY5", "url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20request", "name": "dataset request", "color": "e99695", "default": false, "description": "Requesting to add a new dataset" } ]
closed
false
{ "login": null, "id": null, "node_id": null, "avatar_url": null, "gravatar_id": null, "url": null, "html_url": null, "followers_url": null, "following_url": null, "gists_url": null, "starred_url": null, "subscriptions_url": null, "organizations_url": null, "repos_url": null, "events_url":...
[]
null
[ "Hello @mrm8488, are these datasets official datasets published in an NLP/CL/ML venue?", "What about this one: https://github.com/ccasimiro88/TranslateAlignRetrieve?" ]
1,589,840,301,000
1,590,424,125,000
1,590,424,125,000
NONE
null
Hi guys, In order to cover multilingual support, a small step could be adding the standard datasets used for Spanish NER and POS tasks. I can provide them in raw and preprocessed formats.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/164/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/164/timeline
null
null
{ "url": null, "html_url": null, "diff_url": null, "patch_url": null, "merged_at": null }
true
https://api.github.com/repos/huggingface/datasets/issues/163
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/163/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/163/comments
https://api.github.com/repos/huggingface/datasets/issues/163/events
https://github.com/huggingface/datasets/issues/163
620,534,307
MDU6SXNzdWU2MjA1MzQzMDc=
163
[Feature request] Add cos-e v1.0
{ "login": "sarahwie", "id": 8027676, "node_id": "MDQ6VXNlcjgwMjc2NzY=", "avatar_url": "https://avatars.githubusercontent.com/u/8027676?v=4", "gravatar_id": "", "url": "https://api.github.com/users/sarahwie", "html_url": "https://github.com/sarahwie", "followers_url": "https://api.github.com/users/sarah...
[ { "id": 2067376369, "node_id": "MDU6TGFiZWwyMDY3Mzc2MzY5", "url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20request", "name": "dataset request", "color": "e99695", "default": false, "description": "Requesting to add a new dataset" } ]
closed
false
{ "login": null, "id": null, "node_id": null, "avatar_url": null, "gravatar_id": null, "url": null, "html_url": null, "followers_url": null, "following_url": null, "gists_url": null, "starred_url": null, "subscriptions_url": null, "organizations_url": null, "repos_url": null, "events_url":...
[]
null
[ "Sounds good, @mariamabarham do you want to give a look?\r\nI think we should have two configurations so we can allow either version of the dataset to be loaded with the `1.0` version being the default maybe.\r\n\r\nCc some authors of the great cos-e: @nazneenrajani @bmccann", "cos_e v1.0 is related to CQA v1.0 b...
1,589,839,526,000
1,592,349,325,000
1,592,333,526,000
NONE
null
I noticed the second release of cos-e (v1.11) is included in this repo. I wanted to request inclusion of v1.0, since this is the version on which results are reported on in [the paper](https://www.aclweb.org/anthology/P19-1487/), and v1.11 has noted [annotation](https://github.com/salesforce/cos-e/issues/2) [issues](ht...
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/163/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/163/timeline
null
null
{ "url": null, "html_url": null, "diff_url": null, "patch_url": null, "merged_at": null }
true
https://api.github.com/repos/huggingface/datasets/issues/162
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/162/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/162/comments
https://api.github.com/repos/huggingface/datasets/issues/162/events
https://github.com/huggingface/datasets/pull/162
620,513,554
MDExOlB1bGxSZXF1ZXN0NDE5NzQ4Mzky
162
fix prev files hash in map
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoest...
[]
closed
false
{ "login": null, "id": null, "node_id": null, "avatar_url": null, "gravatar_id": null, "url": null, "html_url": null, "followers_url": null, "following_url": null, "gists_url": null, "starred_url": null, "subscriptions_url": null, "organizations_url": null, "repos_url": null, "events_url":...
[]
null
[ "Awesome! ", "Hi, yes, this seems to fix #160 -- I cloned the branch locally and verified", "Perfect then :)" ]
1,589,836,851,000
1,589,837,781,000
1,589,837,780,000
MEMBER
null
Fix the `.map` issue in #160. This makes sure it takes the previous files when computing the hash.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/162/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/162/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/162", "html_url": "https://github.com/huggingface/datasets/pull/162", "diff_url": "https://github.com/huggingface/datasets/pull/162.diff", "patch_url": "https://github.com/huggingface/datasets/pull/162.patch", "merged_at": 1589837780000 }
true
https://api.github.com/repos/huggingface/datasets/issues/161
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/161/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/161/comments
https://api.github.com/repos/huggingface/datasets/issues/161/events
https://github.com/huggingface/datasets/issues/161
620,487,535
MDU6SXNzdWU2MjA0ODc1MzU=
161
Discussion on version identifier & MockDataLoaderManager for test data
{ "login": "EntilZha", "id": 1382460, "node_id": "MDQ6VXNlcjEzODI0NjA=", "avatar_url": "https://avatars.githubusercontent.com/u/1382460?v=4", "gravatar_id": "", "url": "https://api.github.com/users/EntilZha", "html_url": "https://github.com/EntilZha", "followers_url": "https://api.github.com/users/Entil...
[ { "id": 2067400324, "node_id": "MDU6TGFiZWwyMDY3NDAwMzI0", "url": "https://api.github.com/repos/huggingface/datasets/labels/generic%20discussion", "name": "generic discussion", "color": "c5def5", "default": false, "description": "Generic discussion on the library" } ]
open
false
{ "login": "patrickvonplaten", "id": 23423619, "node_id": "MDQ6VXNlcjIzNDIzNjE5", "avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4", "gravatar_id": "", "url": "https://api.github.com/users/patrickvonplaten", "html_url": "https://github.com/patrickvonplaten", "followers_url": "https://...
[ { "login": "patrickvonplaten", "id": 23423619, "node_id": "MDQ6VXNlcjIzNDIzNjE5", "avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4", "gravatar_id": "", "url": "https://api.github.com/users/patrickvonplaten", "html_url": "https://github.com/patrickvonplaten", "follo...
null
[ "usually you can replace `download` in your dataset script with `download_and_prepare()` - could you share the code for your dataset here? :-) ", "I have an initial version here: https://github.com/EntilZha/nlp/tree/master/datasets/qanta Thats pretty close to what I'll do as a PR, but still want to do some more s...
1,589,833,890,000
1,590,343,803,000
null
CONTRIBUTOR
null
Hi, I'm working on adding a dataset and ran into an error due to `download` not being defined on `MockDataLoaderManager`, but being defined in `nlp/utils/download_manager.py`. The readme step running this: `RUN_SLOW=1 pytest tests/test_dataset_common.py::DatasetTest::test_load_real_dataset_localmydatasetname` triggers ...
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/161/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/161/timeline
null
null
{ "url": null, "html_url": null, "diff_url": null, "patch_url": null, "merged_at": null }
true
https://api.github.com/repos/huggingface/datasets/issues/160
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/160/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/160/comments
https://api.github.com/repos/huggingface/datasets/issues/160/events
https://github.com/huggingface/datasets/issues/160
620,448,236
MDU6SXNzdWU2MjA0NDgyMzY=
160
caching in map causes same result to be returned for train, validation and test
{ "login": "dpressel", "id": 247881, "node_id": "MDQ6VXNlcjI0Nzg4MQ==", "avatar_url": "https://avatars.githubusercontent.com/u/247881?v=4", "gravatar_id": "", "url": "https://api.github.com/users/dpressel", "html_url": "https://github.com/dpressel", "followers_url": "https://api.github.com/users/dpresse...
[ { "id": 2067388877, "node_id": "MDU6TGFiZWwyMDY3Mzg4ODc3", "url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20bug", "name": "dataset bug", "color": "2edb81", "default": false, "description": "A bug in a dataset script provided in the library" } ]
closed
false
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoest...
[ { "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.git...
null
[ "Hi @dpressel, \r\n\r\nthanks for posting your issue! Can you maybe add a complete code snippet that we can copy paste to reproduce the error? For example, I'm not sure where the variable `train_set` comes from in your code and it seems like you are loading multiple datasets at once? ", "Hi, the full example was...
1,589,829,723,000
1,589,837,780,000
1,589,837,780,000
NONE
null
hello, I am working on a program that uses the `nlp` library with the `SST2` dataset. The rough outline of the program is: ``` import nlp as nlp_datasets ... parser.add_argument('--dataset', help='HuggingFace Datasets id', default=['glue', 'sst2'], nargs='+') ... dataset = nlp_datasets.load_dataset(*args....
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/160/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/160/timeline
null
null
{ "url": null, "html_url": null, "diff_url": null, "patch_url": null, "merged_at": null }
true
https://api.github.com/repos/huggingface/datasets/issues/159
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/159/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/159/comments
https://api.github.com/repos/huggingface/datasets/issues/159/events
https://github.com/huggingface/datasets/issues/159
620,420,700
MDU6SXNzdWU2MjA0MjA3MDA=
159
How can we add more datasets to nlp library?
{ "login": "Tahsin-Mayeesha", "id": 17886829, "node_id": "MDQ6VXNlcjE3ODg2ODI5", "avatar_url": "https://avatars.githubusercontent.com/u/17886829?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Tahsin-Mayeesha", "html_url": "https://github.com/Tahsin-Mayeesha", "followers_url": "https://api...
[]
closed
false
{ "login": null, "id": null, "node_id": null, "avatar_url": null, "gravatar_id": null, "url": null, "html_url": null, "followers_url": null, "following_url": null, "gists_url": null, "starred_url": null, "subscriptions_url": null, "organizations_url": null, "repos_url": null, "events_url":...
[]
null
[ "Found it. https://github.com/huggingface/nlp/tree/master/datasets" ]
1,589,826,931,000
1,589,827,028,000
1,589,827,027,000
NONE
null
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/159/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/159/timeline
null
null
{ "url": null, "html_url": null, "diff_url": null, "patch_url": null, "merged_at": null }
true
https://api.github.com/repos/huggingface/datasets/issues/158
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/158/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/158/comments
https://api.github.com/repos/huggingface/datasets/issues/158/events
https://github.com/huggingface/datasets/pull/158
620,396,658
MDExOlB1bGxSZXF1ZXN0NDE5NjUyNTQy
158
add Toronto Books Corpus
{ "login": "mariamabarham", "id": 38249783, "node_id": "MDQ6VXNlcjM4MjQ5Nzgz", "avatar_url": "https://avatars.githubusercontent.com/u/38249783?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mariamabarham", "html_url": "https://github.com/mariamabarham", "followers_url": "https://api.githu...
[]
closed
false
{ "login": null, "id": null, "node_id": null, "avatar_url": null, "gravatar_id": null, "url": null, "html_url": null, "followers_url": null, "following_url": null, "gists_url": null, "starred_url": null, "subscriptions_url": null, "organizations_url": null, "repos_url": null, "events_url":...
[]
null
[]
1,589,824,485,000
1,591,861,755,000
1,589,873,696,000
CONTRIBUTOR
null
This PR adds the Toronto Books Corpus. It only considers the TMX and plain text (Moses) files defined in the table **Statistics and TMX/Moses Downloads** [here](http://opus.nlpl.eu/Books.php)
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/158/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/158/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/158", "html_url": "https://github.com/huggingface/datasets/pull/158", "diff_url": "https://github.com/huggingface/datasets/pull/158.diff", "patch_url": "https://github.com/huggingface/datasets/pull/158.patch", "merged_at": null }
true
https://api.github.com/repos/huggingface/datasets/issues/157
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/157/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/157/comments
https://api.github.com/repos/huggingface/datasets/issues/157/events
https://github.com/huggingface/datasets/issues/157
620,356,542
MDU6SXNzdWU2MjAzNTY1NDI=
157
nlp.load_dataset() gives "TypeError: list_() takes exactly one argument (2 given)"
{ "login": "saahiluppal", "id": 47444392, "node_id": "MDQ6VXNlcjQ3NDQ0Mzky", "avatar_url": "https://avatars.githubusercontent.com/u/47444392?v=4", "gravatar_id": "", "url": "https://api.github.com/users/saahiluppal", "html_url": "https://github.com/saahiluppal", "followers_url": "https://api.github.com/...
[]
closed
false
{ "login": "patrickvonplaten", "id": 23423619, "node_id": "MDQ6VXNlcjIzNDIzNjE5", "avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4", "gravatar_id": "", "url": "https://api.github.com/users/patrickvonplaten", "html_url": "https://github.com/patrickvonplaten", "followers_url": "https://...
[ { "login": "patrickvonplaten", "id": 23423619, "node_id": "MDQ6VXNlcjIzNDIzNjE5", "avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4", "gravatar_id": "", "url": "https://api.github.com/users/patrickvonplaten", "html_url": "https://github.com/patrickvonplaten", "follo...
null
[ "You can just run: \r\n`val = nlp.load_dataset('squad')` \r\n\r\nif you want to have just the validation script you can also do:\r\n\r\n`val = nlp.load_dataset('squad', split=\"validation\")`", "If you want to load a local dataset, make sure you include a `./` before the folder name. ", "This happens by just do...
1,589,820,398,000
1,591,344,538,000
1,591,344,538,000
NONE
null
I'm trying to load datasets from nlp but there seems to be an error saying "TypeError: list_() takes exactly one argument (2 given)" gist can be found here https://gist.github.com/saahiluppal/c4b878f330b10b9ab9762bc0776c0a6a
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/157/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/157/timeline
null
null
{ "url": null, "html_url": null, "diff_url": null, "patch_url": null, "merged_at": null }
true
https://api.github.com/repos/huggingface/datasets/issues/156
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/156/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/156/comments
https://api.github.com/repos/huggingface/datasets/issues/156/events
https://github.com/huggingface/datasets/issues/156
620,263,687
MDU6SXNzdWU2MjAyNjM2ODc=
156
SyntaxError with WMT datasets
{ "login": "tomhosking", "id": 9419158, "node_id": "MDQ6VXNlcjk0MTkxNTg=", "avatar_url": "https://avatars.githubusercontent.com/u/9419158?v=4", "gravatar_id": "", "url": "https://api.github.com/users/tomhosking", "html_url": "https://github.com/tomhosking", "followers_url": "https://api.github.com/users...
[]
closed
false
{ "login": "patrickvonplaten", "id": 23423619, "node_id": "MDQ6VXNlcjIzNDIzNjE5", "avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4", "gravatar_id": "", "url": "https://api.github.com/users/patrickvonplaten", "html_url": "https://github.com/patrickvonplaten", "followers_url": "https://...
[ { "login": "patrickvonplaten", "id": 23423619, "node_id": "MDQ6VXNlcjIzNDIzNjE5", "avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4", "gravatar_id": "", "url": "https://api.github.com/users/patrickvonplaten", "html_url": "https://github.com/patrickvonplaten", "follo...
null
[ "Jeez - don't know what happened there :D Should be fixed now! \r\n\r\nThanks a lot for reporting this @tomhosking !", "Hi @patrickvonplaten!\r\n\r\nI'm now getting the below error:\r\n\r\n```\r\n---------------------------------------------------------------------------\r\nTypeError ...
1,589,812,698,000
1,595,522,515,000
1,595,522,515,000
NONE
null
The following snippet produces a syntax error: ``` import nlp dataset = nlp.load_dataset('wmt14') print(dataset['train'][0]) ``` ``` Traceback (most recent call last): File "/home/tom/.local/lib/python3.6/site-packages/IPython/core/interactiveshell.py", line 3326, in run_code exec(code_obj, self....
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/156/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/156/timeline
null
null
{ "url": null, "html_url": null, "diff_url": null, "patch_url": null, "merged_at": null }
true
https://api.github.com/repos/huggingface/datasets/issues/155
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/155/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/155/comments
https://api.github.com/repos/huggingface/datasets/issues/155/events
https://github.com/huggingface/datasets/pull/155
620,067,946
MDExOlB1bGxSZXF1ZXN0NDE5Mzg1ODM0
155
Include more links in README, fix typos
{ "login": "Bharat123rox", "id": 13381361, "node_id": "MDQ6VXNlcjEzMzgxMzYx", "avatar_url": "https://avatars.githubusercontent.com/u/13381361?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Bharat123rox", "html_url": "https://github.com/Bharat123rox", "followers_url": "https://api.github.c...
[]
closed
false
{ "login": null, "id": null, "node_id": null, "avatar_url": null, "gravatar_id": null, "url": null, "html_url": null, "followers_url": null, "following_url": null, "gists_url": null, "starred_url": null, "subscriptions_url": null, "organizations_url": null, "repos_url": null, "events_url":...
[]
null
[ "I fixed a conflict :) thanks !" ]
1,589,795,228,000
1,590,654,717,000
1,590,654,717,000
CONTRIBUTOR
null
Include more links and fix typos in README
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/155/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/155/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/155", "html_url": "https://github.com/huggingface/datasets/pull/155", "diff_url": "https://github.com/huggingface/datasets/pull/155.diff", "patch_url": "https://github.com/huggingface/datasets/pull/155.patch", "merged_at": 1590654717000 }
true
https://api.github.com/repos/huggingface/datasets/issues/154
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/154/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/154/comments
https://api.github.com/repos/huggingface/datasets/issues/154/events
https://github.com/huggingface/datasets/pull/154
620,059,066
MDExOlB1bGxSZXF1ZXN0NDE5Mzc4Mzgw
154
add Ubuntu Dialogs Corpus datasets
{ "login": "mariamabarham", "id": 38249783, "node_id": "MDQ6VXNlcjM4MjQ5Nzgz", "avatar_url": "https://avatars.githubusercontent.com/u/38249783?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mariamabarham", "html_url": "https://github.com/mariamabarham", "followers_url": "https://api.githu...
[]
closed
false
{ "login": null, "id": null, "node_id": null, "avatar_url": null, "gravatar_id": null, "url": null, "html_url": null, "followers_url": null, "following_url": null, "gists_url": null, "starred_url": null, "subscriptions_url": null, "organizations_url": null, "repos_url": null, "events_url":...
[]
null
[]
1,589,794,488,000
1,589,796,748,000
1,589,796,747,000
CONTRIBUTOR
null
This PR adds the Ubuntu Dialog Corpus datasets version 2.0.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/154/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/154/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/154", "html_url": "https://github.com/huggingface/datasets/pull/154", "diff_url": "https://github.com/huggingface/datasets/pull/154.diff", "patch_url": "https://github.com/huggingface/datasets/pull/154.patch", "merged_at": 1589796747000 }
true
https://api.github.com/repos/huggingface/datasets/issues/153
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/153/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/153/comments
https://api.github.com/repos/huggingface/datasets/issues/153/events
https://github.com/huggingface/datasets/issues/153
619,972,246
MDU6SXNzdWU2MTk5NzIyNDY=
153
Meta-datasets (GLUE/XTREME/...) – Special care to attributions and citations
{ "login": "thomwolf", "id": 7353373, "node_id": "MDQ6VXNlcjczNTMzNzM=", "avatar_url": "https://avatars.githubusercontent.com/u/7353373?v=4", "gravatar_id": "", "url": "https://api.github.com/users/thomwolf", "html_url": "https://github.com/thomwolf", "followers_url": "https://api.github.com/users/thomw...
[ { "id": 2067400324, "node_id": "MDU6TGFiZWwyMDY3NDAwMzI0", "url": "https://api.github.com/repos/huggingface/datasets/labels/generic%20discussion", "name": "generic discussion", "color": "c5def5", "default": false, "description": "Generic discussion on the library" } ]
open
false
{ "login": null, "id": null, "node_id": null, "avatar_url": null, "gravatar_id": null, "url": null, "html_url": null, "followers_url": null, "following_url": null, "gists_url": null, "starred_url": null, "subscriptions_url": null, "organizations_url": null, "repos_url": null, "events_url":...
[]
null
[ "As @yoavgo suggested, there should be the possibility to call a function like nlp.bib that outputs all bibtex ref from the datasets and models actually used and eventually nlp.bib.forreadme that would output the same info + versions numbers so they can be included in a readme.md file.", "Actually, double checki...
1,589,786,662,000
1,589,836,696,000
null
MEMBER
null
Meta-datasets are interesting in terms of standardized benchmarks but they also have specific behaviors, in particular in terms of attribution and authorship. It's very important that each specific dataset inside a meta dataset is properly referenced and the citation/specific homepage/etc are very visible and accessibl...
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/153/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/153/timeline
null
null
{ "url": null, "html_url": null, "diff_url": null, "patch_url": null, "merged_at": null }
true
https://api.github.com/repos/huggingface/datasets/issues/152
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/152/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/152/comments
https://api.github.com/repos/huggingface/datasets/issues/152/events
https://github.com/huggingface/datasets/pull/152
619,971,900
MDExOlB1bGxSZXF1ZXN0NDE5MzA4OTE2
152
Add GLUE config name check
{ "login": "Bharat123rox", "id": 13381361, "node_id": "MDQ6VXNlcjEzMzgxMzYx", "avatar_url": "https://avatars.githubusercontent.com/u/13381361?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Bharat123rox", "html_url": "https://github.com/Bharat123rox", "followers_url": "https://api.github.c...
[]
closed
false
{ "login": null, "id": null, "node_id": null, "avatar_url": null, "gravatar_id": null, "url": null, "html_url": null, "followers_url": null, "following_url": null, "gists_url": null, "starred_url": null, "subscriptions_url": null, "organizations_url": null, "repos_url": null, "events_url":...
[]
null
[ "If tests are being added, any guidance on where to add tests would be helpful!\r\n\r\nTagging @thomwolf for review", "Looks good to me. Is this compatible with the way we are doing tests right now @patrickvonplaten ?", "If the tests pass it should be fine :-) \r\n\r\n@Bharat123rox could you check whether the t...
1,589,786,623,000
1,590,617,352,000
1,590,617,352,000
CONTRIBUTOR
null
Fixes #130 by adding a name check to the Glue class
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/152/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/152/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/152", "html_url": "https://github.com/huggingface/datasets/pull/152", "diff_url": "https://github.com/huggingface/datasets/pull/152.diff", "patch_url": "https://github.com/huggingface/datasets/pull/152.patch", "merged_at": null }
true
https://api.github.com/repos/huggingface/datasets/issues/151
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/151/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/151/comments
https://api.github.com/repos/huggingface/datasets/issues/151/events
https://github.com/huggingface/datasets/pull/151
619,968,480
MDExOlB1bGxSZXF1ZXN0NDE5MzA2MTYz
151
Fix JSON tests.
{ "login": "jplu", "id": 959590, "node_id": "MDQ6VXNlcjk1OTU5MA==", "avatar_url": "https://avatars.githubusercontent.com/u/959590?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jplu", "html_url": "https://github.com/jplu", "followers_url": "https://api.github.com/users/jplu/followers", ...
[]
closed
false
{ "login": null, "id": null, "node_id": null, "avatar_url": null, "gravatar_id": null, "url": null, "html_url": null, "followers_url": null, "following_url": null, "gists_url": null, "starred_url": null, "subscriptions_url": null, "organizations_url": null, "repos_url": null, "events_url":...
[]
null
[]
1,589,786,258,000
1,589,786,512,000
1,589,786,511,000
CONTRIBUTOR
null
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/151/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/151/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/151", "html_url": "https://github.com/huggingface/datasets/pull/151", "diff_url": "https://github.com/huggingface/datasets/pull/151.diff", "patch_url": "https://github.com/huggingface/datasets/pull/151.patch", "merged_at": 1589786511000 }
true
https://api.github.com/repos/huggingface/datasets/issues/150
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/150/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/150/comments
https://api.github.com/repos/huggingface/datasets/issues/150/events
https://github.com/huggingface/datasets/pull/150
619,809,645
MDExOlB1bGxSZXF1ZXN0NDE5MTgyODU4
150
Add WNUT 17 NER dataset
{ "login": "stefan-it", "id": 20651387, "node_id": "MDQ6VXNlcjIwNjUxMzg3", "avatar_url": "https://avatars.githubusercontent.com/u/20651387?v=4", "gravatar_id": "", "url": "https://api.github.com/users/stefan-it", "html_url": "https://github.com/stefan-it", "followers_url": "https://api.github.com/users/...
[]
closed
false
{ "login": null, "id": null, "node_id": null, "avatar_url": null, "gravatar_id": null, "url": null, "html_url": null, "followers_url": null, "following_url": null, "gists_url": null, "starred_url": null, "subscriptions_url": null, "organizations_url": null, "repos_url": null, "events_url":...
[]
null
[ "The PR looks awesome! \r\nSince you have already added a dataset I imagine the tests as described in 5. of https://github.com/huggingface/nlp/blob/master/CONTRIBUTING.md#how-to-add-a-dataset all pass, right @stefan-it ?\r\n\r\nI think we are then good to merge this :-) @lhoestq ", "Nice !\r\n\r\nOne thing though...
1,589,753,944,000
1,590,525,479,000
1,590,525,479,000
CONTRIBUTOR
null
Hi, this PR adds the WNUT 17 dataset to `nlp`. > Emerging and Rare entity recognition > This shared task focuses on identifying unusual, previously-unseen entities in the context of emerging discussions. Named entities form the basis of many modern approaches to other tasks (like event clustering and summarisati...
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/150/reactions", "total_count": 1, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 1, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/150/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/150", "html_url": "https://github.com/huggingface/datasets/pull/150", "diff_url": "https://github.com/huggingface/datasets/pull/150.diff", "patch_url": "https://github.com/huggingface/datasets/pull/150.patch", "merged_at": 1590525479000 }
true
https://api.github.com/repos/huggingface/datasets/issues/149
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/149/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/149/comments
https://api.github.com/repos/huggingface/datasets/issues/149/events
https://github.com/huggingface/datasets/issues/149
619,735,739
MDU6SXNzdWU2MTk3MzU3Mzk=
149
[Feature request] Add Ubuntu Dialogue Corpus dataset
{ "login": "danth", "id": 28959268, "node_id": "MDQ6VXNlcjI4OTU5MjY4", "avatar_url": "https://avatars.githubusercontent.com/u/28959268?v=4", "gravatar_id": "", "url": "https://api.github.com/users/danth", "html_url": "https://github.com/danth", "followers_url": "https://api.github.com/users/danth/follow...
[ { "id": 2067376369, "node_id": "MDU6TGFiZWwyMDY3Mzc2MzY5", "url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20request", "name": "dataset request", "color": "e99695", "default": false, "description": "Requesting to add a new dataset" } ]
closed
false
{ "login": null, "id": null, "node_id": null, "avatar_url": null, "gravatar_id": null, "url": null, "html_url": null, "followers_url": null, "following_url": null, "gists_url": null, "starred_url": null, "subscriptions_url": null, "organizations_url": null, "repos_url": null, "events_url":...
[]
null
[ "@AlphaMycelium the Ubuntu Dialogue Corpus [version 2]( https://github.com/rkadlec/ubuntu-ranking-dataset-creator) is added. Note that it requires a manual download by following the download instructions in the [repos]( https://github.com/rkadlec/ubuntu-ranking-dataset-creator).\r\nMaybe we can close this issue for...
1,589,730,159,000
1,589,821,306,000
1,589,821,306,000
NONE
null
https://github.com/rkadlec/ubuntu-ranking-dataset-creator or http://dataset.cs.mcgill.ca/ubuntu-corpus-1.0/
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/149/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/149/timeline
null
null
{ "url": null, "html_url": null, "diff_url": null, "patch_url": null, "merged_at": null }
true
https://api.github.com/repos/huggingface/datasets/issues/148
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/148/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/148/comments
https://api.github.com/repos/huggingface/datasets/issues/148/events
https://github.com/huggingface/datasets/issues/148
619,590,555
MDU6SXNzdWU2MTk1OTA1NTU=
148
_download_and_prepare() got an unexpected keyword argument 'verify_infos'
{ "login": "richarddwang", "id": 17963619, "node_id": "MDQ6VXNlcjE3OTYzNjE5", "avatar_url": "https://avatars.githubusercontent.com/u/17963619?v=4", "gravatar_id": "", "url": "https://api.github.com/users/richarddwang", "html_url": "https://github.com/richarddwang", "followers_url": "https://api.github.c...
[ { "id": 2067388877, "node_id": "MDU6TGFiZWwyMDY3Mzg4ODc3", "url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20bug", "name": "dataset bug", "color": "2edb81", "default": false, "description": "A bug in a dataset script provided in the library" } ]
closed
false
{ "login": null, "id": null, "node_id": null, "avatar_url": null, "gravatar_id": null, "url": null, "html_url": null, "followers_url": null, "following_url": null, "gists_url": null, "starred_url": null, "subscriptions_url": null, "organizations_url": null, "repos_url": null, "events_url":...
[]
null
[ "Same error for dataset 'wiki40b'", "Should be fixed on master :)" ]
1,589,680,133,000
1,589,787,513,000
1,589,787,513,000
CONTRIBUTOR
null
# Reproduce In Colab, ``` %pip install -q nlp %pip install -q apache_beam mwparserfromhell dataset = nlp.load_dataset('wikipedia') ``` get ``` Downloading and preparing dataset wikipedia/20200501.aa (download: Unknown size, generated: Unknown size, total: Unknown size) to /root/.cache/huggingface/datasets/w...
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/148/reactions", "total_count": 2, "+1": 2, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/148/timeline
null
null
{ "url": null, "html_url": null, "diff_url": null, "patch_url": null, "merged_at": null }
true
https://api.github.com/repos/huggingface/datasets/issues/147
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/147/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/147/comments
https://api.github.com/repos/huggingface/datasets/issues/147/events
https://github.com/huggingface/datasets/issues/147
619,581,907
MDU6SXNzdWU2MTk1ODE5MDc=
147
Error with sklearn train_test_split
{ "login": "ClonedOne", "id": 6853743, "node_id": "MDQ6VXNlcjY4NTM3NDM=", "avatar_url": "https://avatars.githubusercontent.com/u/6853743?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ClonedOne", "html_url": "https://github.com/ClonedOne", "followers_url": "https://api.github.com/users/Cl...
[ { "id": 1935892871, "node_id": "MDU6TGFiZWwxOTM1ODkyODcx", "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement", "name": "enhancement", "color": "a2eeef", "default": true, "description": "New feature or request" } ]
closed
false
{ "login": null, "id": null, "node_id": null, "avatar_url": null, "gravatar_id": null, "url": null, "html_url": null, "followers_url": null, "following_url": null, "gists_url": null, "starred_url": null, "subscriptions_url": null, "organizations_url": null, "repos_url": null, "events_url":...
[]
null
[ "Indeed. Probably we will want to have a similar method directly in the library", "Related: #166 " ]
1,589,675,304,000
1,592,497,403,000
1,592,497,403,000
NONE
null
It would be nice if we could use sklearn `train_test_split` to quickly generate subsets from the dataset objects returned by `nlp.load_dataset`. At the moment the code: ```python data = nlp.load_dataset('imdb', cache_dir=data_cache) f_half, s_half = train_test_split(data['train'], test_size=0.5, random_state=seed)...
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/147/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/147/timeline
null
null
{ "url": null, "html_url": null, "diff_url": null, "patch_url": null, "merged_at": null }
true
https://api.github.com/repos/huggingface/datasets/issues/146
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/146/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/146/comments
https://api.github.com/repos/huggingface/datasets/issues/146/events
https://github.com/huggingface/datasets/pull/146
619,564,653
MDExOlB1bGxSZXF1ZXN0NDE5MDI5MjUx
146
Add BERTScore to metrics
{ "login": "felixgwu", "id": 7753366, "node_id": "MDQ6VXNlcjc3NTMzNjY=", "avatar_url": "https://avatars.githubusercontent.com/u/7753366?v=4", "gravatar_id": "", "url": "https://api.github.com/users/felixgwu", "html_url": "https://github.com/felixgwu", "followers_url": "https://api.github.com/users/felix...
[]
closed
false
{ "login": null, "id": null, "node_id": null, "avatar_url": null, "gravatar_id": null, "url": null, "html_url": null, "followers_url": null, "following_url": null, "gists_url": null, "starred_url": null, "subscriptions_url": null, "organizations_url": null, "repos_url": null, "events_url":...
[]
null
[]
1,589,666,979,000
1,589,754,130,000
1,589,754,129,000
CONTRIBUTOR
null
This PR adds [BERTScore](https://arxiv.org/abs/1904.09675) to metrics. Here is an example of how to use it. ```python import nlp bertscore = nlp.load_metric('metrics/bertscore') # or simply nlp.load_metric('bertscore') after this is added to huggingface's s3 bucket predictions = ['example', 'fruit'] references = [[...
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/146/reactions", "total_count": 3, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 3, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/146/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/146", "html_url": "https://github.com/huggingface/datasets/pull/146", "diff_url": "https://github.com/huggingface/datasets/pull/146.diff", "patch_url": "https://github.com/huggingface/datasets/pull/146.patch", "merged_at": 1589754129000 }
true
https://api.github.com/repos/huggingface/datasets/issues/145
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/145/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/145/comments
https://api.github.com/repos/huggingface/datasets/issues/145/events
https://github.com/huggingface/datasets/pull/145
619,480,549
MDExOlB1bGxSZXF1ZXN0NDE4OTcxMjg0
145
[AWS Tests] Follow-up PR from #144
{ "login": "patrickvonplaten", "id": 23423619, "node_id": "MDQ6VXNlcjIzNDIzNjE5", "avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4", "gravatar_id": "", "url": "https://api.github.com/users/patrickvonplaten", "html_url": "https://github.com/patrickvonplaten", "followers_url": "https://...
[]
closed
false
{ "login": null, "id": null, "node_id": null, "avatar_url": null, "gravatar_id": null, "url": null, "html_url": null, "followers_url": null, "following_url": null, "gists_url": null, "starred_url": null, "subscriptions_url": null, "organizations_url": null, "repos_url": null, "events_url":...
[]
null
[]
1,589,637,226,000
1,589,637,263,000
1,589,637,262,000
MEMBER
null
I forgot to add this line in PR #144.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/145/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/145/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/145", "html_url": "https://github.com/huggingface/datasets/pull/145", "diff_url": "https://github.com/huggingface/datasets/pull/145.diff", "patch_url": "https://github.com/huggingface/datasets/pull/145.patch", "merged_at": 1589637262000 }
true
https://api.github.com/repos/huggingface/datasets/issues/144
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/144/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/144/comments
https://api.github.com/repos/huggingface/datasets/issues/144/events
https://github.com/huggingface/datasets/pull/144
619,477,367
MDExOlB1bGxSZXF1ZXN0NDE4OTY5NjA1
144
[AWS tests] AWS test should not run for canonical datasets
{ "login": "patrickvonplaten", "id": 23423619, "node_id": "MDQ6VXNlcjIzNDIzNjE5", "avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4", "gravatar_id": "", "url": "https://api.github.com/users/patrickvonplaten", "html_url": "https://github.com/patrickvonplaten", "followers_url": "https://...
[]
closed
false
{ "login": null, "id": null, "node_id": null, "avatar_url": null, "gravatar_id": null, "url": null, "html_url": null, "followers_url": null, "following_url": null, "gists_url": null, "starred_url": null, "subscriptions_url": null, "organizations_url": null, "repos_url": null, "events_url":...
[]
null
[]
1,589,636,370,000
1,589,636,674,000
1,589,636,673,000
MEMBER
null
AWS tests should in general not run for canonical datasets. Only local tests will run in this case. This way a PR is able to pass when adding a new dataset. This PR changes the logic to the following: 1) All datasets that are present in `nlp/datasets` are tested only locally. This way when one adds a canonical da...
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/144/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/144/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/144", "html_url": "https://github.com/huggingface/datasets/pull/144", "diff_url": "https://github.com/huggingface/datasets/pull/144.diff", "patch_url": "https://github.com/huggingface/datasets/pull/144.patch", "merged_at": 1589636673000 }
true
https://api.github.com/repos/huggingface/datasets/issues/143
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/143/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/143/comments
https://api.github.com/repos/huggingface/datasets/issues/143/events
https://github.com/huggingface/datasets/issues/143
619,457,641
MDU6SXNzdWU2MTk0NTc2NDE=
143
ArrowTypeError in squad metrics
{ "login": "patil-suraj", "id": 27137566, "node_id": "MDQ6VXNlcjI3MTM3NTY2", "avatar_url": "https://avatars.githubusercontent.com/u/27137566?v=4", "gravatar_id": "", "url": "https://api.github.com/users/patil-suraj", "html_url": "https://github.com/patil-suraj", "followers_url": "https://api.github.com/...
[ { "id": 2067393914, "node_id": "MDU6TGFiZWwyMDY3MzkzOTE0", "url": "https://api.github.com/repos/huggingface/datasets/labels/metric%20bug", "name": "metric bug", "color": "25b21e", "default": false, "description": "A bug in a metric script" } ]
closed
false
{ "login": null, "id": null, "node_id": null, "avatar_url": null, "gravatar_id": null, "url": null, "html_url": null, "followers_url": null, "following_url": null, "gists_url": null, "starred_url": null, "subscriptions_url": null, "organizations_url": null, "repos_url": null, "events_url":...
[]
null
[ "There was an issue in the format, thanks.\r\nNow you can do\r\n```python3\r\nsquad_dset = nlp.load_dataset(\"squad\")\r\nsquad_metric = nlp.load_metric(\"/Users/quentinlhoest/Desktop/hf/nlp-bis/metrics/squad\")\r\npredictions = [\r\n {\"id\": v[\"id\"], \"prediction_text\": v[\"answers\"][\"text\"][0]} # take ...
1,589,630,797,000
1,590,154,732,000
1,590,154,608,000
MEMBER
null
`squad_metric.compute` is giving following error ``` ArrowTypeError: Could not convert [{'text': 'Denver Broncos'}, {'text': 'Denver Broncos'}, {'text': 'Denver Broncos'}] with type list: was not a dict, tuple, or recognized null value for conversion to struct type ``` This is how my predictions and references lo...
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/143/reactions", "total_count": 1, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 1 }
https://api.github.com/repos/huggingface/datasets/issues/143/timeline
null
null
{ "url": null, "html_url": null, "diff_url": null, "patch_url": null, "merged_at": null }
true
https://api.github.com/repos/huggingface/datasets/issues/142
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/142/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/142/comments
https://api.github.com/repos/huggingface/datasets/issues/142/events
https://github.com/huggingface/datasets/pull/142
619,450,068
MDExOlB1bGxSZXF1ZXN0NDE4OTU0OTc1
142
[WMT] Add all wmt
{ "login": "patrickvonplaten", "id": 23423619, "node_id": "MDQ6VXNlcjIzNDIzNjE5", "avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4", "gravatar_id": "", "url": "https://api.github.com/users/patrickvonplaten", "html_url": "https://github.com/patrickvonplaten", "followers_url": "https://...
[]
closed
false
{ "login": null, "id": null, "node_id": null, "avatar_url": null, "gravatar_id": null, "url": null, "html_url": null, "followers_url": null, "following_url": null, "gists_url": null, "starred_url": null, "subscriptions_url": null, "organizations_url": null, "repos_url": null, "events_url":...
[]
null
[]
1,589,628,526,000
1,589,717,901,000
1,589,717,900,000
MEMBER
null
This PR adds all wmt datasets scripts. At the moment the script is **not** functional for the language pairs "cs-en", "ru-en", "hi-en" because apparently it takes up to a week to get the manual data for these datasets: see http://ufal.mff.cuni.cz/czeng. The datasets are fully functional though for the "big" languag...
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/142/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/142/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/142", "html_url": "https://github.com/huggingface/datasets/pull/142", "diff_url": "https://github.com/huggingface/datasets/pull/142.diff", "patch_url": "https://github.com/huggingface/datasets/pull/142.patch", "merged_at": 1589717900000 }
true
https://api.github.com/repos/huggingface/datasets/issues/141
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/141/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/141/comments
https://api.github.com/repos/huggingface/datasets/issues/141/events
https://github.com/huggingface/datasets/pull/141
619,447,090
MDExOlB1bGxSZXF1ZXN0NDE4OTUzMzQw
141
[Clean up] remove bogus folder
{ "login": "patrickvonplaten", "id": 23423619, "node_id": "MDQ6VXNlcjIzNDIzNjE5", "avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4", "gravatar_id": "", "url": "https://api.github.com/users/patrickvonplaten", "html_url": "https://github.com/patrickvonplaten", "followers_url": "https://...
[]
closed
false
{ "login": null, "id": null, "node_id": null, "avatar_url": null, "gravatar_id": null, "url": null, "html_url": null, "followers_url": null, "following_url": null, "gists_url": null, "starred_url": null, "subscriptions_url": null, "organizations_url": null, "repos_url": null, "events_url":...
[]
null
[ "Same for the dataset_infos.json at the project root no ?", "Sorry guys, I haven't noticed. Thank you for mentioning it." ]
1,589,627,622,000
1,589,635,467,000
1,589,635,466,000
MEMBER
null
@mariamabarham - I think you accidentally placed it there.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/141/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/141/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/141", "html_url": "https://github.com/huggingface/datasets/pull/141", "diff_url": "https://github.com/huggingface/datasets/pull/141.diff", "patch_url": "https://github.com/huggingface/datasets/pull/141.patch", "merged_at": 1589635465000 }
true
https://api.github.com/repos/huggingface/datasets/issues/140
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/140/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/140/comments
https://api.github.com/repos/huggingface/datasets/issues/140/events
https://github.com/huggingface/datasets/pull/140
619,443,613
MDExOlB1bGxSZXF1ZXN0NDE4OTUxMzg4
140
[Tests] run local tests as default
{ "login": "patrickvonplaten", "id": 23423619, "node_id": "MDQ6VXNlcjIzNDIzNjE5", "avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4", "gravatar_id": "", "url": "https://api.github.com/users/patrickvonplaten", "html_url": "https://github.com/patrickvonplaten", "followers_url": "https://...
[]
closed
false
{ "login": null, "id": null, "node_id": null, "avatar_url": null, "gravatar_id": null, "url": null, "html_url": null, "followers_url": null, "following_url": null, "gists_url": null, "starred_url": null, "subscriptions_url": null, "organizations_url": null, "repos_url": null, "events_url":...
[]
null
[ "You are right and I think those are usual best practice :) I'm 100% fine with this^^", "Merging this for now to unblock other PRs." ]
1,589,626,566,000
1,589,635,304,000
1,589,635,303,000
MEMBER
null
This PR also enables local tests by default. I think it's safer for now to enable both local and aws tests for every commit. The problem currently is that when we do a PR to add a dataset, the dataset is not yet on AWS and therefore not tested on the PR itself. Thus the PR will always be green even if the datasets are...
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/140/reactions", "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/140/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/140", "html_url": "https://github.com/huggingface/datasets/pull/140", "diff_url": "https://github.com/huggingface/datasets/pull/140.diff", "patch_url": "https://github.com/huggingface/datasets/pull/140.patch", "merged_at": 1589635303000 }
true
https://api.github.com/repos/huggingface/datasets/issues/139
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/139/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/139/comments
https://api.github.com/repos/huggingface/datasets/issues/139/events
https://github.com/huggingface/datasets/pull/139
619,327,409
MDExOlB1bGxSZXF1ZXN0NDE4ODc4NzMy
139
Add GermEval 2014 NER dataset
{ "login": "stefan-it", "id": 20651387, "node_id": "MDQ6VXNlcjIwNjUxMzg3", "avatar_url": "https://avatars.githubusercontent.com/u/20651387?v=4", "gravatar_id": "", "url": "https://api.github.com/users/stefan-it", "html_url": "https://github.com/stefan-it", "followers_url": "https://api.github.com/users/...
[]
closed
false
{ "login": "patrickvonplaten", "id": 23423619, "node_id": "MDQ6VXNlcjIzNDIzNjE5", "avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4", "gravatar_id": "", "url": "https://api.github.com/users/patrickvonplaten", "html_url": "https://github.com/patrickvonplaten", "followers_url": "https://...
[ { "login": "patrickvonplaten", "id": 23423619, "node_id": "MDQ6VXNlcjIzNDIzNjE5", "avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4", "gravatar_id": "", "url": "https://api.github.com/users/patrickvonplaten", "html_url": "https://github.com/patrickvonplaten", "follo...
null
[ "Had really fun playing around with this new library :heart: ", "That's awesome - thanks @stefan-it :-) \r\n\r\nCould you maybe rebase to master and check if all dummy data tests are fine. I should have included the local tests directly in the test suite so that all PRs are fully checked: #140 - sorry :D ", "@p...
1,589,586,129,000
1,589,637,397,000
1,589,637,382,000
CONTRIBUTOR
null
Hi, this PR adds the GermEval 2014 NER dataset 😃 > The GermEval 2014 NER Shared Task builds on a new dataset with German Named Entity annotation [1] with the following properties: > - The data was sampled from German Wikipedia and News Corpora as a collection of citations. > - The dataset covers over 31,000...
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/139/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/139/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/139", "html_url": "https://github.com/huggingface/datasets/pull/139", "diff_url": "https://github.com/huggingface/datasets/pull/139.diff", "patch_url": "https://github.com/huggingface/datasets/pull/139.patch", "merged_at": 1589637382000 }
true
https://api.github.com/repos/huggingface/datasets/issues/138
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/138/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/138/comments
https://api.github.com/repos/huggingface/datasets/issues/138/events
https://github.com/huggingface/datasets/issues/138
619,225,191
MDU6SXNzdWU2MTkyMjUxOTE=
138
Consider renaming to nld
{ "login": "honnibal", "id": 8059750, "node_id": "MDQ6VXNlcjgwNTk3NTA=", "avatar_url": "https://avatars.githubusercontent.com/u/8059750?v=4", "gravatar_id": "", "url": "https://api.github.com/users/honnibal", "html_url": "https://github.com/honnibal", "followers_url": "https://api.github.com/users/honni...
[ { "id": 2067400324, "node_id": "MDU6TGFiZWwyMDY3NDAwMzI0", "url": "https://api.github.com/repos/huggingface/datasets/labels/generic%20discussion", "name": "generic discussion", "color": "c5def5", "default": false, "description": "Generic discussion on the library" } ]
closed
false
{ "login": null, "id": null, "node_id": null, "avatar_url": null, "gravatar_id": null, "url": null, "html_url": null, "followers_url": null, "following_url": null, "gists_url": null, "starred_url": null, "subscriptions_url": null, "organizations_url": null, "repos_url": null, "events_url":...
[]
null
[ "I would suggest `nlds`. NLP is a very general, broad and ambiguous term, the library is not about NLP (as in processing) per se, it is about accessing Natural Language related datasets. So the name should reflect its purpose.\r\n", "Chiming in to second everything @honnibal said, and to add that I think the curr...
1,589,574,207,000
1,608,238,591,000
1,601,251,690,000
NONE
null
Hey :) Just making a thread here recording what I said on Twitter, as it's impossible to follow discussion there. It's also just really not a good way to talk about this sort of thing. The issue is that modules go into the global namespace, so you shouldn't use variable names that conflict with module names. This...
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/138/reactions", "total_count": 32, "+1": 32, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/138/timeline
null
null
{ "url": null, "html_url": null, "diff_url": null, "patch_url": null, "merged_at": null }
true