| Field | Type | Details |
|---|---|---|
| url | string | lengths 58–61 |
| repository_url | string | 1 class |
| labels_url | string | lengths 72–75 |
| comments_url | string | lengths 67–70 |
| events_url | string | lengths 65–68 |
| html_url | string | lengths 46–51 |
| id | int64 | 599M–1.28B |
| node_id | string | lengths 18–32 |
| number | int64 | 1–4.56k |
| title | string | lengths 1–276 |
| user | dict | |
| labels | list | |
| state | string | 2 classes |
| locked | bool | 1 class |
| assignee | dict | |
| assignees | list | |
| milestone | dict | |
| comments | list | |
| created_at | int64 | 1,587B–1,656B |
| updated_at | int64 | 1,587B–1,656B |
| closed_at | int64 | 1,587B–1,656B |
| author_association | string | 3 classes |
| active_lock_reason | null | |
| draft | bool | 2 classes |
| pull_request | dict | |
| body | string | lengths 0–228k |
| reactions | dict | |
| timeline_url | string | lengths 67–70 |
| performed_via_github_app | null | |
| state_reason | string | 1 value |
| is_pull_request | bool | 2 classes |
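
The rows below are issues and pull requests from the `huggingface/datasets` repository, one field per line in the schema order above. As a minimal sketch of how a dataset with this schema could be loaded and inspected with the `datasets` library (the dataset identifier here is a placeholder, not the real repository for this dump):

```python
from datasets import load_dataset

# Placeholder hub id; substitute the actual dataset repo for this dump.
ds = load_dataset("user/github-issues-dump", split="train")

# The features mirror the schema table above: URLs and titles are strings,
# id/number and the *_at timestamps are int64, user/reactions are dicts.
print(ds.features)

# Filter to pull requests only, using the derived boolean column.
prs = ds.filter(lambda row: row["is_pull_request"])
print(len(prs), "pull requests")
```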

url: https://api.github.com/repos/huggingface/datasets/issues/298
repository_url: https://api.github.com/repos/huggingface/datasets
labels_url: https://api.github.com/repos/huggingface/datasets/issues/298/labels{/name}
comments_url: https://api.github.com/repos/huggingface/datasets/issues/298/comments
events_url: https://api.github.com/repos/huggingface/datasets/issues/298/events
html_url: https://github.com/huggingface/datasets/pull/298
id: 643603804
node_id: MDExOlB1bGxSZXF1ZXN0NDM4Mzc4MDM4
number: 298
title: Add searchable datasets
user: { "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoest...
labels: []
state: closed
locked: false
assignee: null
assignees: []
milestone: null
comments: [ "Looks very cool! Only looked at it superficially though", "Alright I think I've checked all your comments, thanks :)\r\n\r\nMoreover I just added a way to serialize faiss indexes.\r\nThis is important because for big datasets the index construction can take some time.\r\n\r\nExamples:\r\n\r\n```python\r\nds = nl...
created_at: 1592897583000
updated_at: 1593157844000
closed_at: 1593157843000
author_association: MEMBER
active_lock_reason: null
draft: false
pull_request: { "url": "https://api.github.com/repos/huggingface/datasets/pulls/298", "html_url": "https://github.com/huggingface/datasets/pull/298", "diff_url": "https://github.com/huggingface/datasets/pull/298.diff", "patch_url": "https://github.com/huggingface/datasets/pull/298.patch", "merged_at": 1593157843000 }
body: # Better support for Numpy format + Add Indexed Datasets I was working on adding Indexed Datasets but in the meantime I had to also add more support for Numpy arrays in the lib. ## Better support for Numpy format New features: - New fast method to convert Numpy arrays from Arrow structure (up to x100 speed up...
reactions: { "url": "https://api.github.com/repos/huggingface/datasets/issues/298/reactions", "total_count": 1, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 1, "eyes": 0 }
timeline_url: https://api.github.com/repos/huggingface/datasets/issues/298/timeline
performed_via_github_app: null
state_reason: null
is_pull_request: true

url: https://api.github.com/repos/huggingface/datasets/issues/297
repository_url: https://api.github.com/repos/huggingface/datasets
labels_url: https://api.github.com/repos/huggingface/datasets/issues/297/labels{/name}
comments_url: https://api.github.com/repos/huggingface/datasets/issues/297/comments
events_url: https://api.github.com/repos/huggingface/datasets/issues/297/events
html_url: https://github.com/huggingface/datasets/issues/297
id: 643444625
node_id: MDU6SXNzdWU2NDM0NDQ2MjU=
number: 297
title: Error in Demo for Specific Datasets
user: { "login": "s-jse", "id": 60150701, "node_id": "MDQ6VXNlcjYwMTUwNzAx", "avatar_url": "https://avatars.githubusercontent.com/u/60150701?v=4", "gravatar_id": "", "url": "https://api.github.com/users/s-jse", "html_url": "https://github.com/s-jse", "followers_url": "https://api.github.com/users/s-jse/follow...
labels: [ { "id": 2107841032, "node_id": "MDU6TGFiZWwyMTA3ODQxMDMy", "url": "https://api.github.com/repos/huggingface/datasets/labels/nlp-viewer", "name": "nlp-viewer", "color": "94203D", "default": false, "description": "" } ]
state: closed
locked: false
assignee: null
assignees: []
milestone: null
comments: [ "Thanks for reporting these errors :)\r\n\r\nI can actually see two issues here.\r\n\r\nFirst, datasets like `natural_questions` require apache_beam to be processed. Right now the import is not at the right place so we have this error message. However, even the imports are fixed, the nlp viewer doesn't actually hav...
created_at: 1592872722000
updated_at: 1595007786000
closed_at: 1595007786000
author_association: NONE
active_lock_reason: null
draft: null
pull_request: null
body: Selecting `natural_questions` or `newsroom` dataset in the online demo results in an error similar to the following. ![image](https://user-images.githubusercontent.com/60150701/85347842-ac861900-b4ae-11ea-98c4-a53a00934783.png)
reactions: { "url": "https://api.github.com/repos/huggingface/datasets/issues/297/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
timeline_url: https://api.github.com/repos/huggingface/datasets/issues/297/timeline
performed_via_github_app: null
state_reason: completed
is_pull_request: false

url: https://api.github.com/repos/huggingface/datasets/issues/296
repository_url: https://api.github.com/repos/huggingface/datasets
labels_url: https://api.github.com/repos/huggingface/datasets/issues/296/labels{/name}
comments_url: https://api.github.com/repos/huggingface/datasets/issues/296/comments
events_url: https://api.github.com/repos/huggingface/datasets/issues/296/events
html_url: https://github.com/huggingface/datasets/issues/296
id: 643423717
node_id: MDU6SXNzdWU2NDM0MjM3MTc=
number: 296
title: snli -1 labels
user: { "login": "jxmorris12", "id": 13238952, "node_id": "MDQ6VXNlcjEzMjM4OTUy", "avatar_url": "https://avatars.githubusercontent.com/u/13238952?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jxmorris12", "html_url": "https://github.com/jxmorris12", "followers_url": "https://api.github.com/use...
labels: []
state: closed
locked: false
assignee: null
assignees: []
milestone: null
comments: [ "@jxmorris12 , we use `-1` to label examples for which `gold label` is missing (`gold label = -` in the original dataset). ", "Thanks @mariamabarham! so the original dataset is missing some labels? That is weird. Is standard practice just to discard those examples training/eval?", "Yes the original dataset is...
created_at: 1592868810000
updated_at: 1592923319000
closed_at: 1592923318000
author_association: CONTRIBUTOR
active_lock_reason: null
draft: null
pull_request: null
body: I'm trying to train a model on the SNLI dataset. Why does it have so many -1 labels? ``` import nlp from collections import Counter data = nlp.load_dataset('snli')['train'] print(Counter(data['label'])) Counter({0: 183416, 2: 183187, 1: 182764, -1: 785}) ```
reactions: { "url": "https://api.github.com/repos/huggingface/datasets/issues/296/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
timeline_url: https://api.github.com/repos/huggingface/datasets/issues/296/timeline
performed_via_github_app: null
state_reason: completed
is_pull_request: false
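
As the comment thread for #296 explains, `-1` marks SNLI examples whose gold label is missing, and the usual practice is to discard them. A minimal sketch of that filtering step (written against the current `datasets` package; the issue itself predates it and uses `nlp`):

```python
from datasets import load_dataset

snli = load_dataset("snli", split="train")

# Drop the examples whose gold label is missing (label == -1).
snli_clean = snli.filter(lambda example: example["label"] != -1)
print(len(snli), "->", len(snli_clean))
```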

url: https://api.github.com/repos/huggingface/datasets/issues/295
repository_url: https://api.github.com/repos/huggingface/datasets
labels_url: https://api.github.com/repos/huggingface/datasets/issues/295/labels{/name}
comments_url: https://api.github.com/repos/huggingface/datasets/issues/295/comments
events_url: https://api.github.com/repos/huggingface/datasets/issues/295/events
html_url: https://github.com/huggingface/datasets/issues/295
id: 643245412
node_id: MDU6SXNzdWU2NDMyNDU0MTI=
number: 295
title: Improve input warning for evaluation metrics
user: { "login": "Tiiiger", "id": 19514537, "node_id": "MDQ6VXNlcjE5NTE0NTM3", "avatar_url": "https://avatars.githubusercontent.com/u/19514537?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Tiiiger", "html_url": "https://github.com/Tiiiger", "followers_url": "https://api.github.com/users/Tiiige...
labels: []
state: closed
locked: false
assignee: null
assignees: []
milestone: null
comments: []
created_at: 1592846937000
updated_at: 1592923657000
closed_at: 1592923657000
author_association: NONE
active_lock_reason: null
draft: null
pull_request: null
body: Hi, I am the author of `bert_score`. Recently, we received [ an issue ](https://github.com/Tiiiger/bert_score/issues/62) reporting a problem in using `bert_score` from the `nlp` package (also see #238 in this repo). After looking into this, I realized that the problem arises from the format `nlp.Metric` takes inpu...
reactions: { "url": "https://api.github.com/repos/huggingface/datasets/issues/295/reactions", "total_count": 2, "+1": 2, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
timeline_url: https://api.github.com/repos/huggingface/datasets/issues/295/timeline
performed_via_github_app: null
state_reason: completed
is_pull_request: false

url: https://api.github.com/repos/huggingface/datasets/issues/294
repository_url: https://api.github.com/repos/huggingface/datasets
labels_url: https://api.github.com/repos/huggingface/datasets/issues/294/labels{/name}
comments_url: https://api.github.com/repos/huggingface/datasets/issues/294/comments
events_url: https://api.github.com/repos/huggingface/datasets/issues/294/events
html_url: https://github.com/huggingface/datasets/issues/294
id: 643181179
node_id: MDU6SXNzdWU2NDMxODExNzk=
number: 294
title: Cannot load arxiv dataset on MacOS?
user: { "login": "JohnGiorgi", "id": 8917831, "node_id": "MDQ6VXNlcjg5MTc4MzE=", "avatar_url": "https://avatars.githubusercontent.com/u/8917831?v=4", "gravatar_id": "", "url": "https://api.github.com/users/JohnGiorgi", "html_url": "https://github.com/JohnGiorgi", "followers_url": "https://api.github.com/users...
labels: [ { "id": 2067388877, "node_id": "MDU6TGFiZWwyMDY3Mzg4ODc3", "url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20bug", "name": "dataset bug", "color": "2edb81", "default": false, "description": "A bug in a dataset script provided in the library" } ]
state: closed
locked: false
assignee: null
assignees: []
milestone: null
comments: [ "I couldn't replicate this issue on my macbook :/\r\nCould you try to play with different encodings in `with open(path, encoding=...) as f` in scientific_papers.py:L108 ?", "I was able to track down the file causing the problem by adding the following to `scientific_papers.py` (starting at line 116):\r\n\r\n```py...
created_at: 1592840815000
updated_at: 1593530710000
closed_at: 1593530710000
author_association: CONTRIBUTOR
active_lock_reason: null
draft: null
pull_request: null
body: I am having trouble loading the `"arxiv"` config from the `"scientific_papers"` dataset on MacOS. When I try loading the dataset with: ```python arxiv = nlp.load_dataset("scientific_papers", "arxiv") ``` I get the following stack trace: ```bash JSONDecodeError Traceback (most recen...
reactions: { "url": "https://api.github.com/repos/huggingface/datasets/issues/294/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
timeline_url: https://api.github.com/repos/huggingface/datasets/issues/294/timeline
performed_via_github_app: null
state_reason: completed
is_pull_request: false

url: https://api.github.com/repos/huggingface/datasets/issues/293
repository_url: https://api.github.com/repos/huggingface/datasets
labels_url: https://api.github.com/repos/huggingface/datasets/issues/293/labels{/name}
comments_url: https://api.github.com/repos/huggingface/datasets/issues/293/comments
events_url: https://api.github.com/repos/huggingface/datasets/issues/293/events
html_url: https://github.com/huggingface/datasets/pull/293
id: 642942182
node_id: MDExOlB1bGxSZXF1ZXN0NDM3ODM1ODI4
number: 293
title: Don't test community datasets
user: { "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoest...
labels: []
state: closed
locked: false
assignee: null
assignees: []
milestone: null
comments: []
created_at: 1592820933000
updated_at: 1592824020000
closed_at: 1592824019000
author_association: MEMBER
active_lock_reason: null
draft: false
pull_request: { "url": "https://api.github.com/repos/huggingface/datasets/pulls/293", "html_url": "https://github.com/huggingface/datasets/pull/293", "diff_url": "https://github.com/huggingface/datasets/pull/293.diff", "patch_url": "https://github.com/huggingface/datasets/pull/293.patch", "merged_at": 1592824019000 }
body: This PR disables testing for community datasets on aws. It should fix the CI that is currently failing.
reactions: { "url": "https://api.github.com/repos/huggingface/datasets/issues/293/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
timeline_url: https://api.github.com/repos/huggingface/datasets/issues/293/timeline
performed_via_github_app: null
state_reason: null
is_pull_request: true

url: https://api.github.com/repos/huggingface/datasets/issues/292
repository_url: https://api.github.com/repos/huggingface/datasets
labels_url: https://api.github.com/repos/huggingface/datasets/issues/292/labels{/name}
comments_url: https://api.github.com/repos/huggingface/datasets/issues/292/comments
events_url: https://api.github.com/repos/huggingface/datasets/issues/292/events
html_url: https://github.com/huggingface/datasets/pull/292
id: 642897797
node_id: MDExOlB1bGxSZXF1ZXN0NDM3Nzk4NTM2
number: 292
title: Update metadata for x_stance dataset
user: { "login": "jvamvas", "id": 5830820, "node_id": "MDQ6VXNlcjU4MzA4MjA=", "avatar_url": "https://avatars.githubusercontent.com/u/5830820?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jvamvas", "html_url": "https://github.com/jvamvas", "followers_url": "https://api.github.com/users/jvamvas/...
labels: []
state: closed
locked: false
assignee: null
assignees: []
milestone: null
comments: [ "Great! Thanks @jvamvas for these updates.\r\n", "I have fixed a warning. The remaining test failure is due to an unrelated dataset.", "We just fixed the other dataset on master. Could you rebase from master and push to rerun the CI ?" ]
created_at: 1592817206000
updated_at: 1592899644000
closed_at: 1592899644000
author_association: CONTRIBUTOR
active_lock_reason: null
draft: false
pull_request: { "url": "https://api.github.com/repos/huggingface/datasets/pulls/292", "html_url": "https://github.com/huggingface/datasets/pull/292", "diff_url": "https://github.com/huggingface/datasets/pull/292.diff", "patch_url": "https://github.com/huggingface/datasets/pull/292.patch", "merged_at": 1592899644000 }
body: Thank you for featuring the x_stance dataset in your library. This PR updates some metadata: - Citation: Replace preprint with proceedings - URL: Use a URL with long-term availability
reactions: { "url": "https://api.github.com/repos/huggingface/datasets/issues/292/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
timeline_url: https://api.github.com/repos/huggingface/datasets/issues/292/timeline
performed_via_github_app: null
state_reason: null
is_pull_request: true

url: https://api.github.com/repos/huggingface/datasets/issues/291
repository_url: https://api.github.com/repos/huggingface/datasets
labels_url: https://api.github.com/repos/huggingface/datasets/issues/291/labels{/name}
comments_url: https://api.github.com/repos/huggingface/datasets/issues/291/comments
events_url: https://api.github.com/repos/huggingface/datasets/issues/291/events
html_url: https://github.com/huggingface/datasets/pull/291
id: 642688450
node_id: MDExOlB1bGxSZXF1ZXN0NDM3NjM1NjMy
number: 291
title: break statement not required
user: { "login": "mayurnewase", "id": 12967587, "node_id": "MDQ6VXNlcjEyOTY3NTg3", "avatar_url": "https://avatars.githubusercontent.com/u/12967587?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mayurnewase", "html_url": "https://github.com/mayurnewase", "followers_url": "https://api.github.com/...
labels: []
state: closed
locked: false
assignee: null
assignees: []
milestone: null
comments: [ "I guess,test failing due to connection error?", "We just fixed the other dataset on master. Could you rebase from master and push to rerun the CI ?", "If I'm not wrong this function returns None if no main class was found.\r\nI think it makes things less clear not to have a return at the end of the function.\r...
created_at: 1592790055000
updated_at: 1592935078000
closed_at: 1592905022000
author_association: NONE
active_lock_reason: null
draft: false
pull_request: { "url": "https://api.github.com/repos/huggingface/datasets/pulls/291", "html_url": "https://github.com/huggingface/datasets/pull/291", "diff_url": "https://github.com/huggingface/datasets/pull/291.diff", "patch_url": "https://github.com/huggingface/datasets/pull/291.patch", "merged_at": null }
reactions: { "url": "https://api.github.com/repos/huggingface/datasets/issues/291/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
timeline_url: https://api.github.com/repos/huggingface/datasets/issues/291/timeline
performed_via_github_app: null
state_reason: null
is_pull_request: true

url: https://api.github.com/repos/huggingface/datasets/issues/290
repository_url: https://api.github.com/repos/huggingface/datasets
labels_url: https://api.github.com/repos/huggingface/datasets/issues/290/labels{/name}
comments_url: https://api.github.com/repos/huggingface/datasets/issues/290/comments
events_url: https://api.github.com/repos/huggingface/datasets/issues/290/events
html_url: https://github.com/huggingface/datasets/issues/290
id: 641978286
node_id: MDU6SXNzdWU2NDE5NzgyODY=
number: 290
title: ConnectionError - Eli5 dataset download
user: { "login": "JovanNj", "id": 8490096, "node_id": "MDQ6VXNlcjg0OTAwOTY=", "avatar_url": "https://avatars.githubusercontent.com/u/8490096?v=4", "gravatar_id": "", "url": "https://api.github.com/users/JovanNj", "html_url": "https://github.com/JovanNj", "followers_url": "https://api.github.com/users/JovanNj/...
labels: []
state: closed
locked: false
assignee: null
assignees: []
milestone: null
comments: [ "It should ne fixed now, thanks for reporting this one :)\r\nIt was an issue on our google storage.\r\n\r\nLet me now if you're still facing this issue.", "It works now, thanks for prompt help!" ]
created_at: 1592574033000
updated_at: 1592659344000
closed_at: 1592659344000
author_association: NONE
active_lock_reason: null
draft: null
pull_request: null
body: Hi, I have a problem with downloading Eli5 dataset. When typing `nlp.load_dataset('eli5')`, I get ConnectionError: Couldn't reach https://storage.googleapis.com/huggingface-nlp/cache/datasets/eli5/LFQA_reddit/1.0.0/explain_like_im_five-train_eli5.arrow I would appreciate if you could help me with this issue.
reactions: { "url": "https://api.github.com/repos/huggingface/datasets/issues/290/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
timeline_url: https://api.github.com/repos/huggingface/datasets/issues/290/timeline
performed_via_github_app: null
state_reason: completed
is_pull_request: false

url: https://api.github.com/repos/huggingface/datasets/issues/289
repository_url: https://api.github.com/repos/huggingface/datasets
labels_url: https://api.github.com/repos/huggingface/datasets/issues/289/labels{/name}
comments_url: https://api.github.com/repos/huggingface/datasets/issues/289/comments
events_url: https://api.github.com/repos/huggingface/datasets/issues/289/events
html_url: https://github.com/huggingface/datasets/pull/289
id: 641934194
node_id: MDExOlB1bGxSZXF1ZXN0NDM3MDc0MTM3
number: 289
title: update xsum
user: { "login": "mariamabarham", "id": 38249783, "node_id": "MDQ6VXNlcjM4MjQ5Nzgz", "avatar_url": "https://avatars.githubusercontent.com/u/38249783?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mariamabarham", "html_url": "https://github.com/mariamabarham", "followers_url": "https://api.githu...
labels: []
state: closed
locked: false
assignee: null
assignees: []
milestone: null
comments: [ "Looks cool!\r\n@mariamabarham can you add a detailed description here what exactly is changed and how the user can load xsum now?", "And a rebase should solve the conflicts", "This is a super useful PR :-) @sshleifer - maybe you can take a look at the updated version of xsum if you can use it for your use case...
created_at: 1592569712000
updated_at: 1592832446000
closed_at: 1592810407000
author_association: CONTRIBUTOR
active_lock_reason: null
draft: false
pull_request: { "url": "https://api.github.com/repos/huggingface/datasets/pulls/289", "html_url": "https://github.com/huggingface/datasets/pull/289", "diff_url": "https://github.com/huggingface/datasets/pull/289.diff", "patch_url": "https://github.com/huggingface/datasets/pull/289.patch", "merged_at": 1592810407000 }
body: This PR makes the following update to the xsum dataset: - Manual download is not required anymore - dataset can be loaded as follow: `nlp.load_dataset('xsum')` **Important** Instead of using on outdated url to download the data: "https://raw.githubusercontent.com/EdinburghNLP/XSum/master/XSum-Dataset/XSum...
reactions: { "url": "https://api.github.com/repos/huggingface/datasets/issues/289/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
timeline_url: https://api.github.com/repos/huggingface/datasets/issues/289/timeline
performed_via_github_app: null
state_reason: null
is_pull_request: true

url: https://api.github.com/repos/huggingface/datasets/issues/288
repository_url: https://api.github.com/repos/huggingface/datasets
labels_url: https://api.github.com/repos/huggingface/datasets/issues/288/labels{/name}
comments_url: https://api.github.com/repos/huggingface/datasets/issues/288/comments
events_url: https://api.github.com/repos/huggingface/datasets/issues/288/events
html_url: https://github.com/huggingface/datasets/issues/288
id: 641888610
node_id: MDU6SXNzdWU2NDE4ODg2MTA=
number: 288
title: Error at the first example in README: AttributeError: module 'dill' has no attribute '_dill'
user: { "login": "wutong8023", "id": 14964542, "node_id": "MDQ6VXNlcjE0OTY0NTQy", "avatar_url": "https://avatars.githubusercontent.com/u/14964542?v=4", "gravatar_id": "", "url": "https://api.github.com/users/wutong8023", "html_url": "https://github.com/wutong8023", "followers_url": "https://api.github.com/use...
labels: []
state: closed
locked: false
assignee: null
assignees: []
milestone: null
comments: [ "It looks like the bug comes from `dill`. Which version of `dill` are you using ?", "Thank you. It is version 0.2.6, which version is better?", "0.2.6 is three years old now, maybe try a more recent one, e.g. the current 0.3.2 if you can?", "Thanks guys! I upgraded dill and it works.", "Awesome" ]
created_at: 1592564482000
updated_at: 1592730311000
closed_at: 1592730311000
author_association: NONE
active_lock_reason: null
draft: null
pull_request: null
body: /Users/parasol_tree/anaconda3/lib/python3.6/site-packages/tensorflow/python/framework/dtypes.py:469: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'. _np_qint8 = np.dtype([("qint8", np.int8, 1)]) /Users/...
reactions: { "url": "https://api.github.com/repos/huggingface/datasets/issues/288/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
timeline_url: https://api.github.com/repos/huggingface/datasets/issues/288/timeline
performed_via_github_app: null
state_reason: completed
is_pull_request: false

url: https://api.github.com/repos/huggingface/datasets/issues/287
repository_url: https://api.github.com/repos/huggingface/datasets
labels_url: https://api.github.com/repos/huggingface/datasets/issues/287/labels{/name}
comments_url: https://api.github.com/repos/huggingface/datasets/issues/287/comments
events_url: https://api.github.com/repos/huggingface/datasets/issues/287/events
html_url: https://github.com/huggingface/datasets/pull/287
id: 641800227
node_id: MDExOlB1bGxSZXF1ZXN0NDM2OTY0NTg0
number: 287
title: fix squad_v2 metric
user: { "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoest...
labels: []
state: closed
locked: false
assignee: null
assignees: []
milestone: null
comments: []
created_at: 1592555086000
updated_at: 1592555623000
closed_at: 1592555621000
author_association: MEMBER
active_lock_reason: null
draft: false
pull_request: { "url": "https://api.github.com/repos/huggingface/datasets/pulls/287", "html_url": "https://github.com/huggingface/datasets/pull/287", "diff_url": "https://github.com/huggingface/datasets/pull/287.diff", "patch_url": "https://github.com/huggingface/datasets/pull/287.patch", "merged_at": 1592555621000 }
body: Fix #280 The imports were wrong
reactions: { "url": "https://api.github.com/repos/huggingface/datasets/issues/287/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
timeline_url: https://api.github.com/repos/huggingface/datasets/issues/287/timeline
performed_via_github_app: null
state_reason: null
is_pull_request: true

url: https://api.github.com/repos/huggingface/datasets/issues/286
repository_url: https://api.github.com/repos/huggingface/datasets
labels_url: https://api.github.com/repos/huggingface/datasets/issues/286/labels{/name}
comments_url: https://api.github.com/repos/huggingface/datasets/issues/286/comments
events_url: https://api.github.com/repos/huggingface/datasets/issues/286/events
html_url: https://github.com/huggingface/datasets/pull/286
id: 641585758
node_id: MDExOlB1bGxSZXF1ZXN0NDM2NzkzMjI4
number: 286
title: Add ANLI dataset.
user: { "login": "easonnie", "id": 11016329, "node_id": "MDQ6VXNlcjExMDE2MzI5", "avatar_url": "https://avatars.githubusercontent.com/u/11016329?v=4", "gravatar_id": "", "url": "https://api.github.com/users/easonnie", "html_url": "https://github.com/easonnie", "followers_url": "https://api.github.com/users/eas...
labels: []
state: closed
locked: false
assignee: null
assignees: []
milestone: null
comments: [ "Awesome!! Thanks @easonnie.\r\nLet's wait for additional reviews maybe from @lhoestq @patrickvonplaten @jplu" ]
created_at: 1592519250000
updated_at: 1592828607000
closed_at: 1592828607000
author_association: CONTRIBUTOR
active_lock_reason: null
draft: false
pull_request: { "url": "https://api.github.com/repos/huggingface/datasets/pulls/286", "html_url": "https://github.com/huggingface/datasets/pull/286", "diff_url": "https://github.com/huggingface/datasets/pull/286.diff", "patch_url": "https://github.com/huggingface/datasets/pull/286.patch", "merged_at": 1592828606000 }
body: I completed all the steps in https://github.com/huggingface/nlp/blob/master/CONTRIBUTING.md#how-to-add-a-dataset and push the code for ANLI. Please let me know if there are any errors.
reactions: { "url": "https://api.github.com/repos/huggingface/datasets/issues/286/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
timeline_url: https://api.github.com/repos/huggingface/datasets/issues/286/timeline
performed_via_github_app: null
state_reason: null
is_pull_request: true

url: https://api.github.com/repos/huggingface/datasets/issues/285
repository_url: https://api.github.com/repos/huggingface/datasets
labels_url: https://api.github.com/repos/huggingface/datasets/issues/285/labels{/name}
comments_url: https://api.github.com/repos/huggingface/datasets/issues/285/comments
events_url: https://api.github.com/repos/huggingface/datasets/issues/285/events
html_url: https://github.com/huggingface/datasets/pull/285
id: 641360702
node_id: MDExOlB1bGxSZXF1ZXN0NDM2NjAyMjk4
number: 285
title: Consistent formatting of citations
user: { "login": "mariamabarham", "id": 38249783, "node_id": "MDQ6VXNlcjM4MjQ5Nzgz", "avatar_url": "https://avatars.githubusercontent.com/u/38249783?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mariamabarham", "html_url": "https://github.com/mariamabarham", "followers_url": "https://api.githu...
labels: []
state: closed
locked: false
assignee: null
assignees: []
milestone: null
comments: [ "Circle CI shuold be green :-) " ]
created_at: 1592497523000
updated_at: 1592813365000
closed_at: 1592813364000
author_association: CONTRIBUTOR
active_lock_reason: null
draft: false
pull_request: { "url": "https://api.github.com/repos/huggingface/datasets/pulls/285", "html_url": "https://github.com/huggingface/datasets/pull/285", "diff_url": "https://github.com/huggingface/datasets/pull/285.diff", "patch_url": "https://github.com/huggingface/datasets/pull/285.patch", "merged_at": 1592813363000 }
body: #283
reactions: { "url": "https://api.github.com/repos/huggingface/datasets/issues/285/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
timeline_url: https://api.github.com/repos/huggingface/datasets/issues/285/timeline
performed_via_github_app: null
state_reason: null
is_pull_request: true

url: https://api.github.com/repos/huggingface/datasets/issues/284
repository_url: https://api.github.com/repos/huggingface/datasets
labels_url: https://api.github.com/repos/huggingface/datasets/issues/284/labels{/name}
comments_url: https://api.github.com/repos/huggingface/datasets/issues/284/comments
events_url: https://api.github.com/repos/huggingface/datasets/issues/284/events
html_url: https://github.com/huggingface/datasets/pull/284
id: 641337217
node_id: MDExOlB1bGxSZXF1ZXN0NDM2NTgxODQ2
number: 284
title: Fix manual download instructions
user: { "login": "patrickvonplaten", "id": 23423619, "node_id": "MDQ6VXNlcjIzNDIzNjE5", "avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4", "gravatar_id": "", "url": "https://api.github.com/users/patrickvonplaten", "html_url": "https://github.com/patrickvonplaten", "followers_url": "https://...
labels: []
state: closed
locked: false
assignee: null
assignees: []
milestone: null
comments: [ "Verified that this works, thanks!", "But I get\r\n```python\r\nConnectionError: Couldn't reach https://s3.amazonaws.com/datasets.huggingface.co/nlp/datasets/./datasets/wmt16/wmt16.py\r\n```\r\nWhen I try from jupyter on brutasse or my mac. (the jupyter server is run from transformers).\r\n\r\n\r\nBoth machines c...
created_at: 1592495997000
updated_at: 1592555061000
closed_at: 1592555059000
author_association: MEMBER
active_lock_reason: null
draft: false
pull_request: { "url": "https://api.github.com/repos/huggingface/datasets/pulls/284", "html_url": "https://github.com/huggingface/datasets/pull/284", "diff_url": "https://github.com/huggingface/datasets/pull/284.diff", "patch_url": "https://github.com/huggingface/datasets/pull/284.patch", "merged_at": 1592555059000 }
body: This PR replaces the static `DatasetBulider` variable `MANUAL_DOWNLOAD_INSTRUCTIONS` by a property function `manual_download_instructions()`. Some datasets like XTREME and all WMT need the manual data dir only for a small fraction of the possible configs. After some brainstorming with @mariamabarham and @lhoestq...
reactions: { "url": "https://api.github.com/repos/huggingface/datasets/issues/284/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
timeline_url: https://api.github.com/repos/huggingface/datasets/issues/284/timeline
performed_via_github_app: null
state_reason: null
is_pull_request: true

url: https://api.github.com/repos/huggingface/datasets/issues/283
repository_url: https://api.github.com/repos/huggingface/datasets
labels_url: https://api.github.com/repos/huggingface/datasets/issues/283/labels{/name}
comments_url: https://api.github.com/repos/huggingface/datasets/issues/283/comments
events_url: https://api.github.com/repos/huggingface/datasets/issues/283/events
html_url: https://github.com/huggingface/datasets/issues/283
id: 641270439
node_id: MDU6SXNzdWU2NDEyNzA0Mzk=
number: 283
title: Consistent formatting of citations
user: { "login": "srush", "id": 35882, "node_id": "MDQ6VXNlcjM1ODgy", "avatar_url": "https://avatars.githubusercontent.com/u/35882?v=4", "gravatar_id": "", "url": "https://api.github.com/users/srush", "html_url": "https://github.com/srush", "followers_url": "https://api.github.com/users/srush/followers", "f...
labels: []
state: closed
locked: false
assignee: { "login": "mariamabarham", "id": 38249783, "node_id": "MDQ6VXNlcjM4MjQ5Nzgz", "avatar_url": "https://avatars.githubusercontent.com/u/38249783?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mariamabarham", "html_url": "https://github.com/mariamabarham", "followers_url": "https://api.githu...
assignees: [ { "login": "mariamabarham", "id": 38249783, "node_id": "MDQ6VXNlcjM4MjQ5Nzgz", "avatar_url": "https://avatars.githubusercontent.com/u/38249783?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mariamabarham", "html_url": "https://github.com/mariamabarham", "followers_url"...
milestone: null
comments: []
created_at: 1592491725000
updated_at: 1592847046000
closed_at: 1592847046000
author_association: CONTRIBUTOR
active_lock_reason: null
draft: null
pull_request: null
body: The citations are all of a different format, some have "```" and have text inside, others are proper bibtex. Can we make it so that they all are proper citations, i.e. parse by the bibtex spec: https://bibtexparser.readthedocs.io/en/master/
reactions: { "url": "https://api.github.com/repos/huggingface/datasets/issues/283/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
timeline_url: https://api.github.com/repos/huggingface/datasets/issues/283/timeline
performed_via_github_app: null
state_reason: completed
is_pull_request: false

url: https://api.github.com/repos/huggingface/datasets/issues/282
repository_url: https://api.github.com/repos/huggingface/datasets
labels_url: https://api.github.com/repos/huggingface/datasets/issues/282/labels{/name}
comments_url: https://api.github.com/repos/huggingface/datasets/issues/282/comments
events_url: https://api.github.com/repos/huggingface/datasets/issues/282/events
html_url: https://github.com/huggingface/datasets/pull/282
id: 641217759
node_id: MDExOlB1bGxSZXF1ZXN0NDM2NDgxNzMy
number: 282
title: Update dataset_info from gcs
user: { "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoest...
labels: []
state: closed
locked: false
assignee: null
assignees: []
milestone: null
comments: []
created_at: 1592487675000
updated_at: 1592497492000
closed_at: 1592497491000
author_association: MEMBER
active_lock_reason: null
draft: false
pull_request: { "url": "https://api.github.com/repos/huggingface/datasets/pulls/282", "html_url": "https://github.com/huggingface/datasets/pull/282", "diff_url": "https://github.com/huggingface/datasets/pull/282.diff", "patch_url": "https://github.com/huggingface/datasets/pull/282.patch", "merged_at": 1592497491000 }
body: Some datasets are hosted on gcs (wikipedia for example). In this PR I make sure that, when a user loads such datasets, the file_instructions are built using the dataset_info.json from gcs and not from the info extracted from the local `dataset_infos.json` (the one that contain the info for each config). Indeed local fi...
reactions: { "url": "https://api.github.com/repos/huggingface/datasets/issues/282/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
timeline_url: https://api.github.com/repos/huggingface/datasets/issues/282/timeline
performed_via_github_app: null
state_reason: null
is_pull_request: true

url: https://api.github.com/repos/huggingface/datasets/issues/281
repository_url: https://api.github.com/repos/huggingface/datasets
labels_url: https://api.github.com/repos/huggingface/datasets/issues/281/labels{/name}
comments_url: https://api.github.com/repos/huggingface/datasets/issues/281/comments
events_url: https://api.github.com/repos/huggingface/datasets/issues/281/events
html_url: https://github.com/huggingface/datasets/issues/281
id: 641067856
node_id: MDU6SXNzdWU2NDEwNjc4NTY=
number: 281
title: Private/sensitive data
user: { "login": "MFreidank", "id": 6368040, "node_id": "MDQ6VXNlcjYzNjgwNDA=", "avatar_url": "https://avatars.githubusercontent.com/u/6368040?v=4", "gravatar_id": "", "url": "https://api.github.com/users/MFreidank", "html_url": "https://github.com/MFreidank", "followers_url": "https://api.github.com/users/MF...
labels: []
state: closed
locked: false
assignee: null
assignees: []
milestone: null
comments: [ "Hi @MFreidank, you should already be able to load a dataset from local sources, indeed. (ping @lhoestq and @jplu)\r\n\r\nWe're also thinking about the ability to host private datasets on a hosted bucket with permission management, but that's further down the road.", "Hi @MFreidank, it is possible to load a datas...
created_at: 1592473647000
updated_at: 1592658912000
closed_at: 1592658912000
author_association: NONE
active_lock_reason: null
draft: null
pull_request: null
body: Hi all, Thanks for this fantastic library, it makes it very easy to do prototyping for NLP projects interchangeably between TF/Pytorch. Unfortunately, there is data that cannot easily be shared publicly as it may contain sensitive information. Is there support/a plan to support such data with NLP, e.g. by readin...
reactions: { "url": "https://api.github.com/repos/huggingface/datasets/issues/281/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
timeline_url: https://api.github.com/repos/huggingface/datasets/issues/281/timeline
performed_via_github_app: null
state_reason: completed
is_pull_request: false

url: https://api.github.com/repos/huggingface/datasets/issues/280
repository_url: https://api.github.com/repos/huggingface/datasets
labels_url: https://api.github.com/repos/huggingface/datasets/issues/280/labels{/name}
comments_url: https://api.github.com/repos/huggingface/datasets/issues/280/comments
events_url: https://api.github.com/repos/huggingface/datasets/issues/280/events
html_url: https://github.com/huggingface/datasets/issues/280
id: 640677615
node_id: MDU6SXNzdWU2NDA2Nzc2MTU=
number: 280
title: Error with SquadV2 Metrics
user: { "login": "avinregmi", "id": 32203792, "node_id": "MDQ6VXNlcjMyMjAzNzky", "avatar_url": "https://avatars.githubusercontent.com/u/32203792?v=4", "gravatar_id": "", "url": "https://api.github.com/users/avinregmi", "html_url": "https://github.com/avinregmi", "followers_url": "https://api.github.com/users/...
labels: []
state: closed
locked: false
assignee: null
assignees: []
milestone: null
comments: []
created_at: 1592421054000
updated_at: 1592555621000
closed_at: 1592555621000
author_association: NONE
active_lock_reason: null
draft: null
pull_request: null
body: I can't seem to import squad v2 metrics. **squad_metric = nlp.load_metric('squad_v2')** **This throws me an error.:** ``` ImportError Traceback (most recent call last) <ipython-input-8-170b6a170555> in <module> ----> 1 squad_metric = nlp.load_metric('squad_v2') ~/env/lib6...
reactions: { "url": "https://api.github.com/repos/huggingface/datasets/issues/280/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
timeline_url: https://api.github.com/repos/huggingface/datasets/issues/280/timeline
performed_via_github_app: null
state_reason: completed
is_pull_request: false

url: https://api.github.com/repos/huggingface/datasets/issues/279
repository_url: https://api.github.com/repos/huggingface/datasets
labels_url: https://api.github.com/repos/huggingface/datasets/issues/279/labels{/name}
comments_url: https://api.github.com/repos/huggingface/datasets/issues/279/comments
events_url: https://api.github.com/repos/huggingface/datasets/issues/279/events
html_url: https://github.com/huggingface/datasets/issues/279
id: 640611692
node_id: MDU6SXNzdWU2NDA2MTE2OTI=
number: 279
title: Dataset Preprocessing Cache with .map() function not working as expected
user: { "login": "sarahwie", "id": 8027676, "node_id": "MDQ6VXNlcjgwMjc2NzY=", "avatar_url": "https://avatars.githubusercontent.com/u/8027676?v=4", "gravatar_id": "", "url": "https://api.github.com/users/sarahwie", "html_url": "https://github.com/sarahwie", "followers_url": "https://api.github.com/users/sarah...
labels: []
state: closed
locked: false
assignee: null
assignees: []
milestone: null
comments: [ "When you're processing a dataset with `.map`, it checks whether it has already done this computation using a hash based on the function and the input (using some fancy serialization with `dill`). If you found that it doesn't work as expected in some cases, let us know !\r\n\r\nGiven that, you can still force to re...
created_at: 1592414241000
updated_at: 1625607808000
closed_at: 1618789429000
author_association: NONE
active_lock_reason: null
draft: null
pull_request: null
body: I've been having issues with reproducibility when loading and processing datasets with the `.map` function. I was only able to resolve them by clearing all of the cache files on my system. Is there a way to disable using the cache when processing a dataset? As I make minor processing changes on the same dataset, I ...
reactions: { "url": "https://api.github.com/repos/huggingface/datasets/issues/279/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
timeline_url: https://api.github.com/repos/huggingface/datasets/issues/279/timeline
performed_via_github_app: null
state_reason: completed
is_pull_request: false
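
The thread on #279 describes how `.map` caches results under a hash of the processing function and its inputs, and notes that reprocessing can be forced. A minimal sketch of both behaviours (parameter name per the current `datasets` API; the original report used the `nlp` package):

```python
from datasets import load_dataset

ds = load_dataset("snli", split="train[:1000]")

def add_len(example):
    example["premise_len"] = len(example["premise"])
    return example

# First call computes the result and writes a cache file keyed by a hash
# of the function; a second identical call reuses that cache.
ds1 = ds.map(add_len)

# Bypass the cache and recompute, e.g. after editing the function in place.
ds2 = ds.map(add_len, load_from_cache_file=False)
```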

url: https://api.github.com/repos/huggingface/datasets/issues/278
repository_url: https://api.github.com/repos/huggingface/datasets
labels_url: https://api.github.com/repos/huggingface/datasets/issues/278/labels{/name}
comments_url: https://api.github.com/repos/huggingface/datasets/issues/278/comments
events_url: https://api.github.com/repos/huggingface/datasets/issues/278/events
html_url: https://github.com/huggingface/datasets/issues/278
id: 640518917
node_id: MDU6SXNzdWU2NDA1MTg5MTc=
number: 278
title: MemoryError when loading German Wikipedia
user: { "login": "gregburman", "id": 4698028, "node_id": "MDQ6VXNlcjQ2OTgwMjg=", "avatar_url": "https://avatars.githubusercontent.com/u/4698028?v=4", "gravatar_id": "", "url": "https://api.github.com/users/gregburman", "html_url": "https://github.com/gregburman", "followers_url": "https://api.github.com/users...
labels: []
state: closed
locked: false
assignee: null
assignees: []
milestone: null
comments: [ "Hi !\r\n\r\nAs you noticed, \"big\" datasets like Wikipedia require apache beam to be processed.\r\nHowever users usually don't have an apache beam runtime available (spark, dataflow, etc.) so our goal for this library is to also make available processed versions of these datasets, so that users can just download ...
created_at: 1592406381000
updated_at: 1592571182000
closed_at: 1592571182000
author_association: NONE
active_lock_reason: null
draft: null
pull_request: null
body: Hi, first off let me say thank you for all the awesome work you're doing at Hugging Face across all your projects (NLP, Transformers, Tokenizers) - they're all amazing contributions to us working with NLP models :) I'm trying to download the German Wikipedia dataset as follows: ``` wiki = nlp.load_dataset("wikip...
reactions: { "url": "https://api.github.com/repos/huggingface/datasets/issues/278/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
timeline_url: https://api.github.com/repos/huggingface/datasets/issues/278/timeline
performed_via_github_app: null
state_reason: completed
is_pull_request: false

url: https://api.github.com/repos/huggingface/datasets/issues/277
repository_url: https://api.github.com/repos/huggingface/datasets
labels_url: https://api.github.com/repos/huggingface/datasets/issues/277/labels{/name}
comments_url: https://api.github.com/repos/huggingface/datasets/issues/277/comments
events_url: https://api.github.com/repos/huggingface/datasets/issues/277/events
html_url: https://github.com/huggingface/datasets/issues/277
id: 640163053
node_id: MDU6SXNzdWU2NDAxNjMwNTM=
number: 277
title: Empty samples in glue/qqp
user: { "login": "richarddwang", "id": 17963619, "node_id": "MDQ6VXNlcjE3OTYzNjE5", "avatar_url": "https://avatars.githubusercontent.com/u/17963619?v=4", "gravatar_id": "", "url": "https://api.github.com/users/richarddwang", "html_url": "https://github.com/richarddwang", "followers_url": "https://api.github.c...
labels: []
state: closed
locked: false
assignee: null
assignees: []
milestone: null
comments: [ "We are only wrapping the original dataset.\r\n\r\nMaybe try to ask on the GLUE mailing list or reach out to the original authors?", "Tanks for the suggestion, I'll try to ask GLUE benchmark.\r\nI'll first close the issue, post the following up here afterwards, and reopen the issue if needed. " ]
created_at: 1592373292000
updated_at: 1592698905000
closed_at: 1592698905000
author_association: CONTRIBUTOR
active_lock_reason: null
draft: null
pull_request: null
body: ``` qqp = nlp.load_dataset('glue', 'qqp') print(qqp['train'][310121]) print(qqp['train'][362225]) ``` ``` {'question1': 'How can I create an Android app?', 'question2': '', 'label': 0, 'idx': 310137} {'question1': 'How can I develop android app?', 'question2': '', 'label': 0, 'idx': 362246} ``` Notice that que...
reactions: { "url": "https://api.github.com/repos/huggingface/datasets/issues/277/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
timeline_url: https://api.github.com/repos/huggingface/datasets/issues/277/timeline
performed_via_github_app: null
state_reason: completed
is_pull_request: false

url: https://api.github.com/repos/huggingface/datasets/issues/276
repository_url: https://api.github.com/repos/huggingface/datasets
labels_url: https://api.github.com/repos/huggingface/datasets/issues/276/labels{/name}
comments_url: https://api.github.com/repos/huggingface/datasets/issues/276/comments
events_url: https://api.github.com/repos/huggingface/datasets/issues/276/events
html_url: https://github.com/huggingface/datasets/pull/276
id: 639490858
node_id: MDExOlB1bGxSZXF1ZXN0NDM1MDY5Nzg5
number: 276
title: Fix metric compute (original_instructions missing)
user: { "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoest...
labels: []
state: closed
locked: false
assignee: null
assignees: []
milestone: null
comments: [ "Awesome! This is working now:\r\n\r\n```python\r\nimport nlp \r\nseqeval = nlp.load_metric(\"seqeval\") \r\ny_true = [['O', 'O', 'O', 'B-MISC', 'I-MISC', 'I-MISC', 'O'], ['B-PER', 'I-PER', 'O']] \r\ny_pred = [['O', 'O', 'B-MISC', 'I-MISC', 'I-MISC', 'I-MISC', 'O'], ['B-PER', 'I-PER', 'O']] ...
created_at: 1592297521000
updated_at: 1592466105000
closed_at: 1592466104000
author_association: MEMBER
active_lock_reason: null
draft: false
pull_request: { "url": "https://api.github.com/repos/huggingface/datasets/pulls/276", "html_url": "https://github.com/huggingface/datasets/pull/276", "diff_url": "https://github.com/huggingface/datasets/pull/276.diff", "patch_url": "https://github.com/huggingface/datasets/pull/276.patch", "merged_at": 1592466103000 }
body: When loading arrow data we added in cc8d250 a way to specify the instructions that were used to store them with the loaded dataset. However metrics load data the same way but don't need instructions (we use one single file). In this PR I just make `original_instructions` optional when reading files to load a `Datas...
reactions: { "url": "https://api.github.com/repos/huggingface/datasets/issues/276/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
timeline_url: https://api.github.com/repos/huggingface/datasets/issues/276/timeline
performed_via_github_app: null
state_reason: null
is_pull_request: true

url: https://api.github.com/repos/huggingface/datasets/issues/275
repository_url: https://api.github.com/repos/huggingface/datasets
labels_url: https://api.github.com/repos/huggingface/datasets/issues/275/labels{/name}
comments_url: https://api.github.com/repos/huggingface/datasets/issues/275/comments
events_url: https://api.github.com/repos/huggingface/datasets/issues/275/events
html_url: https://github.com/huggingface/datasets/issues/275
id: 639439052
node_id: MDU6SXNzdWU2Mzk0MzkwNTI=
number: 275
title: NonMatchingChecksumError when loading pubmed dataset
user: { "login": "DavideStenner", "id": 48441753, "node_id": "MDQ6VXNlcjQ4NDQxNzUz", "avatar_url": "https://avatars.githubusercontent.com/u/48441753?v=4", "gravatar_id": "", "url": "https://api.github.com/users/DavideStenner", "html_url": "https://github.com/DavideStenner", "followers_url": "https://api.githu...
labels: [ { "id": 2067388877, "node_id": "MDU6TGFiZWwyMDY3Mzg4ODc3", "url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20bug", "name": "dataset bug", "color": "2edb81", "default": false, "description": "A bug in a dataset script provided in the library" } ]
state: closed
locked: false
assignee: null
assignees: []
milestone: null
comments: [ "For some reason the files are not available for unauthenticated users right now (like the download service of this package). Instead of downloading the right files, it downloads the html of the error.\r\nAccording to the error it should be back again in 24h.\r\n\r\n![image](https://user-images.githubusercontent.co...
created_at: 1592292711000
updated_at: 1592552227000
closed_at: 1592552227000
author_association: NONE
active_lock_reason: null
draft: null
pull_request: null
body: I get this error when i run `nlp.load_dataset('scientific_papers', 'pubmed', split = 'train[:50%]')`. The error is: ``` --------------------------------------------------------------------------- NonMatchingChecksumError Traceback (most recent call last) <ipython-input-2-7742dea167d0> in <module...
reactions: { "url": "https://api.github.com/repos/huggingface/datasets/issues/275/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
timeline_url: https://api.github.com/repos/huggingface/datasets/issues/275/timeline
performed_via_github_app: null
state_reason: completed
is_pull_request: false

url: https://api.github.com/repos/huggingface/datasets/issues/274
repository_url: https://api.github.com/repos/huggingface/datasets
labels_url: https://api.github.com/repos/huggingface/datasets/issues/274/labels{/name}
comments_url: https://api.github.com/repos/huggingface/datasets/issues/274/comments
events_url: https://api.github.com/repos/huggingface/datasets/issues/274/events
html_url: https://github.com/huggingface/datasets/issues/274
id: 639156625
node_id: MDU6SXNzdWU2MzkxNTY2MjU=
number: 274
title: PG-19
user: { "login": "lucidrains", "id": 108653, "node_id": "MDQ6VXNlcjEwODY1Mw==", "avatar_url": "https://avatars.githubusercontent.com/u/108653?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lucidrains", "html_url": "https://github.com/lucidrains", "followers_url": "https://api.github.com/users/l...
labels: [ { "id": 2067376369, "node_id": "MDU6TGFiZWwyMDY3Mzc2MzY5", "url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20request", "name": "dataset request", "color": "e99695", "default": false, "description": "Requesting to add a new dataset" } ]
state: closed
locked: false
assignee: null
assignees: []
milestone: null
comments: [ "Sounds good! Do you want to give it a try?", "Ok, I'll see if I can figure it out tomorrow!", "Got around to this today, and so far so good, I'm able to download and load pg19 locally. However, I think there may be an issue with the dummy data, and testing in general.\r\n\r\nThe problem lies in the fact that e...
created_at: 1592254946000
updated_at: 1594049702000
closed_at: 1594049702000
author_association: CONTRIBUTOR
active_lock_reason: null
draft: null
pull_request: null
body: Hi, and thanks for all your open-sourced work, as always! I was wondering if you would be open to adding PG-19 to your collection of datasets. https://github.com/deepmind/pg19 It is often used for benchmarking long-range language modeling.
reactions: { "url": "https://api.github.com/repos/huggingface/datasets/issues/274/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
timeline_url: https://api.github.com/repos/huggingface/datasets/issues/274/timeline
performed_via_github_app: null
state_reason: completed
is_pull_request: false

url: https://api.github.com/repos/huggingface/datasets/issues/273
repository_url: https://api.github.com/repos/huggingface/datasets
labels_url: https://api.github.com/repos/huggingface/datasets/issues/273/labels{/name}
comments_url: https://api.github.com/repos/huggingface/datasets/issues/273/comments
events_url: https://api.github.com/repos/huggingface/datasets/issues/273/events
html_url: https://github.com/huggingface/datasets/pull/273
id: 638968054
node_id: MDExOlB1bGxSZXF1ZXN0NDM0NjM0MzU4
number: 273
title: update cos_e to add cos_e v1.0
user: { "login": "mariamabarham", "id": 38249783, "node_id": "MDQ6VXNlcjM4MjQ5Nzgz", "avatar_url": "https://avatars.githubusercontent.com/u/38249783?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mariamabarham", "html_url": "https://github.com/mariamabarham", "followers_url": "https://api.githu...
labels: []
state: closed
locked: false
assignee: null
assignees: []
milestone: null
comments: []
created_at: 1592237002000
updated_at: 1592295954000
closed_at: 1592295952000
author_association: CONTRIBUTOR
active_lock_reason: null
draft: false
pull_request: { "url": "https://api.github.com/repos/huggingface/datasets/pulls/273", "html_url": "https://github.com/huggingface/datasets/pull/273", "diff_url": "https://github.com/huggingface/datasets/pull/273.diff", "patch_url": "https://github.com/huggingface/datasets/pull/273.patch", "merged_at": 1592295952000 }
body: This PR updates the cos_e dataset to add v1.0 as requested here #163 @nazneenrajani
reactions: { "url": "https://api.github.com/repos/huggingface/datasets/issues/273/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
timeline_url: https://api.github.com/repos/huggingface/datasets/issues/273/timeline
performed_via_github_app: null
state_reason: null
is_pull_request: true

url: https://api.github.com/repos/huggingface/datasets/issues/272
repository_url: https://api.github.com/repos/huggingface/datasets
labels_url: https://api.github.com/repos/huggingface/datasets/issues/272/labels{/name}
comments_url: https://api.github.com/repos/huggingface/datasets/issues/272/comments
events_url: https://api.github.com/repos/huggingface/datasets/issues/272/events
html_url: https://github.com/huggingface/datasets/pull/272
id: 638307313
node_id: MDExOlB1bGxSZXF1ZXN0NDM0MTExOTQ3
number: 272
title: asd
user: { "login": "sn696", "id": 66900970, "node_id": "MDQ6VXNlcjY2OTAwOTcw", "avatar_url": "https://avatars.githubusercontent.com/u/66900970?v=4", "gravatar_id": "", "url": "https://api.github.com/users/sn696", "html_url": "https://github.com/sn696", "followers_url": "https://api.github.com/users/sn696/follow...
labels: []
state: closed
locked: false
assignee: null
assignees: []
milestone: null
comments: []
created_at: 1592122838000
updated_at: 1592126201000
closed_at: 1592126201000
author_association: NONE
active_lock_reason: null
draft: false
pull_request: { "url": "https://api.github.com/repos/huggingface/datasets/pulls/272", "html_url": "https://github.com/huggingface/datasets/pull/272", "diff_url": "https://github.com/huggingface/datasets/pull/272.diff", "patch_url": "https://github.com/huggingface/datasets/pull/272.patch", "merged_at": null }
reactions: { "url": "https://api.github.com/repos/huggingface/datasets/issues/272/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
timeline_url: https://api.github.com/repos/huggingface/datasets/issues/272/timeline
performed_via_github_app: null
state_reason: null
is_pull_request: true

url: https://api.github.com/repos/huggingface/datasets/issues/271
repository_url: https://api.github.com/repos/huggingface/datasets
labels_url: https://api.github.com/repos/huggingface/datasets/issues/271/labels{/name}
comments_url: https://api.github.com/repos/huggingface/datasets/issues/271/comments
events_url: https://api.github.com/repos/huggingface/datasets/issues/271/events
html_url: https://github.com/huggingface/datasets/pull/271
id: 638135754
node_id: MDExOlB1bGxSZXF1ZXN0NDMzOTg3NDkw
number: 271
title: Fix allociné dataset configuration
user: { "login": "TheophileBlard", "id": 37028092, "node_id": "MDQ6VXNlcjM3MDI4MDky", "avatar_url": "https://avatars.githubusercontent.com/u/37028092?v=4", "gravatar_id": "", "url": "https://api.github.com/users/TheophileBlard", "html_url": "https://github.com/TheophileBlard", "followers_url": "https://api.gi...
labels: []
state: closed
locked: false
assignee: null
assignees: []
milestone: null
comments: [ "Actually when there is only one configuration, then you don't need to specify the configuration in `load_dataset`. You can run:\r\n```python\r\ndataset = load_dataset('allocine')\r\n```\r\nand it works.\r\n\r\nMaybe we should take that into account in the nlp viewer @srush ?", "@lhoestq Just to understand the ex...
created_at: 1592043130000
updated_at: 1592466081000
closed_at: 1592466080000
author_association: CONTRIBUTOR
active_lock_reason: null
draft: false
pull_request: { "url": "https://api.github.com/repos/huggingface/datasets/pulls/271", "html_url": "https://github.com/huggingface/datasets/pull/271", "diff_url": "https://github.com/huggingface/datasets/pull/271.diff", "patch_url": "https://github.com/huggingface/datasets/pull/271.patch", "merged_at": null }
body: This is a patch for #244. According to the [live nlp viewer](url), the Allociné dataset must be loaded with : ```python dataset = load_dataset('allocine', 'allocine') ``` This is redundant, as there is only one "dataset configuration", and should only be: ```python dataset = load_dataset('allocine') ``` This ...
reactions: { "url": "https://api.github.com/repos/huggingface/datasets/issues/271/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
timeline_url: https://api.github.com/repos/huggingface/datasets/issues/271/timeline
performed_via_github_app: null
state_reason: null
is_pull_request: true

url: https://api.github.com/repos/huggingface/datasets/issues/270
repository_url: https://api.github.com/repos/huggingface/datasets
labels_url: https://api.github.com/repos/huggingface/datasets/issues/270/labels{/name}
comments_url: https://api.github.com/repos/huggingface/datasets/issues/270/comments
events_url: https://api.github.com/repos/huggingface/datasets/issues/270/events
html_url: https://github.com/huggingface/datasets/issues/270
id: 638121617
node_id: MDU6SXNzdWU2MzgxMjE2MTc=
number: 270
title: c4 dataset is not viewable in nlpviewer demo
user: { "login": "rajarsheem", "id": 6441313, "node_id": "MDQ6VXNlcjY0NDEzMTM=", "avatar_url": "https://avatars.githubusercontent.com/u/6441313?v=4", "gravatar_id": "", "url": "https://api.github.com/users/rajarsheem", "html_url": "https://github.com/rajarsheem", "followers_url": "https://api.github.com/users...
labels: [ { "id": 2107841032, "node_id": "MDU6TGFiZWwyMTA3ODQxMDMy", "url": "https://api.github.com/repos/huggingface/datasets/labels/nlp-viewer", "name": "nlp-viewer", "color": "94203D", "default": false, "description": "" } ]
state: closed
locked: false
assignee: null
assignees: []
milestone: null
comments: [ "C4 is too large to be shown in the viewer" ]
created_at: 1592036776000
updated_at: 1603812929000
closed_at: 1603812913000
author_association: NONE
active_lock_reason: null
draft: null
pull_request: null
body: I get the following error when I try to view the c4 dataset in [nlpviewer](https://huggingface.co/nlp/viewer/) ```python ModuleNotFoundError: No module named 'langdetect' Traceback: File "/home/sasha/.local/lib/python3.7/site-packages/streamlit/ScriptRunner.py", line 322, in _run_script exec(code, module.__d...
reactions: { "url": "https://api.github.com/repos/huggingface/datasets/issues/270/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
timeline_url: https://api.github.com/repos/huggingface/datasets/issues/270/timeline
performed_via_github_app: null
state_reason: completed
is_pull_request: false

url: https://api.github.com/repos/huggingface/datasets/issues/269
repository_url: https://api.github.com/repos/huggingface/datasets
labels_url: https://api.github.com/repos/huggingface/datasets/issues/269/labels{/name}
comments_url: https://api.github.com/repos/huggingface/datasets/issues/269/comments
events_url: https://api.github.com/repos/huggingface/datasets/issues/269/events
html_url: https://github.com/huggingface/datasets/issues/269
id: 638106774
node_id: MDU6SXNzdWU2MzgxMDY3NzQ=
number: 269
title: Error in metric.compute: missing `original_instructions` argument
user: { "login": "zphang", "id": 1668462, "node_id": "MDQ6VXNlcjE2Njg0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/1668462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/zphang", "html_url": "https://github.com/zphang", "followers_url": "https://api.github.com/users/zphang/foll...
labels: [ { "id": 2067393914, "node_id": "MDU6TGFiZWwyMDY3MzkzOTE0", "url": "https://api.github.com/repos/huggingface/datasets/labels/metric%20bug", "name": "metric bug", "color": "25b21e", "default": false, "description": "A bug in a metric script" } ]
state: closed
locked: false
assignee: { "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoest...
assignees: [ { "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.git...
milestone: null
comments: []
created_at: 1592029614000
updated_at: 1592466104000
closed_at: 1592466104000
author_association: NONE
active_lock_reason: null
draft: null
pull_request: null
body: I'm running into an error using metrics for computation in the latest master as well as version 0.2.1. Here is a minimal example: ```python import nlp rte_metric = nlp.load_metric('glue', name="rte") rte_metric.compute( [0, 0, 1, 1], [0, 1, 0, 1], ) ``` ``` 181 # Read the predictio...
reactions: { "url": "https://api.github.com/repos/huggingface/datasets/issues/269/reactions", "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
timeline_url: https://api.github.com/repos/huggingface/datasets/issues/269/timeline
performed_via_github_app: null
state_reason: completed
is_pull_request: false

url: https://api.github.com/repos/huggingface/datasets/issues/268
repository_url: https://api.github.com/repos/huggingface/datasets
labels_url: https://api.github.com/repos/huggingface/datasets/issues/268/labels{/name}
comments_url: https://api.github.com/repos/huggingface/datasets/issues/268/comments
events_url: https://api.github.com/repos/huggingface/datasets/issues/268/events
html_url: https://github.com/huggingface/datasets/pull/268
id: 637848056
node_id: MDExOlB1bGxSZXF1ZXN0NDMzNzU5NzQ1
number: 268
title: add Rotten Tomatoes Movie Review sentences sentiment dataset
user: { "login": "jxmorris12", "id": 13238952, "node_id": "MDQ6VXNlcjEzMjM4OTUy", "avatar_url": "https://avatars.githubusercontent.com/u/13238952?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jxmorris12", "html_url": "https://github.com/jxmorris12", "followers_url": "https://api.github.com/use...
labels: []
state: closed
locked: false
assignee: null
assignees: []
milestone: null
comments: [ "@jplu @thomwolf @patrickvonplaten @lhoestq -- How do I request reviewers? Thanks." ]
created_at: 1591977239000
updated_at: 1592466384000
closed_at: 1592466383000
author_association: CONTRIBUTOR
active_lock_reason: null
draft: false
pull_request: { "url": "https://api.github.com/repos/huggingface/datasets/pulls/268", "html_url": "https://github.com/huggingface/datasets/pull/268", "diff_url": "https://github.com/huggingface/datasets/pull/268.diff", "patch_url": "https://github.com/huggingface/datasets/pull/268.patch", "merged_at": 1592466383000 }
body: Sentence-level movie reviews v1.0 from here: http://www.cs.cornell.edu/people/pabo/movie-review-data/
reactions: { "url": "https://api.github.com/repos/huggingface/datasets/issues/268/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
timeline_url: https://api.github.com/repos/huggingface/datasets/issues/268/timeline
performed_via_github_app: null
state_reason: null
is_pull_request: true
https://api.github.com/repos/huggingface/datasets/issues/267
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/267/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/267/comments
https://api.github.com/repos/huggingface/datasets/issues/267/events
https://github.com/huggingface/datasets/issues/267
637,415,545
MDU6SXNzdWU2Mzc0MTU1NDU=
267
How can I load/find WMT en-romanian?
{ "login": "sshleifer", "id": 6045025, "node_id": "MDQ6VXNlcjYwNDUwMjU=", "avatar_url": "https://avatars.githubusercontent.com/u/6045025?v=4", "gravatar_id": "", "url": "https://api.github.com/users/sshleifer", "html_url": "https://github.com/sshleifer", "followers_url": "https://api.github.com/users/ss...
[]
closed
false
{ "login": "patrickvonplaten", "id": 23423619, "node_id": "MDQ6VXNlcjIzNDIzNjE5", "avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4", "gravatar_id": "", "url": "https://api.github.com/users/patrickvonplaten", "html_url": "https://github.com/patrickvonplaten", "followers_url": "https://...
[ { "login": "patrickvonplaten", "id": 23423619, "node_id": "MDQ6VXNlcjIzNDIzNjE5", "avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4", "gravatar_id": "", "url": "https://api.github.com/users/patrickvonplaten", "html_url": "https://github.com/patrickvonplaten", "follo...
null
[ "I will take a look :-) " ]
1,591,924,177,000
1,592,555,059,000
1,592,555,059,000
CONTRIBUTOR
null
null
null
I believe it is from `wmt16`. When I run ```python wmt = nlp.load_dataset('wmt16') ``` I get: ```python AssertionError: The dataset wmt16 with config cs-en requires manual data. Please follow the manual download instructions: Some of the wmt configs here, require a manual download. Please look into wm...
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/267/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/267/timeline
null
completed
false
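Once the manual-download requirement was lifted, loading the English-Romanian pair should reduce to picking the right config. A minimal sketch, where the config name `"ro-en"` is an assumption based on the wmt16 naming scheme:

```python
import nlp

# "ro-en" is assumed from the wmt16 config naming scheme; the error
# message from load_dataset lists the exact available configs.
wmt = nlp.load_dataset('wmt16', 'ro-en')
print(wmt['train'][0])  # {'translation': {'en': ..., 'ro': ...}}
```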
https://api.github.com/repos/huggingface/datasets/issues/266
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/266/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/266/comments
https://api.github.com/repos/huggingface/datasets/issues/266/events
https://github.com/huggingface/datasets/pull/266
637,156,392
MDExOlB1bGxSZXF1ZXN0NDMzMTk1NDgw
266
Add sort, shuffle, test_train_split and select methods
{ "login": "thomwolf", "id": 7353373, "node_id": "MDQ6VXNlcjczNTMzNzM=", "avatar_url": "https://avatars.githubusercontent.com/u/7353373?v=4", "gravatar_id": "", "url": "https://api.github.com/users/thomwolf", "html_url": "https://github.com/thomwolf", "followers_url": "https://api.github.com/users/thomw...
[]
closed
false
null
[]
null
[ "Nice !\r\n\r\nAlso it looks like we can have a train_test_split method for free:\r\n```python\r\ntrain_indices, test_indices = train_test_split(range(len(dataset)))\r\ntrain = dataset.sort(indices=train_indices)\r\ntest = dataset.sort(indices=test_indices)\r\n```\r\n\r\nand a shuffling method for free:\r\n```pytho...
1,591,892,540,000
1,592,497,405,000
1,592,497,404,000
MEMBER
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/266", "html_url": "https://github.com/huggingface/datasets/pull/266", "diff_url": "https://github.com/huggingface/datasets/pull/266.diff", "patch_url": "https://github.com/huggingface/datasets/pull/266.patch", "merged_at": 1592497403000 }
Add a bunch of methods to reorder/split/select rows in a dataset: - `dataset.select(indices)`: Create a new dataset with rows selected following the list/array of indices (which can have a different size than the dataset and contain duplicated indices, the only constraint is that all the integers in the list must be sm...
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/266/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/266/timeline
null
null
true
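The methods introduced in this PR compose naturally; a minimal usage sketch, written against the API as it shipped in later releases (`imdb` is used only as a stand-in dataset):

```python
import nlp

dataset = nlp.load_dataset('imdb', split='train')

# Keep 100 rows; indices may repeat and need not be sorted
subset = dataset.select(range(100))

# Reproducible shuffle
shuffled = dataset.shuffle(seed=42)

# Sort rows by a column value
by_label = dataset.sort('label')

# 90/10 split; returns a dict-like object with 'train' and 'test'
splits = dataset.train_test_split(test_size=0.1)
train_ds, test_ds = splits['train'], splits['test']
```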
https://api.github.com/repos/huggingface/datasets/issues/265
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/265/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/265/comments
https://api.github.com/repos/huggingface/datasets/issues/265/events
https://github.com/huggingface/datasets/pull/265
637,139,220
MDExOlB1bGxSZXF1ZXN0NDMzMTgxNDMz
265
Add pyarrow warning colab
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoest...
[]
closed
false
null
[]
null
[]
1,591,891,071,000
1,596,392,076,000
1,591,949,656,000
MEMBER
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/265", "html_url": "https://github.com/huggingface/datasets/pull/265", "diff_url": "https://github.com/huggingface/datasets/pull/265.diff", "patch_url": "https://github.com/huggingface/datasets/pull/265.patch", "merged_at": 1591949656000 }
When a user installs `nlp` on google colab, google colab doesn't reload the updated pyarrow, and the runtime needs to be restarted to use the updated version of pyarrow. This is an issue because `nlp` requires the updated version to work correctly. In this PR I added an error that is shown to the user in google colab if...
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/265/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/265/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/264
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/264/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/264/comments
https://api.github.com/repos/huggingface/datasets/issues/264/events
https://github.com/huggingface/datasets/pull/264
637,106,170
MDExOlB1bGxSZXF1ZXN0NDMzMTU0ODQ4
264
Fix small issues creating dataset
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoest...
[]
closed
false
null
[]
null
[]
1,591,888,816,000
1,591,949,757,000
1,591,949,756,000
MEMBER
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/264", "html_url": "https://github.com/huggingface/datasets/pull/264", "diff_url": "https://github.com/huggingface/datasets/pull/264.diff", "patch_url": "https://github.com/huggingface/datasets/pull/264.patch", "merged_at": 1591949756000 }
Fix many small issues mentioned in #249: - don't force to install apache beam for commands - fix None cache dir when using `dl_manager.download_custom` - added new extras in `setup.py` named `dev` that contains tests and quality dependencies - mock dataset sizes when running tests with dummy data - add a note abou...
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/264/reactions", "total_count": 2, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 1, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/264/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/263
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/263/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/263/comments
https://api.github.com/repos/huggingface/datasets/issues/263/events
https://github.com/huggingface/datasets/issues/263
637,028,015
MDU6SXNzdWU2MzcwMjgwMTU=
263
[Feature request] Support for external modality for language datasets
{ "login": "aleSuglia", "id": 1479733, "node_id": "MDQ6VXNlcjE0Nzk3MzM=", "avatar_url": "https://avatars.githubusercontent.com/u/1479733?v=4", "gravatar_id": "", "url": "https://api.github.com/users/aleSuglia", "html_url": "https://github.com/aleSuglia", "followers_url": "https://api.github.com/users/al...
[ { "id": 1935892871, "node_id": "MDU6TGFiZWwxOTM1ODkyODcx", "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement", "name": "enhancement", "color": "a2eeef", "default": true, "description": "New feature or request" }, { "id": 2067400324, "node_id": "MDU6...
closed
false
null
[]
null
[ "Thanks a lot, @aleSuglia for the very detailed and introductive feature request.\r\nIt seems like we could build something pretty useful here indeed.\r\n\r\nOne of the questions here is that Arrow doesn't have built-in support for generic \"tensors\" in records but there might be ways to do that in a clean way. We...
1,591,882,938,000
1,644,499,595,000
1,644,499,595,000
CONTRIBUTOR
null
null
null
# Background In recent years many researchers have advocated that learning meanings from text-only datasets is just like asking a human to "learn to speak by listening to the radio" [[E. Bender and A. Koller, 2020](https://openreview.net/forum?id=GKTvAcb12b), [Y. Bisk et al., 2020](https://arxiv.org/abs/2004.10...
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/263/reactions", "total_count": 23, "+1": 18, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 1, "rocket": 0, "eyes": 4 }
https://api.github.com/repos/huggingface/datasets/issues/263/timeline
null
completed
false
https://api.github.com/repos/huggingface/datasets/issues/262
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/262/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/262/comments
https://api.github.com/repos/huggingface/datasets/issues/262/events
https://github.com/huggingface/datasets/pull/262
636,702,849
MDExOlB1bGxSZXF1ZXN0NDMyODI3Mzcz
262
Add new dataset ANLI Round 1
{ "login": "easonnie", "id": 11016329, "node_id": "MDQ6VXNlcjExMDE2MzI5", "avatar_url": "https://avatars.githubusercontent.com/u/11016329?v=4", "gravatar_id": "", "url": "https://api.github.com/users/easonnie", "html_url": "https://github.com/easonnie", "followers_url": "https://api.github.com/users/eas...
[]
closed
false
null
[]
null
[ "Hello ! Thanks for adding this one :)\r\n\r\nThis looks great, you just have to do the last steps to make the CI pass.\r\nI can see that two things are missing:\r\n1. the dummy data that is used to test that the script is working as expected\r\n2. the json file with all the infos about the dataset\r\n\r\nYou can s...
1,591,848,897,000
1,591,999,383,000
1,591,999,383,000
CONTRIBUTOR
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/262", "html_url": "https://github.com/huggingface/datasets/pull/262", "diff_url": "https://github.com/huggingface/datasets/pull/262.diff", "patch_url": "https://github.com/huggingface/datasets/pull/262.patch", "merged_at": null }
Adding new dataset [ANLI](https://github.com/facebookresearch/anli/). I'm not familiar with how to add a new dataset. Let me know if there is any issue. I only include round 1 data here. There will be round 2, round 3 and more in the future with potentially different formats. I think it will be better to separate them.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/262/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/262/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/261
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/261/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/261/comments
https://api.github.com/repos/huggingface/datasets/issues/261/events
https://github.com/huggingface/datasets/issues/261
636,372,380
MDU6SXNzdWU2MzYzNzIzODA=
261
Downloading dataset error with pyarrow.lib.RecordBatch
{ "login": "cuent", "id": 5248968, "node_id": "MDQ6VXNlcjUyNDg5Njg=", "avatar_url": "https://avatars.githubusercontent.com/u/5248968?v=4", "gravatar_id": "", "url": "https://api.github.com/users/cuent", "html_url": "https://github.com/cuent", "followers_url": "https://api.github.com/users/cuent/follower...
[]
closed
false
null
[]
null
[ "When you install `nlp` for the first time on a Colab runtime, it updates the `pyarrow` library that was already on colab. This update shows this message on colab:\r\n```\r\nWARNING: The following packages were previously imported in this runtime:\r\n [pyarrow]\r\nYou must restart the runtime in order to use newly...
1,591,805,059,000
1,591,886,112,000
1,591,886,112,000
NONE
null
null
null
I am trying to download `sentiment140` and I have the following error ``` /usr/local/lib/python3.6/dist-packages/nlp/load.py in load_dataset(path, name, version, data_dir, data_files, split, cache_dir, download_config, download_mode, ignore_verifications, save_infos, **config_kwargs) 518 download_mode=...
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/261/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/261/timeline
null
completed
false
https://api.github.com/repos/huggingface/datasets/issues/260
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/260/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/260/comments
https://api.github.com/repos/huggingface/datasets/issues/260/events
https://github.com/huggingface/datasets/pull/260
636,261,118
MDExOlB1bGxSZXF1ZXN0NDMyNDY3NDM5
260
Consistency fixes
{ "login": "julien-c", "id": 326577, "node_id": "MDQ6VXNlcjMyNjU3Nw==", "avatar_url": "https://avatars.githubusercontent.com/u/326577?v=4", "gravatar_id": "", "url": "https://api.github.com/users/julien-c", "html_url": "https://github.com/julien-c", "followers_url": "https://api.github.com/users/julien-...
[]
closed
false
null
[]
null
[]
1,591,796,682,000
1,591,871,677,000
1,591,871,676,000
MEMBER
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/260", "html_url": "https://github.com/huggingface/datasets/pull/260", "diff_url": "https://github.com/huggingface/datasets/pull/260.diff", "patch_url": "https://github.com/huggingface/datasets/pull/260.patch", "merged_at": 1591871676000 }
A few bugs I've found while hacking
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/260/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/260/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/259
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/259/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/259/comments
https://api.github.com/repos/huggingface/datasets/issues/259/events
https://github.com/huggingface/datasets/issues/259
636,239,529
MDU6SXNzdWU2MzYyMzk1Mjk=
259
documentation missing how to split a dataset
{ "login": "fotisj", "id": 2873355, "node_id": "MDQ6VXNlcjI4NzMzNTU=", "avatar_url": "https://avatars.githubusercontent.com/u/2873355?v=4", "gravatar_id": "", "url": "https://api.github.com/users/fotisj", "html_url": "https://github.com/fotisj", "followers_url": "https://api.github.com/users/fotisj/foll...
[]
closed
false
null
[]
null
[ "this seems to work for my specific problem:\r\n\r\n`self.train_ds, self.test_ds, self.val_ds = map(_prepare_ds, ('train', 'test[:25%]+test[50%:75%]', 'test[75%:]'))`", "Currently you can indeed split a dataset using `ds_test = nlp.load_dataset('imdb, split='test[:5000]')` (works also with percentages).\r\n\r\nHo...
1,591,795,093,000
1,592,518,824,000
1,592,518,824,000
NONE
null
null
null
I am trying to understand how to split a dataset (as an arrow_dataset). I know I can do something like this to access a split which is already in the original dataset: `ds_test = nlp.load_dataset('imdb', split='test')` But how can I split ds_test into a test and a validation set (without reading the data into m...
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/259/reactions", "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/259/timeline
null
completed
false
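Both answers in the thread rely on split strings, which support absolute slicing, percentages, and concatenation with `+`; a minimal sketch:

```python
import nlp

# Absolute and percentage slices of an existing split
test_head = nlp.load_dataset('imdb', split='test[:5000]')
test_half = nlp.load_dataset('imdb', split='test[:50%]')

# Slices can be concatenated, e.g. to carve out a validation set
val_ds = nlp.load_dataset('imdb', split='test[:25%]+test[50%:75%]')
```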
https://api.github.com/repos/huggingface/datasets/issues/258
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/258/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/258/comments
https://api.github.com/repos/huggingface/datasets/issues/258/events
https://github.com/huggingface/datasets/issues/258
635,859,525
MDU6SXNzdWU2MzU4NTk1MjU=
258
Why is the dataset after tokenization far larger than the original one?
{ "login": "richarddwang", "id": 17963619, "node_id": "MDQ6VXNlcjE3OTYzNjE5", "avatar_url": "https://avatars.githubusercontent.com/u/17963619?v=4", "gravatar_id": "", "url": "https://api.github.com/users/richarddwang", "html_url": "https://github.com/richarddwang", "followers_url": "https://api.github.c...
[]
closed
false
null
[]
null
[ "Hi ! This is because `.map` added the new column `input_ids` to the dataset, and so all the other columns were kept. Therefore the dataset size increased a lot.\r\n If you want to only keep the `input_ids` column, you can stash the other ones by specifying `remove_columns=[\"title\", \"text\"]` in the arguments of...
1,591,752,427,000
1,591,793,194,000
1,591,793,194,000
CONTRIBUTOR
null
null
null
I tokenize the wiki dataset with `map` and cache the results. ``` def tokenize_tfm(example): example['input_ids'] = hf_fast_tokenizer.convert_tokens_to_ids(hf_fast_tokenizer.tokenize(example['text'])) return example wiki = nlp.load_dataset('wikipedia', '20200501.en', cache_dir=cache_dir)['train'] wiki.map(token...
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/258/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/258/timeline
null
completed
false
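As the reply explains, `map` keeps the original columns by default, so the cache stores them alongside the new one; dropping them with `remove_columns` keeps the cached Arrow file close to the size of `input_ids` alone. A minimal sketch (the character-level "tokenizer" is a toy stand-in for a real one):

```python
import nlp

wiki = nlp.load_dataset('wikipedia', '20200501.en', split='train')

def tokenize_tfm(example):
    # Toy stand-in for a real tokenizer: map characters to ids
    example['input_ids'] = [ord(c) for c in example['text'][:512]]
    return example

# Without remove_columns, 'title' and 'text' are copied into the cache
wiki = wiki.map(tokenize_tfm, remove_columns=['title', 'text'])
```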
https://api.github.com/repos/huggingface/datasets/issues/257
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/257/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/257/comments
https://api.github.com/repos/huggingface/datasets/issues/257/events
https://github.com/huggingface/datasets/issues/257
635,620,979
MDU6SXNzdWU2MzU2MjA5Nzk=
257
Tokenizer pickling issue fix not landed in `nlp` yet?
{ "login": "sarahwie", "id": 8027676, "node_id": "MDQ6VXNlcjgwMjc2NzY=", "avatar_url": "https://avatars.githubusercontent.com/u/8027676?v=4", "gravatar_id": "", "url": "https://api.github.com/users/sarahwie", "html_url": "https://github.com/sarahwie", "followers_url": "https://api.github.com/users/sarah...
[]
closed
false
null
[]
null
[ "Yes, the new release of tokenizers solves this and should be out soon.\r\nIn the meantime, you can install it with `pip install tokenizers==0.8.0-dev2`", "If others run into this issue, a quick fix is to use python 3.6 instead of 3.7+. Serialization differences between the 3rd party `dataclasses` package for 3.6...
1,591,722,754,000
1,591,825,532,000
1,591,723,613,000
NONE
null
null
null
Unless I recreate an arrow_dataset from my loaded nlp dataset myself (which I think does not use the cache by default), I get the following error when applying the map function: ``` dataset = nlp.load_dataset('cos_e') tokenizer = GPT2TokenizerFast.from_pretrained('gpt2', cache_dir=cache_dir) for split in datase...
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/257/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/257/timeline
null
completed
false
https://api.github.com/repos/huggingface/datasets/issues/256
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/256/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/256/comments
https://api.github.com/repos/huggingface/datasets/issues/256/events
https://github.com/huggingface/datasets/issues/256
635,596,295
MDU6SXNzdWU2MzU1OTYyOTU=
256
[Feature request] Add a feature to dataset
{ "login": "sarahwie", "id": 8027676, "node_id": "MDQ6VXNlcjgwMjc2NzY=", "avatar_url": "https://avatars.githubusercontent.com/u/8027676?v=4", "gravatar_id": "", "url": "https://api.github.com/users/sarahwie", "html_url": "https://github.com/sarahwie", "followers_url": "https://api.github.com/users/sarah...
[]
closed
false
null
[]
null
[ "Do you have an example of what you would like to do? (you can just add a field in the output of the unction you give to map and this will add this field in the output table)", "Given another source of data loaded in, I want to pre-add it to the dataset so that it aligns with the indices of the arrow dataset prio...
1,591,720,692,000
1,591,721,502,000
1,591,721,502,000
NONE
null
null
null
Is there a straightforward way to add a field to the arrow_dataset, prior to performing map?
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/256/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/256/timeline
null
completed
false
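The `with_indices=True` option mentioned in the thread passes the row index to the mapped function, which lets an external list be aligned with the rows of the arrow dataset before any further `map`; a minimal sketch:

```python
import nlp

dataset = nlp.load_dataset('imdb', split='train')

# Hypothetical per-example values, aligned by row index
extra = list(range(len(dataset)))

dataset = dataset.map(
    lambda example, i: {'new_field': extra[i]},
    with_indices=True,
)
```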
https://api.github.com/repos/huggingface/datasets/issues/255
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/255/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/255/comments
https://api.github.com/repos/huggingface/datasets/issues/255/events
https://github.com/huggingface/datasets/pull/255
635,300,822
MDExOlB1bGxSZXF1ZXN0NDMxNjg3MDM0
255
Add dataset/piaf
{ "login": "RachelKer", "id": 36986299, "node_id": "MDQ6VXNlcjM2OTg2Mjk5", "avatar_url": "https://avatars.githubusercontent.com/u/36986299?v=4", "gravatar_id": "", "url": "https://api.github.com/users/RachelKer", "html_url": "https://github.com/RachelKer", "followers_url": "https://api.github.com/users/...
[]
closed
false
null
[]
null
[ "Very nice !" ]
1,591,697,761,000
1,591,950,687,000
1,591,950,687,000
CONTRIBUTOR
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/255", "html_url": "https://github.com/huggingface/datasets/pull/255", "diff_url": "https://github.com/huggingface/datasets/pull/255.diff", "patch_url": "https://github.com/huggingface/datasets/pull/255.patch", "merged_at": 1591950687000 }
Small SQuAD-like French QA dataset [PIAF](https://www.aclweb.org/anthology/2020.lrec-1.673.pdf)
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/255/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/255/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/254
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/254/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/254/comments
https://api.github.com/repos/huggingface/datasets/issues/254/events
https://github.com/huggingface/datasets/issues/254
635,057,568
MDU6SXNzdWU2MzUwNTc1Njg=
254
[Feature request] Be able to remove a specific sample of the dataset
{ "login": "astariul", "id": 43774355, "node_id": "MDQ6VXNlcjQzNzc0MzU1", "avatar_url": "https://avatars.githubusercontent.com/u/43774355?v=4", "gravatar_id": "", "url": "https://api.github.com/users/astariul", "html_url": "https://github.com/astariul", "followers_url": "https://api.github.com/users/ast...
[]
closed
false
null
[]
null
[ "Oh yes you can now do that with the `dataset.filter()` method that was added in #214 " ]
1,591,669,333,000
1,591,692,098,000
1,591,692,098,000
NONE
null
null
null
As mentioned in #117, it's currently not possible to remove a sample from the dataset. But it is an important use case: after applying some preprocessing, some samples might be empty, for example. We should be able to remove these samples from the dataset, or at least mark them as `removed` so when iterating the datase...
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/254/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/254/timeline
null
completed
false
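The `dataset.filter()` method from #214, pointed to in the reply, covers exactly this; a minimal sketch that drops rows left empty by preprocessing:

```python
import nlp

dataset = nlp.load_dataset('imdb', split='train')

# Keep only the samples whose text is non-empty
dataset = dataset.filter(lambda example: len(example['text'].strip()) > 0)
```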
https://api.github.com/repos/huggingface/datasets/issues/253
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/253/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/253/comments
https://api.github.com/repos/huggingface/datasets/issues/253/events
https://github.com/huggingface/datasets/pull/253
634,791,939
MDExOlB1bGxSZXF1ZXN0NDMxMjgwOTYz
253
add flue dataset
{ "login": "mariamabarham", "id": 38249783, "node_id": "MDQ6VXNlcjM4MjQ5Nzgz", "avatar_url": "https://avatars.githubusercontent.com/u/38249783?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mariamabarham", "html_url": "https://github.com/mariamabarham", "followers_url": "https://api.githu...
[]
closed
false
null
[]
null
[ "The dummy data file was wrong. I only fixed it for the book config. Even though the tests are all green here, this should also be fixed for all other configs. Could you take a look there @mariamabarham ? ", "Hi @mariamabarham \r\n\r\nFLUE can indeed become a very interesting benchmark for french NLP !\r\nUnfortu...
1,591,636,269,000
1,594,885,859,000
1,594,885,859,000
CONTRIBUTOR
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/253", "html_url": "https://github.com/huggingface/datasets/pull/253", "diff_url": "https://github.com/huggingface/datasets/pull/253.diff", "patch_url": "https://github.com/huggingface/datasets/pull/253.patch", "merged_at": null }
This PR adds the FLUE dataset as requested in issue #223. @lbourdois made a detailed description in that issue.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/253/reactions", "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/253/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/252
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/252/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/252/comments
https://api.github.com/repos/huggingface/datasets/issues/252/events
https://github.com/huggingface/datasets/issues/252
634,563,239
MDU6SXNzdWU2MzQ1NjMyMzk=
252
NonMatchingSplitsSizesError error when reading the IMDB dataset
{ "login": "antmarakis", "id": 17463361, "node_id": "MDQ6VXNlcjE3NDYzMzYx", "avatar_url": "https://avatars.githubusercontent.com/u/17463361?v=4", "gravatar_id": "", "url": "https://api.github.com/users/antmarakis", "html_url": "https://github.com/antmarakis", "followers_url": "https://api.github.com/use...
[]
closed
false
null
[]
null
[ "I just tried on my side and I didn't encounter your problem.\r\nApparently the script doesn't generate all the examples on your side.\r\n\r\nCan you provide the version of `nlp` you're using ?\r\nCan you try to clear your cache and re-run the code ?", "I updated it, that was it, thanks!", "Hello, I am facing t...
1,591,619,184,000
1,630,077,658,000
1,591,624,886,000
NONE
null
null
null
Hi! I am trying to load the `imdb` dataset with this line: `dataset = nlp.load_dataset('imdb', data_dir='/A/PATH', cache_dir='/A/PATH')` but I am getting the following error: ``` Traceback (most recent call last): File "<stdin>", line 1, in <module> File "/mounts/Users/cisintern/antmarakis/anaconda3/...
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/252/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/252/timeline
null
completed
false
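Updating `nlp` and regenerating the cache resolved it here. If the mismatch persists, the verification step could be skipped as a last resort; a sketch assuming the `ignore_verifications` flag available in releases of this era:

```python
import nlp

# Last resort: skip split-size/checksum verification.
# `ignore_verifications` is the nlp-era flag name; later `datasets`
# releases renamed this option.
dataset = nlp.load_dataset('imdb', ignore_verifications=True)
```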
https://api.github.com/repos/huggingface/datasets/issues/251
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/251/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/251/comments
https://api.github.com/repos/huggingface/datasets/issues/251/events
https://github.com/huggingface/datasets/pull/251
634,544,977
MDExOlB1bGxSZXF1ZXN0NDMxMDgwMDkw
251
Better access to all dataset information
{ "login": "thomwolf", "id": 7353373, "node_id": "MDQ6VXNlcjczNTMzNzM=", "avatar_url": "https://avatars.githubusercontent.com/u/7353373?v=4", "gravatar_id": "", "url": "https://api.github.com/users/thomwolf", "html_url": "https://github.com/thomwolf", "followers_url": "https://api.github.com/users/thomw...
[]
closed
false
null
[]
null
[]
1,591,617,410,000
1,591,949,580,000
1,591,949,578,000
MEMBER
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/251", "html_url": "https://github.com/huggingface/datasets/pull/251", "diff_url": "https://github.com/huggingface/datasets/pull/251.diff", "patch_url": "https://github.com/huggingface/datasets/pull/251.patch", "merged_at": 1591949578000 }
Moves all the dataset info down one level, from `dataset.info.XXX` to `dataset.XXX`. This way it's easier to access `dataset.features['label']`, for instance. Also, adds the original split instructions used to create the dataset in `dataset.split`. Ex: ``` from nlp import load_dataset stsb = load_dataset('glue', name=...
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/251/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/251/timeline
null
null
true
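After this change the info fields are reachable directly on the dataset object; a minimal sketch of the flattened access pattern:

```python
from nlp import load_dataset

stsb = load_dataset('glue', name='stsb', split='train')

print(stsb.features['label'])  # feature type of the label column
print(stsb.split)              # the split instruction used to build it
print(stsb.description)        # formerly stsb.info.description
```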
https://api.github.com/repos/huggingface/datasets/issues/250
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/250/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/250/comments
https://api.github.com/repos/huggingface/datasets/issues/250/events
https://github.com/huggingface/datasets/pull/250
634,416,751
MDExOlB1bGxSZXF1ZXN0NDMwOTcyMzg4
250
Remove checksum download in c4
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoest...
[]
closed
false
null
[]
null
[ "Commenting again in case [previous thread](https://github.com/huggingface/nlp/pull/233) was inactive.\r\n\r\n@lhoestq I am facing `IsADirectoryError` while downloading with this command.\r\nCan you pls look into it & help me.\r\nI'm using version 0.4.0 of `nlp`.\r\n\r\n```\r\ndataset = load_dataset(\"c4\", 'en', d...
1,591,607,580,000
1,598,339,096,000
1,591,607,819,000
MEMBER
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/250", "html_url": "https://github.com/huggingface/datasets/pull/250", "diff_url": "https://github.com/huggingface/datasets/pull/250.diff", "patch_url": "https://github.com/huggingface/datasets/pull/250.patch", "merged_at": 1591607819000 }
There was a line from the original tfds script that was still there and causing issues when loading the c4 script. This one should fix #233 and allow anyone to load the c4 script to generate the dataset.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/250/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/250/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/249
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/249/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/249/comments
https://api.github.com/repos/huggingface/datasets/issues/249/events
https://github.com/huggingface/datasets/issues/249
633,393,443
MDU6SXNzdWU2MzMzOTM0NDM=
249
[Dataset created] some critical small issues when I was creating a dataset
{ "login": "richarddwang", "id": 17963619, "node_id": "MDQ6VXNlcjE3OTYzNjE5", "avatar_url": "https://avatars.githubusercontent.com/u/17963619?v=4", "gravatar_id": "", "url": "https://api.github.com/users/richarddwang", "html_url": "https://github.com/richarddwang", "followers_url": "https://api.github.c...
[]
closed
false
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoest...
[ { "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.git...
null
[ "Thanks for noticing all these :) They should be easy to fix indeed", "Alright I think I fixed all the problems you mentioned. Thanks again, that will be useful for many people.\r\nThere is still more work needed for point 7. but we plan to have some nice docs soon." ]
1,591,534,734,000
1,591,950,531,000
1,591,950,531,000
CONTRIBUTOR
null
null
null
Hi, I successfully created a dataset and have made PR #248. But I encountered several problems while creating it, and those should be easy to fix. 1. "dataset_info.json not found" should be fixed by #241; eager for it to be merged. 2. Forced to install `apache_beam`. If we should install it, then it m...
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/249/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/249/timeline
null
completed
false
https://api.github.com/repos/huggingface/datasets/issues/248
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/248/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/248/comments
https://api.github.com/repos/huggingface/datasets/issues/248/events
https://github.com/huggingface/datasets/pull/248
633,390,427
MDExOlB1bGxSZXF1ZXN0NDMwMDQ0MzU0
248
add Toronto BooksCorpus
{ "login": "richarddwang", "id": 17963619, "node_id": "MDQ6VXNlcjE3OTYzNjE5", "avatar_url": "https://avatars.githubusercontent.com/u/17963619?v=4", "gravatar_id": "", "url": "https://api.github.com/users/richarddwang", "html_url": "https://github.com/richarddwang", "followers_url": "https://api.github.c...
[]
closed
false
null
[]
null
[ "Thanks for adding this one !\r\n\r\nAbout the three points you mentioned:\r\n1. I think the `toronto_books_corpus` branch can be removed @mariamabarham ? \r\n2. You can use the download manager to download from google drive. For you case you can just do something like \r\n```python\r\nURL = \"https://drive.google....
1,591,534,496,000
1,591,951,503,000
1,591,951,502,000
CONTRIBUTOR
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/248", "html_url": "https://github.com/huggingface/datasets/pull/248", "diff_url": "https://github.com/huggingface/datasets/pull/248.diff", "patch_url": "https://github.com/huggingface/datasets/pull/248.patch", "merged_at": 1591951502000 }
1. I knew there is a branch `toronto_books_corpus` - After I downloaded it, I found it is all non-English and only has one row. - It seems that it cites the wrong paper - according to papers using it, it is called `BooksCorpus`, not `TorontoBooksCorpus` 2. It uses a text mirror on google drive - `bookscorpu...
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/248/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/248/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/247
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/247/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/247/comments
https://api.github.com/repos/huggingface/datasets/issues/247/events
https://github.com/huggingface/datasets/pull/247
632,380,078
MDExOlB1bGxSZXF1ZXN0NDI5MTMwMzQ2
247
Make all dataset downloads deterministic by applying `sorted` to glob and os.listdir
{ "login": "patrickvonplaten", "id": 23423619, "node_id": "MDQ6VXNlcjIzNDIzNjE5", "avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4", "gravatar_id": "", "url": "https://api.github.com/users/patrickvonplaten", "html_url": "https://github.com/patrickvonplaten", "followers_url": "https://...
[]
closed
false
null
[]
null
[ "That's great!\r\n\r\nI think it would be nice to test \"deterministic-ness\" of datasets in CI if we can do it (should be left for future PR of course)\r\n\r\nHere is a possibility (maybe there are other ways to do it): given that we may soon have efficient and large-scale hashing (cf our discussion on versioning/...
1,591,441,330,000
1,591,607,896,000
1,591,607,894,000
MEMBER
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/247", "html_url": "https://github.com/huggingface/datasets/pull/247", "diff_url": "https://github.com/huggingface/datasets/pull/247.diff", "patch_url": "https://github.com/huggingface/datasets/pull/247.patch", "merged_at": 1591607894000 }
This PR makes all dataset loading deterministic by applying `sorted()` to all `glob.glob` and `os.listdir` statements. Are there other "non-deterministic" functions apart from `glob.glob()` and `os.listdir()` that you can think of, @thomwolf @lhoestq @mariamabarham @jplu? **Important**: It does break backward c...
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/247/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/247/timeline
null
null
true
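The change is mechanical: every filesystem enumeration gets wrapped in `sorted()` so the order no longer depends on the OS or filesystem; a minimal before/after sketch:

```python
import glob
import os

# Before: order depends on the filesystem, so shards may be read
# in a different order on different machines
files = glob.glob('data/*.json')
names = os.listdir('data')

# After: identical, deterministic order everywhere
files = sorted(glob.glob('data/*.json'))
names = sorted(os.listdir('data'))
```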
https://api.github.com/repos/huggingface/datasets/issues/246
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/246/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/246/comments
https://api.github.com/repos/huggingface/datasets/issues/246/events
https://github.com/huggingface/datasets/issues/246
632,380,054
MDU6SXNzdWU2MzIzODAwNTQ=
246
What is the best way to cache a dataset?
{ "login": "Mistobaan", "id": 112599, "node_id": "MDQ6VXNlcjExMjU5OQ==", "avatar_url": "https://avatars.githubusercontent.com/u/112599?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Mistobaan", "html_url": "https://github.com/Mistobaan", "followers_url": "https://api.github.com/users/Mist...
[]
closed
false
null
[]
null
[ "Everything is already cached by default in 🤗nlp (in particular dataset\nloading and all the “map()” operations) so I don’t think you need to do any\nspecific caching in streamlit.\n\nTell us if you feel like it’s not the case.\n\nOn Sat, 6 Jun 2020 at 13:02, Fabrizio Milo <notifications@github.com> wrote:\n\n> Fo...
1,591,441,327,000
1,594,286,107,000
1,594,286,107,000
NONE
null
null
null
For example, if I want to use streamlit with an nlp dataset: ``` @st.cache def load_data(): return nlp.load_dataset('squad') ``` This code raises the error "uncachable object". Right now I just fixed it with a constant for my specific case: ``` @st.cache(hash_funcs={pyarrow.lib.Buffer: lambda b: 0}) ```...
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/246/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/246/timeline
null
completed
false
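The workaround in the issue pins a constant hash for the pyarrow buffers streamlit cannot hash; a minimal self-contained sketch, safe as long as the on-disk dataset does not change between reruns:

```python
import nlp
import pyarrow
import streamlit as st

# streamlit cannot hash pyarrow.lib.Buffer; returning a constant skips
# hashing it, relying on nlp's own on-disk caching instead.
@st.cache(hash_funcs={pyarrow.lib.Buffer: lambda b: 0})
def load_data():
    return nlp.load_dataset('squad')
```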
https://api.github.com/repos/huggingface/datasets/issues/245
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/245/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/245/comments
https://api.github.com/repos/huggingface/datasets/issues/245/events
https://github.com/huggingface/datasets/issues/245
631,985,108
MDU6SXNzdWU2MzE5ODUxMDg=
245
SST-2 test labels are all -1
{ "login": "jxmorris12", "id": 13238952, "node_id": "MDQ6VXNlcjEzMjM4OTUy", "avatar_url": "https://avatars.githubusercontent.com/u/13238952?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jxmorris12", "html_url": "https://github.com/jxmorris12", "followers_url": "https://api.github.com/use...
[]
closed
false
null
[]
null
[ "this also happened to me with `nlp.load_dataset('glue', 'mnli')`", "Yes, this is because the test sets for glue are hidden so the labels are\nnot publicly available. You can read the glue paper for more details.\n\nOn Sat, 6 Jun 2020 at 18:16, Jack Morris <notifications@github.com> wrote:\n\n> this also happened...
1,591,393,302,000
1,638,924,452,000
1,591,462,601,000
CONTRIBUTOR
null
null
null
I'm trying to test a model on the SST-2 task, but all the labels I see in the test set are -1. ``` >>> import nlp >>> glue = nlp.load_dataset('glue', 'sst2') >>> glue {'train': Dataset(schema: {'sentence': 'string', 'label': 'int64', 'idx': 'int32'}, num_rows: 67349), 'validation': Dataset(schema: {'sentence': 'st...
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/245/reactions", "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/245/timeline
null
completed
false
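Since GLUE test labels are withheld and encoded as -1, evaluation has to target the validation split instead; a minimal sketch:

```python
import nlp

glue = nlp.load_dataset('glue', 'sst2')

# Test labels are hidden (-1); score against the validation split
val = glue['validation']
assert all(label != -1 for label in val['label'])
```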
https://api.github.com/repos/huggingface/datasets/issues/244
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/244/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/244/comments
https://api.github.com/repos/huggingface/datasets/issues/244/events
https://github.com/huggingface/datasets/pull/244
631,869,155
MDExOlB1bGxSZXF1ZXN0NDI4NjgxMTcx
244
Add Allociné Dataset
{ "login": "TheophileBlard", "id": 37028092, "node_id": "MDQ6VXNlcjM3MDI4MDky", "avatar_url": "https://avatars.githubusercontent.com/u/37028092?v=4", "gravatar_id": "", "url": "https://api.github.com/users/TheophileBlard", "html_url": "https://github.com/TheophileBlard", "followers_url": "https://api.gi...
[]
closed
false
null
[]
null
[ "great work @TheophileBlard ", "LGTM, thanks a lot for adding dummy data tests :-) Was it difficult to create the correct dummy data folder? ", "It was pretty easy actually. Documentation is on point !" ]
1,591,384,766,000
1,591,861,646,000
1,591,861,646,000
CONTRIBUTOR
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/244", "html_url": "https://github.com/huggingface/datasets/pull/244", "diff_url": "https://github.com/huggingface/datasets/pull/244.diff", "patch_url": "https://github.com/huggingface/datasets/pull/244.patch", "merged_at": 1591861646000 }
This is a French binary sentiment classification dataset, which was used to train this model: https://huggingface.co/tblard/tf-allocine. Basically, it's a French "IMDB" dataset, with more reviews. More info on [this repo](https://github.com/TheophileBlard/french-sentiment-analysis-with-bert).
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/244/reactions", "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/244/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/243
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/243/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/243/comments
https://api.github.com/repos/huggingface/datasets/issues/243/events
https://github.com/huggingface/datasets/pull/243
631,735,848
MDExOlB1bGxSZXF1ZXN0NDI4NTY2MTEy
243
Specify utf-8 encoding for GLUE
{ "login": "patpizio", "id": 15801338, "node_id": "MDQ6VXNlcjE1ODAxMzM4", "avatar_url": "https://avatars.githubusercontent.com/u/15801338?v=4", "gravatar_id": "", "url": "https://api.github.com/users/patpizio", "html_url": "https://github.com/patpizio", "followers_url": "https://api.github.com/users/pat...
[]
closed
false
null
[]
null
[ "Thanks for fixing the encoding :)" ]
1,591,374,780,000
1,592,428,566,000
1,591,605,721,000
CONTRIBUTOR
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/243", "html_url": "https://github.com/huggingface/datasets/pull/243", "diff_url": "https://github.com/huggingface/datasets/pull/243.diff", "patch_url": "https://github.com/huggingface/datasets/pull/243.patch", "merged_at": 1591605721000 }
Addresses #242. This makes the GLUE-MNLI dataset readable on my machine; not sure if it's a Windows-only bug.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/243/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/243/timeline
null
null
true
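The fix is to make the encoding explicit wherever the script opens text files, since Windows defaults `open()` to the locale encoding; a minimal sketch (the path below is illustrative, not the script's actual layout):

```python
# 'train.tsv' is an illustrative path for a GLUE data file
with open('train.tsv', encoding='utf-8') as f:
    for line in f:
        fields = line.rstrip('\n').split('\t')
```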
https://api.github.com/repos/huggingface/datasets/issues/242
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/242/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/242/comments
https://api.github.com/repos/huggingface/datasets/issues/242/events
https://github.com/huggingface/datasets/issues/242
631,733,683
MDU6SXNzdWU2MzE3MzM2ODM=
242
UnicodeDecodeError when downloading GLUE-MNLI
{ "login": "patpizio", "id": 15801338, "node_id": "MDQ6VXNlcjE1ODAxMzM4", "avatar_url": "https://avatars.githubusercontent.com/u/15801338?v=4", "gravatar_id": "", "url": "https://api.github.com/users/patpizio", "html_url": "https://github.com/patpizio", "followers_url": "https://api.github.com/users/pat...
[]
closed
false
null
[]
null
[ "It should be good now, thanks for noticing and fixing it ! I would say that it was because you are on windows but not 100% sure", "On Windows Python supports Unicode almost everywhere, but one of the notable exceptions is open() where it uses the locale encoding schema. So platform independent python scripts wou...
1,591,374,601,000
1,591,718,807,000
1,591,605,903,000
CONTRIBUTOR
null
null
null
When I run ```python dataset = nlp.load_dataset('glue', 'mnli') ``` I get an encoding error (could it be because I'm using Windows?): ```python # Lots of error log lines later... ~\Miniconda3\envs\nlp\lib\site-packages\tqdm\std.py in __iter__(self) 1128 try: -> 1129 for obj in iterable:...
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/242/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/242/timeline
null
completed
false
https://api.github.com/repos/huggingface/datasets/issues/241
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/241/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/241/comments
https://api.github.com/repos/huggingface/datasets/issues/241/events
https://github.com/huggingface/datasets/pull/241
631,703,079
MDExOlB1bGxSZXF1ZXN0NDI4NTQwMDM0
241
Fix empty cache dir
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoest...
[]
closed
false
null
[]
null
[ "Looks great! Will this change force all cached datasets to be redownloaded? But even if it does, it shoud not be a big problem, I think", "> Looks great! Will this change force all cached datasets to be redownloaded? But even if it does, it shoud not be a big problem, I think\r\n\r\nNo it shouldn't force to redo...
1,591,371,922,000
1,591,605,333,000
1,591,605,331,000
MEMBER
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/241", "html_url": "https://github.com/huggingface/datasets/pull/241", "diff_url": "https://github.com/huggingface/datasets/pull/241.diff", "patch_url": "https://github.com/huggingface/datasets/pull/241.patch", "merged_at": 1591605331000 }
If the cache dir of a dataset is empty, the dataset fails to load and throws a FileNotFoundError. We could end up with an empty cache dir because there was a line in the code that created the cache dir without using a temp dir. Using a temp dir is useful as it gets renamed to the real cache dir only if the full process is...
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/241/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/241/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/240
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/240/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/240/comments
https://api.github.com/repos/huggingface/datasets/issues/240/events
https://github.com/huggingface/datasets/issues/240
631,434,677
MDU6SXNzdWU2MzE0MzQ2Nzc=
240
Deterministic dataset loading
{ "login": "patrickvonplaten", "id": 23423619, "node_id": "MDQ6VXNlcjIzNDIzNjE5", "avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4", "gravatar_id": "", "url": "https://api.github.com/users/patrickvonplaten", "html_url": "https://github.com/patrickvonplaten", "followers_url": "https://...
[]
closed
false
null
[]
null
[ "Yes good point !", "I think using `sorted(glob.glob())` would actually solve this problem. Can you think of other reasons why dataset loading might not be deterministic? @mariamabarham @yjernite @lhoestq @thomwolf . \r\n\r\nI can do a sweep through the dataset scripts and fix the glob.glob() if you guys are ok w...
1,591,347,806,000
1,591,607,894,000
1,591,607,894,000
MEMBER
null
null
null
When calling: ```python import nlp dataset = nlp.load_dataset("trivia_qa", split="validation[:1%]") ``` the resulting dataset is not deterministic over different google colabs. After talking to @thomwolf, I suspect the reason to be the use of `glob.glob` in line: https://github.com/huggingface/nlp/blob/2e0...
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/240/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/240/timeline
null
completed
false
https://api.github.com/repos/huggingface/datasets/issues/239
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/239/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/239/comments
https://api.github.com/repos/huggingface/datasets/issues/239/events
https://github.com/huggingface/datasets/issues/239
631,340,440
MDU6SXNzdWU2MzEzNDA0NDA=
239
[Creating new dataset] Not found dataset_info.json
{ "login": "richarddwang", "id": 17963619, "node_id": "MDQ6VXNlcjE3OTYzNjE5", "avatar_url": "https://avatars.githubusercontent.com/u/17963619?v=4", "gravatar_id": "", "url": "https://api.github.com/users/richarddwang", "html_url": "https://github.com/richarddwang", "followers_url": "https://api.github.c...
[]
closed
false
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoest...
[ { "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.git...
null
[ "I think you can just `rm` this directory and it should be good :)", "@lhoestq - this seems to happen quite often (already the 2nd issue). Can we maybe delete this automatically?", "Yes I have an idea of what's going on. I'm sure I can fix that", "Hi, I rebase my local copy to `fix-empty-cache-dir`, and try t...
1,591,337,704,000
1,591,534,864,000
1,591,534,864,000
CONTRIBUTOR
null
null
null
Hi, I am trying to create Toronto Book Corpus (#131). I ran `~/nlp % python nlp-cli test datasets/bookcorpus --save_infos --all_configs` but this doesn't create `dataset_info.json`, yet the script tries to use it: ``` INFO:nlp.load:Checking datasets/bookcorpus/bookcorpus.py for additional imports. INFO:filelock:Lock 1397953257...
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/239/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/239/timeline
null
completed
false
https://api.github.com/repos/huggingface/datasets/issues/238
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/238/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/238/comments
https://api.github.com/repos/huggingface/datasets/issues/238/events
https://github.com/huggingface/datasets/issues/238
631,260,143
MDU6SXNzdWU2MzEyNjAxNDM=
238
[Metric] Bertscore : Warning : Empty candidate sentence; Setting recall to be 0.
{ "login": "astariul", "id": 43774355, "node_id": "MDQ6VXNlcjQzNzc0MzU1", "avatar_url": "https://avatars.githubusercontent.com/u/43774355?v=4", "gravatar_id": "", "url": "https://api.github.com/users/astariul", "html_url": "https://github.com/astariul", "followers_url": "https://api.github.com/users/ast...
[ { "id": 2067393914, "node_id": "MDU6TGFiZWwyMDY3MzkzOTE0", "url": "https://api.github.com/repos/huggingface/datasets/labels/metric%20bug", "name": "metric bug", "color": "25b21e", "default": false, "description": "A bug in a metric script" } ]
closed
false
null
[]
null
[ "This print statement comes from the official implementation of bert_score (see [here](https://github.com/Tiiiger/bert_score/blob/master/bert_score/utils.py#L343)). The warning shows up only if the attention mask outputs no candidate.\r\nRight now we want to only use official code for metrics to have fair evaluatio...
1,591,323,287,000
1,593,450,619,000
1,593,450,619,000
NONE
null
null
null
When running BERT-Score, I'm getting this warning: > Warning: Empty candidate sentence; Setting recall to be 0. Code: ``` import nlp metric = nlp.load_metric("bertscore") scores = metric.compute(["swag", "swags"], ["swags", "totally something different"], lang="en", device=0) ``` --- **What am I do...
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/238/reactions", "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/238/timeline
null
completed
false
https://api.github.com/repos/huggingface/datasets/issues/237
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/237/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/237/comments
https://api.github.com/repos/huggingface/datasets/issues/237/events
https://github.com/huggingface/datasets/issues/237
631,199,940
MDU6SXNzdWU2MzExOTk5NDA=
237
Can't download MultiNLI
{ "login": "patpizio", "id": 15801338, "node_id": "MDQ6VXNlcjE1ODAxMzM4", "avatar_url": "https://avatars.githubusercontent.com/u/15801338?v=4", "gravatar_id": "", "url": "https://api.github.com/users/patpizio", "html_url": "https://github.com/patpizio", "followers_url": "https://api.github.com/users/pat...
[]
closed
false
null
[]
null
[ "You should use `load_dataset('glue', 'mnli')`", "Thanks! I thought I had to use the same code displayed in the live viewer:\r\n```python\r\n!pip install nlp\r\nfrom nlp import load_dataset\r\ndataset = load_dataset('multi_nli', 'plain_text')\r\n```\r\nYour suggestion works, even if then I got a different issue (...
1,591,311,921,000
1,591,440,694,000
1,591,440,694,000
CONTRIBUTOR
null
null
null
When I try to download MultiNLI with ```python dataset = load_dataset('multi_nli') ``` I get this long error: ```python --------------------------------------------------------------------------- OSError Traceback (most recent call last) <ipython-input-13-3b11f6be4cb9> in <m...
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/237/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/237/timeline
null
completed
false
https://api.github.com/repos/huggingface/datasets/issues/236
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/236/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/236/comments
https://api.github.com/repos/huggingface/datasets/issues/236/events
https://github.com/huggingface/datasets/pull/236
631,099,875
MDExOlB1bGxSZXF1ZXN0NDI4MDUwNzI4
236
CompGuessWhat?! dataset
{ "login": "aleSuglia", "id": 1479733, "node_id": "MDQ6VXNlcjE0Nzk3MzM=", "avatar_url": "https://avatars.githubusercontent.com/u/1479733?v=4", "gravatar_id": "", "url": "https://api.github.com/users/aleSuglia", "html_url": "https://github.com/aleSuglia", "followers_url": "https://api.github.com/users/al...
[]
closed
false
null
[]
null
[ "Hi @aleSuglia, thanks for this great PR. Indeed you can have both datasets in one file. You need to add a config class which will allows you to specify the different subdataset names and then you will be able to load them as follow.\r\nnlp.load_dataset(\"compguesswhat\", \"compguesswhat-gameplay\") \r\nnlp.load_d...
1,591,299,950,000
1,591,868,622,000
1,591,861,521,000
CONTRIBUTOR
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/236", "html_url": "https://github.com/huggingface/datasets/pull/236", "diff_url": "https://github.com/huggingface/datasets/pull/236.diff", "patch_url": "https://github.com/huggingface/datasets/pull/236.patch", "merged_at": 1591861521000 }
Hello, Thanks for the amazing library that you put together. I'm Alessandro Suglia, the first author of CompGuessWhat?!, a recently released dataset for grounded language learning accepted to ACL 2020 ([https://compguesswhat.github.io](https://compguesswhat.github.io)). This pull-request adds the CompGuessWhat?! ...
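As the review comments on this PR note, once a config class is added each sub-dataset loads by its config name. A minimal sketch (only the first config name appears verbatim in the discussion; any other configs would load the same way):

```python
import nlp

# Config name taken from the review discussion on this PR.
gameplay = nlp.load_dataset("compguesswhat", "compguesswhat-gameplay")
```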
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/236/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/236/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/235
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/235/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/235/comments
https://api.github.com/repos/huggingface/datasets/issues/235/events
https://github.com/huggingface/datasets/pull/235
630,952,297
MDExOlB1bGxSZXF1ZXN0NDI3OTM1MjQ0
235
Add experimental datasets
{ "login": "yjernite", "id": 10469459, "node_id": "MDQ6VXNlcjEwNDY5NDU5", "avatar_url": "https://avatars.githubusercontent.com/u/10469459?v=4", "gravatar_id": "", "url": "https://api.github.com/users/yjernite", "html_url": "https://github.com/yjernite", "followers_url": "https://api.github.com/users/yje...
[]
closed
false
null
[]
null
[ "I think it would be nicer to not create a new folder `datasets_experimental` , but just put your datasets also into the folder `datasets` for the following reasons:\r\n\r\n- From my point of view, the datasets are not very different from the other datasets (assuming that we soon have C4, and the beam datasets) so ...
1,591,286,096,000
1,591,976,335,000
1,591,976,335,000
MEMBER
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/235", "html_url": "https://github.com/huggingface/datasets/pull/235", "diff_url": "https://github.com/huggingface/datasets/pull/235.diff", "patch_url": "https://github.com/huggingface/datasets/pull/235.patch", "merged_at": 1591976335000 }
## Adding an *experimental datasets* folder After using the 🤗nlp library for some time, I find that while it makes it super easy to create new memory-mapped datasets with lots of cool utilities, a lot of what I want to do doesn't work well with the current `MockDownloader` based testing paradigm, making it hard to ...
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/235/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/235/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/234
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/234/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/234/comments
https://api.github.com/repos/huggingface/datasets/issues/234/events
https://github.com/huggingface/datasets/issues/234
630,534,427
MDU6SXNzdWU2MzA1MzQ0Mjc=
234
Huggingface NLP, Uploading custom dataset
{ "login": "Nouman97", "id": 42269506, "node_id": "MDQ6VXNlcjQyMjY5NTA2", "avatar_url": "https://avatars.githubusercontent.com/u/42269506?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Nouman97", "html_url": "https://github.com/Nouman97", "followers_url": "https://api.github.com/users/Nou...
[]
closed
false
null
[]
null
[ "What do you mean 'custom' ? You may want to elaborate on it when ask a question.\r\n\r\nAnyway, there are two things you may interested\r\n`nlp.Dataset.from_file` and `load_dataset(..., cache_dir=)`", "To load a dataset you need to have a script that defines the format of the examples, the splits and the way to ...
1,591,250,346,000
1,594,028,006,000
1,594,028,006,000
NONE
null
null
null
Hello, Does anyone know how we can load our custom dataset using the nlp.load command? Let's say I have a dataset in the same format as squad-v1.1; how am I supposed to load it using huggingface nlp? Thank you!
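As the comments above suggest, loading a custom dataset at this point meant writing a dataset script. A minimal sketch, assuming a hypothetical local script `my_squad.py` adapted from the squad loader to read your own SQuAD-v1.1-format files:

```python
import nlp

# `./my_squad/my_squad.py` is a hypothetical copy of the squad dataset
# script, edited to point at your own SQuAD-v1.1-format JSON files.
dataset = nlp.load_dataset("./my_squad/my_squad.py")
print(dataset["train"][0])
```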
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/234/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/234/timeline
null
completed
false
https://api.github.com/repos/huggingface/datasets/issues/233
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/233/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/233/comments
https://api.github.com/repos/huggingface/datasets/issues/233/events
https://github.com/huggingface/datasets/issues/233
630,432,132
MDU6SXNzdWU2MzA0MzIxMzI=
233
Fail to download c4 english corpus
{ "login": "donggyukimc", "id": 16605764, "node_id": "MDQ6VXNlcjE2NjA1NzY0", "avatar_url": "https://avatars.githubusercontent.com/u/16605764?v=4", "gravatar_id": "", "url": "https://api.github.com/users/donggyukimc", "html_url": "https://github.com/donggyukimc", "followers_url": "https://api.github.com/...
[]
closed
false
null
[]
null
[ "Hello ! Thanks for noticing this bug, let me fix that.\r\n\r\nAlso for information, as specified in the changelog of the latest release, C4 currently needs to have a runtime for apache beam to work on. Apache beam is used to process this very big dataset and it can work on dataflow, spark, flink, apex, etc. You ca...
1,591,232,798,000
1,610,090,252,000
1,591,607,819,000
NONE
null
null
null
I ran the following code to download the C4 English corpus. ``` dataset = nlp.load_dataset('c4', 'en', beam_runner='DirectRunner', data_dir='/mypath') ``` and I met the following failure: ``` Downloading and preparing dataset c4/en (download: Unknown size, generated: Unknown size, total: Unknown size) to /home/adam/....
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/233/reactions", "total_count": 3, "+1": 3, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/233/timeline
null
completed
false
https://api.github.com/repos/huggingface/datasets/issues/232
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/232/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/232/comments
https://api.github.com/repos/huggingface/datasets/issues/232/events
https://github.com/huggingface/datasets/pull/232
630,029,568
MDExOlB1bGxSZXF1ZXN0NDI3MjI5NDcy
232
Nlp cli fix endpoints
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoest...
[]
closed
false
null
[]
null
[ "LGTM 👍 " ]
1,591,193,439,000
1,591,606,978,000
1,591,606,977,000
MEMBER
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/232", "html_url": "https://github.com/huggingface/datasets/pull/232", "diff_url": "https://github.com/huggingface/datasets/pull/232.diff", "patch_url": "https://github.com/huggingface/datasets/pull/232.patch", "merged_at": 1591606977000 }
With this PR users will be able to upload their own datasets and metrics. As mentioned in #181, I had to use the new endpoints and revert the use of dataclasses (just in case we have changes in the API in the future). We now distinguish commands for datasets and commands for metrics: ```bash nlp-cli upload_data...
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/232/reactions", "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/232/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/231
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/231/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/231/comments
https://api.github.com/repos/huggingface/datasets/issues/231/events
https://github.com/huggingface/datasets/pull/231
629,988,694
MDExOlB1bGxSZXF1ZXN0NDI3MTk3MTcz
231
Add .download to MockDownloadManager
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoest...
[]
closed
false
null
[]
null
[]
1,591,190,400,000
1,591,194,356,000
1,591,194,355,000
MEMBER
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/231", "html_url": "https://github.com/huggingface/datasets/pull/231", "diff_url": "https://github.com/huggingface/datasets/pull/231.diff", "patch_url": "https://github.com/huggingface/datasets/pull/231.patch", "merged_at": 1591194354000 }
One method from the DownloadManager was missing and some users couldn't run the tests because of that. @yjernite
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/231/reactions", "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/231/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/230
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/230/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/230/comments
https://api.github.com/repos/huggingface/datasets/issues/230/events
https://github.com/huggingface/datasets/pull/230
629,983,684
MDExOlB1bGxSZXF1ZXN0NDI3MTkzMTQ0
230
Don't force to install apache beam for wikipedia dataset
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoest...
[]
closed
false
null
[]
null
[]
1,591,189,987,000
1,591,194,849,000
1,591,194,847,000
MEMBER
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/230", "html_url": "https://github.com/huggingface/datasets/pull/230", "diff_url": "https://github.com/huggingface/datasets/pull/230.diff", "patch_url": "https://github.com/huggingface/datasets/pull/230.patch", "merged_at": 1591194847000 }
As pointed out in #227, we shouldn't force users to install Apache Beam if the processed dataset can be downloaded. I moved the imports of some datasets to avoid this problem.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/230/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/230/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/229
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/229/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/229/comments
https://api.github.com/repos/huggingface/datasets/issues/229/events
https://github.com/huggingface/datasets/pull/229
629,956,490
MDExOlB1bGxSZXF1ZXN0NDI3MTcxMzc5
229
Rename dataset_infos.json to dataset_info.json
{ "login": "aswin-giridhar", "id": 11817160, "node_id": "MDQ6VXNlcjExODE3MTYw", "avatar_url": "https://avatars.githubusercontent.com/u/11817160?v=4", "gravatar_id": "", "url": "https://api.github.com/users/aswin-giridhar", "html_url": "https://github.com/aswin-giridhar", "followers_url": "https://api.gi...
[]
closed
false
null
[]
null
[ "\r\nThis was actually the right name. `dataset_infos.json` is used to have the infos of all the dataset configurations.\r\n\r\nOn the other hand `dataset_info.json` (without 's') is a cache file with the info of one specific configuration.\r\n\r\nTo fix #228, we probably just have to clear and reload the nlp-viewe...
1,591,187,504,000
1,591,188,774,000
1,591,188,513,000
NONE
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/229", "html_url": "https://github.com/huggingface/datasets/pull/229", "diff_url": "https://github.com/huggingface/datasets/pull/229.diff", "patch_url": "https://github.com/huggingface/datasets/pull/229.patch", "merged_at": null }
As the file required for viewing in the live nlp viewer is named dataset_info.json.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/229/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/229/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/228
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/228/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/228/comments
https://api.github.com/repos/huggingface/datasets/issues/228/events
https://github.com/huggingface/datasets/issues/228
629,952,402
MDU6SXNzdWU2Mjk5NTI0MDI=
228
Not able to access the XNLI dataset
{ "login": "aswin-giridhar", "id": 11817160, "node_id": "MDQ6VXNlcjExODE3MTYw", "avatar_url": "https://avatars.githubusercontent.com/u/11817160?v=4", "gravatar_id": "", "url": "https://api.github.com/users/aswin-giridhar", "html_url": "https://github.com/aswin-giridhar", "followers_url": "https://api.gi...
[ { "id": 2107841032, "node_id": "MDU6TGFiZWwyMTA3ODQxMDMy", "url": "https://api.github.com/repos/huggingface/datasets/labels/nlp-viewer", "name": "nlp-viewer", "color": "94203D", "default": false, "description": "" } ]
closed
false
{ "login": "srush", "id": 35882, "node_id": "MDQ6VXNlcjM1ODgy", "avatar_url": "https://avatars.githubusercontent.com/u/35882?v=4", "gravatar_id": "", "url": "https://api.github.com/users/srush", "html_url": "https://github.com/srush", "followers_url": "https://api.github.com/users/srush/followers", "f...
[ { "login": "srush", "id": 35882, "node_id": "MDQ6VXNlcjM1ODgy", "avatar_url": "https://avatars.githubusercontent.com/u/35882?v=4", "gravatar_id": "", "url": "https://api.github.com/users/srush", "html_url": "https://github.com/srush", "followers_url": "https://api.github.com/users/sr...
null
[ "Added pull request to change the name of the file from dataset_infos.json to dataset_info.json", "Thanks for reporting this bug !\r\nAs it seems to be just a cache problem, I closed your PR.\r\nI think we might just need to clear and reload the `xnli` cache @srush ? ", "Update: The dataset_info.json error is g...
1,591,187,114,000
1,595,007,862,000
1,595,007,862,000
NONE
null
null
null
When I try to access the XNLI dataset, the plain_text option gets selected automatically and then I get the following error. ``` FileNotFoundError: [Errno 2] No such file or directory: '/home/sasha/.cache/huggingface/datasets/xnli/plain_text/1.0.0/dataset_info.json' Traceback: File "/...
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/228/reactions", "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/228/timeline
null
completed
false
https://api.github.com/repos/huggingface/datasets/issues/227
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/227/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/227/comments
https://api.github.com/repos/huggingface/datasets/issues/227/events
https://github.com/huggingface/datasets/issues/227
629,845,704
MDU6SXNzdWU2Mjk4NDU3MDQ=
227
Should we still have to force to install apache_beam to download wikipedia ?
{ "login": "richarddwang", "id": 17963619, "node_id": "MDQ6VXNlcjE3OTYzNjE5", "avatar_url": "https://avatars.githubusercontent.com/u/17963619?v=4", "gravatar_id": "", "url": "https://api.github.com/users/richarddwang", "html_url": "https://github.com/richarddwang", "followers_url": "https://api.github.c...
[]
closed
false
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoest...
[ { "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.git...
null
[ "Thanks for your message 😊 \r\nIndeed users shouldn't have to install those dependencies", "Got it, feel free to close this issue when you think it’s resolved.", "It should be good now :)" ]
1,591,176,800,000
1,591,197,941,000
1,591,197,941,000
CONTRIBUTOR
null
null
null
Hi, first, thanks to @lhoestq's revolutionary work, I successfully downloaded the processed wikipedia according to the doc. 😍😍😍 But on the first try, it told me to install `apache_beam` and `mwparserfromhell`, which I thought wouldn't be needed according to #204; it was kind of confusing at the time. Maybe we s...
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/227/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/227/timeline
null
completed
false
https://api.github.com/repos/huggingface/datasets/issues/226
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/226/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/226/comments
https://api.github.com/repos/huggingface/datasets/issues/226/events
https://github.com/huggingface/datasets/pull/226
628,344,520
MDExOlB1bGxSZXF1ZXN0NDI1OTA0MjEz
226
add BlendedSkillTalk dataset
{ "login": "mariamabarham", "id": 38249783, "node_id": "MDQ6VXNlcjM4MjQ5Nzgz", "avatar_url": "https://avatars.githubusercontent.com/u/38249783?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mariamabarham", "html_url": "https://github.com/mariamabarham", "followers_url": "https://api.githu...
[]
closed
false
null
[]
null
[ "Awesome :D" ]
1,591,008,885,000
1,591,195,043,000
1,591,195,042,000
CONTRIBUTOR
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/226", "html_url": "https://github.com/huggingface/datasets/pull/226", "diff_url": "https://github.com/huggingface/datasets/pull/226.diff", "patch_url": "https://github.com/huggingface/datasets/pull/226.patch", "merged_at": 1591195042000 }
This PR adds the BlendedSkillTalk dataset, which is used to fine-tune BlenderBot.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/226/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/226/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/225
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/225/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/225/comments
https://api.github.com/repos/huggingface/datasets/issues/225/events
https://github.com/huggingface/datasets/issues/225
628,083,366
MDU6SXNzdWU2MjgwODMzNjY=
225
[ROUGE] Different scores with `files2rouge`
{ "login": "astariul", "id": 43774355, "node_id": "MDQ6VXNlcjQzNzc0MzU1", "avatar_url": "https://avatars.githubusercontent.com/u/43774355?v=4", "gravatar_id": "", "url": "https://api.github.com/users/astariul", "html_url": "https://github.com/astariul", "followers_url": "https://api.github.com/users/ast...
[ { "id": 2067400959, "node_id": "MDU6TGFiZWwyMDY3NDAwOTU5", "url": "https://api.github.com/repos/huggingface/datasets/labels/Metric%20discussion", "name": "Metric discussion", "color": "d722e8", "default": false, "description": "Discussions on the metrics" } ]
closed
false
{ "login": "yjernite", "id": 10469459, "node_id": "MDQ6VXNlcjEwNDY5NDU5", "avatar_url": "https://avatars.githubusercontent.com/u/10469459?v=4", "gravatar_id": "", "url": "https://api.github.com/users/yjernite", "html_url": "https://github.com/yjernite", "followers_url": "https://api.github.com/users/yje...
[ { "login": "yjernite", "id": 10469459, "node_id": "MDQ6VXNlcjEwNDY5NDU5", "avatar_url": "https://avatars.githubusercontent.com/u/10469459?v=4", "gravatar_id": "", "url": "https://api.github.com/users/yjernite", "html_url": "https://github.com/yjernite", "followers_url": "https://api....
null
[ "@Colanim unfortunately there are different implementations of the ROUGE metric floating around online which yield different results, and we had to chose one for the package :) We ended up including the one from the google-research repository, which does minimal post-processing before computing the P/R/F scores. If...
1,590,972,636,000
1,591,198,038,000
1,591,198,038,000
NONE
null
null
null
It seems that the ROUGE score of `nlp` is lower than that of `files2rouge`. Here is a self-contained notebook to reproduce both scores: https://colab.research.google.com/drive/14EyAXValB6UzKY9x4rs_T3pyL7alpw_F?usp=sharing --- `nlp`: (Only mid F-scores) >rouge1 0.33508031962733364 rouge2 0.145743337761...
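Per the first comment, the package ships the google-research implementation; a quick way to sanity-check numbers against it directly is to call that package's `RougeScorer` (a minimal sketch; the toy strings are illustrative):

```python
from rouge_score import rouge_scorer  # the google-research implementation

scorer = rouge_scorer.RougeScorer(["rouge1", "rouge2", "rougeL"], use_stemmer=True)
scores = scorer.score("the cat was on the mat",   # reference
                      "the cat sat on the mat")   # prediction
print(scores["rougeL"].fmeasure)
```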
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/225/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/225/timeline
null
completed
false
https://api.github.com/repos/huggingface/datasets/issues/224
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/224/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/224/comments
https://api.github.com/repos/huggingface/datasets/issues/224/events
https://github.com/huggingface/datasets/issues/224
627,791,693
MDU6SXNzdWU2Mjc3OTE2OTM=
224
[Feature Request/Help] BLEURT model -> PyTorch
{ "login": "adamwlev", "id": 6889910, "node_id": "MDQ6VXNlcjY4ODk5MTA=", "avatar_url": "https://avatars.githubusercontent.com/u/6889910?v=4", "gravatar_id": "", "url": "https://api.github.com/users/adamwlev", "html_url": "https://github.com/adamwlev", "followers_url": "https://api.github.com/users/adamw...
[ { "id": 1935892871, "node_id": "MDU6TGFiZWwxOTM1ODkyODcx", "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement", "name": "enhancement", "color": "a2eeef", "default": true, "description": "New feature or request" } ]
closed
false
{ "login": "yjernite", "id": 10469459, "node_id": "MDQ6VXNlcjEwNDY5NDU5", "avatar_url": "https://avatars.githubusercontent.com/u/10469459?v=4", "gravatar_id": "", "url": "https://api.github.com/users/yjernite", "html_url": "https://github.com/yjernite", "followers_url": "https://api.github.com/users/yje...
[ { "login": "yjernite", "id": 10469459, "node_id": "MDQ6VXNlcjEwNDY5NDU5", "avatar_url": "https://avatars.githubusercontent.com/u/10469459?v=4", "gravatar_id": "", "url": "https://api.github.com/users/yjernite", "html_url": "https://github.com/yjernite", "followers_url": "https://api....
null
[ "Is there any update on this? \r\n\r\nThanks!", "Hitting this error when using bleurt with PyTorch ...\r\n\r\n```\r\nUnrecognizedFlagError: Unknown command line flag 'f'\r\n```\r\n... and I'm assuming because it was built for TF specifically. Is there a way to use this metric in PyTorch?", "We currently provid...
1,590,863,440,000
1,630,594,937,000
1,609,754,012,000
NONE
null
null
null
Hi, I am interested in porting Google Research's new BLEURT learned metric to PyTorch (because I wish to do something experimental with language generation and backpropping through BLEURT). I noticed that you guys don't have it yet, so I am partly just asking if you plan to add it (@thomwolf said you want to do so on Tw...
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/224/reactions", "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/224/timeline
null
completed
false
https://api.github.com/repos/huggingface/datasets/issues/223
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/223/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/223/comments
https://api.github.com/repos/huggingface/datasets/issues/223/events
https://github.com/huggingface/datasets/issues/223
627,683,386
MDU6SXNzdWU2Mjc2ODMzODY=
223
[Feature request] Add FLUE dataset
{ "login": "lbourdois", "id": 58078086, "node_id": "MDQ6VXNlcjU4MDc4MDg2", "avatar_url": "https://avatars.githubusercontent.com/u/58078086?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lbourdois", "html_url": "https://github.com/lbourdois", "followers_url": "https://api.github.com/users/...
[ { "id": 2067376369, "node_id": "MDU6TGFiZWwyMDY3Mzc2MzY5", "url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20request", "name": "dataset request", "color": "e99695", "default": false, "description": "Requesting to add a new dataset" } ]
closed
false
null
[]
null
[ "Hi @lbourdois, yes please share it with us", "@mariamabarham \r\nI put all the datasets on this drive: https://1drv.ms/u/s!Ao2Rcpiny7RFinDypq7w-LbXcsx9?e=iVsEDh\r\n\r\n\r\nSome information : \r\n• For FLUE, the quote used is\r\n\r\n> @misc{le2019flaubert,\r\n> title={FlauBERT: Unsupervised Language Model Pre...
1,590,828,735,000
1,607,002,773,000
1,607,002,773,000
NONE
null
null
null
Hi, I think it would be interesting to add the FLUE dataset for francophones or anyone wishing to work on French. In other requests, I read that you are already working on some datasets, and I was wondering if FLUE was planned. If it is not the case, I can provide each of the cleaned FLUE datasets (in the form...
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/223/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/223/timeline
null
completed
false
https://api.github.com/repos/huggingface/datasets/issues/222
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/222/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/222/comments
https://api.github.com/repos/huggingface/datasets/issues/222/events
https://github.com/huggingface/datasets/issues/222
627,586,690
MDU6SXNzdWU2Mjc1ODY2OTA=
222
Colab Notebook breaks when downloading the squad dataset
{ "login": "carlos-aguayo", "id": 338917, "node_id": "MDQ6VXNlcjMzODkxNw==", "avatar_url": "https://avatars.githubusercontent.com/u/338917?v=4", "gravatar_id": "", "url": "https://api.github.com/users/carlos-aguayo", "html_url": "https://github.com/carlos-aguayo", "followers_url": "https://api.github.co...
[]
closed
false
null
[]
null
[ "The notebook forces version 0.1.0. If I use the latest, things work, I'll run the whole notebook and create a PR.\r\n\r\nBut in the meantime, this issue gets fixed by changing:\r\n`!pip install nlp==0.1.0`\r\nto\r\n`!pip install nlp`", "It still breaks very near the end\r\n\r\n![image](https://user-images.github...
1,590,792,959,000
1,591,230,065,000
1,591,230,065,000
NONE
null
null
null
When I run the notebook in Colab (https://colab.research.google.com/github/huggingface/nlp/blob/master/notebooks/Overview.ipynb), it breaks when running this cell: ![image](https://user-images.githubusercontent.com/338917/83311709-ffd1b800-a1dd-11ea-8394-3a87df0d7f8b.png)
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/222/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/222/timeline
null
completed
false
https://api.github.com/repos/huggingface/datasets/issues/221
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/221/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/221/comments
https://api.github.com/repos/huggingface/datasets/issues/221/events
https://github.com/huggingface/datasets/pull/221
627,300,648
MDExOlB1bGxSZXF1ZXN0NDI1MTI5OTc0
221
Fix tests/test_dataset_common.py
{ "login": "tayciryahmed", "id": 13635495, "node_id": "MDQ6VXNlcjEzNjM1NDk1", "avatar_url": "https://avatars.githubusercontent.com/u/13635495?v=4", "gravatar_id": "", "url": "https://api.github.com/users/tayciryahmed", "html_url": "https://github.com/tayciryahmed", "followers_url": "https://api.github.c...
[]
closed
false
null
[]
null
[ "Thanks ! Good catch :)\r\n\r\nTo fix the CI you can do:\r\n1 - rebase from master\r\n2 - then run `make style` as specified in [CONTRIBUTING.md](https://github.com/huggingface/nlp/blob/master/CONTRIBUTING.md) ?" ]
1,590,761,535,000
1,591,014,042,000
1,590,764,543,000
CONTRIBUTOR
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/221", "html_url": "https://github.com/huggingface/datasets/pull/221", "diff_url": "https://github.com/huggingface/datasets/pull/221.diff", "patch_url": "https://github.com/huggingface/datasets/pull/221.patch", "merged_at": 1590764543000 }
When I run the command `RUN_SLOW=1 pytest tests/test_dataset_common.py::LocalDatasetTest::test_load_real_dataset_arcd` while working on #220, I get the error ` unexpected keyword argument "'download_and_prepare_kwargs'"` at the level of `load_dataset`. Indeed, this [function](https://github.com/huggingface/nlp/blob/ma...
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/221/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/221/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/220
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/220/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/220/comments
https://api.github.com/repos/huggingface/datasets/issues/220/events
https://github.com/huggingface/datasets/pull/220
627,280,683
MDExOlB1bGxSZXF1ZXN0NDI1MTEzMzEy
220
dataset_arcd
{ "login": "tayciryahmed", "id": 13635495, "node_id": "MDQ6VXNlcjEzNjM1NDk1", "avatar_url": "https://avatars.githubusercontent.com/u/13635495?v=4", "gravatar_id": "", "url": "https://api.github.com/users/tayciryahmed", "html_url": "https://github.com/tayciryahmed", "followers_url": "https://api.github.c...
[]
closed
false
null
[]
null
[ "you can rebase from master to fix the CI error :)", "Awesome !" ]
1,590,760,010,000
1,590,764,320,000
1,590,764,241,000
CONTRIBUTOR
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/220", "html_url": "https://github.com/huggingface/datasets/pull/220", "diff_url": "https://github.com/huggingface/datasets/pull/220.diff", "patch_url": "https://github.com/huggingface/datasets/pull/220.patch", "merged_at": 1590764241000 }
Added Arabic Reading Comprehension Dataset (ARCD): https://arxiv.org/abs/1906.05394
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/220/reactions", "total_count": 1, "+1": 0, "-1": 0, "laugh": 0, "hooray": 1, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/220/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/219
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/219/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/219/comments
https://api.github.com/repos/huggingface/datasets/issues/219/events
https://github.com/huggingface/datasets/pull/219
627,235,893
MDExOlB1bGxSZXF1ZXN0NDI1MDc2NjQx
219
force mwparserfromhell as third party
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoest...
[]
closed
false
null
[]
null
[]
1,590,755,597,000
1,590,759,013,000
1,590,759,012,000
MEMBER
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/219", "html_url": "https://github.com/huggingface/datasets/pull/219", "diff_url": "https://github.com/huggingface/datasets/pull/219.diff", "patch_url": "https://github.com/huggingface/datasets/pull/219.patch", "merged_at": 1590759012000 }
This should fix your env because you had `mwparserfromhell` as a first party for `isort` @patrickvonplaten
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/219/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/219/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/218
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/218/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/218/comments
https://api.github.com/repos/huggingface/datasets/issues/218/events
https://github.com/huggingface/datasets/pull/218
627,173,407
MDExOlB1bGxSZXF1ZXN0NDI1MDI2NzEz
218
Add Natural Questions and C4 scripts
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoest...
[]
closed
false
null
[]
null
[]
1,590,748,830,000
1,590,755,461,000
1,590,755,460,000
MEMBER
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/218", "html_url": "https://github.com/huggingface/datasets/pull/218", "diff_url": "https://github.com/huggingface/datasets/pull/218.diff", "patch_url": "https://github.com/huggingface/datasets/pull/218.patch", "merged_at": 1590755460000 }
Scripts are ready! However, they are not processed nor directly available from GCP yet.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/218/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/218/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/217
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/217/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/217/comments
https://api.github.com/repos/huggingface/datasets/issues/217/events
https://github.com/huggingface/datasets/issues/217
627,128,403
MDU6SXNzdWU2MjcxMjg0MDM=
217
Multi-task dataset mixing
{ "login": "ghomasHudson", "id": 13795113, "node_id": "MDQ6VXNlcjEzNzk1MTEz", "avatar_url": "https://avatars.githubusercontent.com/u/13795113?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ghomasHudson", "html_url": "https://github.com/ghomasHudson", "followers_url": "https://api.github.c...
[ { "id": 1935892871, "node_id": "MDU6TGFiZWwxOTM1ODkyODcx", "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement", "name": "enhancement", "color": "a2eeef", "default": true, "description": "New feature or request" }, { "id": 2067400324, "node_id": "MDU6...
open
false
null
[]
null
[ "I like this feature! I think the first question we should decide on is how to convert all datasets into the same format. In T5, the authors decided to format every dataset into a text-to-text format. If the dataset had \"multiple\" inputs like MNLI, the inputs were concatenated. So in MNLI the input:\r\n\r\n> - **...
1,590,744,146,000
1,603,701,993,000
null
CONTRIBUTOR
null
null
null
It seems like many of the best performing models on the GLUE benchmark make some use of multitask learning (simultaneous training on multiple tasks). The [T5 paper](https://arxiv.org/pdf/1910.10683.pdf) highlights multiple ways of mixing the tasks together during finetuning: - **Examples-proportional mixing** - sam...
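To make examples-proportional mixing concrete, here is a small, library-agnostic sketch that samples each example's task with probability proportional to its (optionally capped) dataset size. The dataset names, the cap parameter, and the sampling loop are illustrative, not taken from T5's code:

```python
import random

def examples_proportional_mixer(datasets, cap=None, seed=0):
    """Yield (task, example) pairs, picking each task with probability
    proportional to its (optionally capped) dataset size."""
    rng = random.Random(seed)
    names = list(datasets)
    weights = [min(len(datasets[n]), cap) if cap else len(datasets[n]) for n in names]
    while True:
        name = rng.choices(names, weights=weights, k=1)[0]
        yield name, datasets[name][rng.randrange(len(datasets[name]))]

# Toy usage: "mnli" is sampled ~10x more often than "rte".
mixer = examples_proportional_mixer({"mnli": list(range(1000)), "rte": list(range(100))})
task, example = next(mixer)
```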
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/217/reactions", "total_count": 9, "+1": 9, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/217/timeline
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/216
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/216/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/216/comments
https://api.github.com/repos/huggingface/datasets/issues/216/events
https://github.com/huggingface/datasets/issues/216
626,896,890
MDU6SXNzdWU2MjY4OTY4OTA=
216
❓ How to get ROUGE-2 with the ROUGE metric ?
{ "login": "astariul", "id": 43774355, "node_id": "MDQ6VXNlcjQzNzc0MzU1", "avatar_url": "https://avatars.githubusercontent.com/u/43774355?v=4", "gravatar_id": "", "url": "https://api.github.com/users/astariul", "html_url": "https://github.com/astariul", "followers_url": "https://api.github.com/users/ast...
[]
closed
false
null
[]
null
[ "ROUGE-1 and ROUGE-L shouldn't return the same thing. This is weird", "For the rouge2 metric you can do\r\n\r\n```python\r\nrouge = nlp.load_metric('rouge')\r\nwith open(\"pred.txt\") as p, open(\"ref.txt\") as g:\r\n for lp, lg in zip(p, g):\r\n rouge.add(lp, lg)\r\nscore = rouge.compute(rouge_types=[\...
1,590,709,652,000
1,590,969,875,000
1,590,969,875,000
NONE
null
null
null
I'm trying to use the ROUGE metric, but I don't know how to get the ROUGE-2 score. --- I compute scores with: ```python import nlp rouge = nlp.load_metric('rouge') with open("pred.txt") as p, open("ref.txt") as g: for lp, lg in zip(p, g): rouge.add([lp], [lg]) score = rouge.compute() ``` ...
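As the reply in the comments shows, `compute` accepts a `rouge_types` argument. A minimal sketch based on that answer (the `.mid` aggregate attribute is assumed from the metric's low/mid/high output format):

```python
import nlp

rouge = nlp.load_metric("rouge")
rouge.add("the cat sat on the mat", "the cat was on the mat")  # prediction, reference
score = rouge.compute(rouge_types=["rouge1", "rouge2", "rougeL"])
print(score["rouge2"].mid)  # mid precision/recall/F1 for ROUGE-2
```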
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/216/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/216/timeline
null
completed
false
https://api.github.com/repos/huggingface/datasets/issues/215
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/215/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/215/comments
https://api.github.com/repos/huggingface/datasets/issues/215/events
https://github.com/huggingface/datasets/issues/215
626,867,879
MDU6SXNzdWU2MjY4Njc4Nzk=
215
NonMatchingSplitsSizesError when loading blog_authorship_corpus
{ "login": "cedricconol", "id": 52105365, "node_id": "MDQ6VXNlcjUyMTA1MzY1", "avatar_url": "https://avatars.githubusercontent.com/u/52105365?v=4", "gravatar_id": "", "url": "https://api.github.com/users/cedricconol", "html_url": "https://github.com/cedricconol", "followers_url": "https://api.github.com/...
[ { "id": 2067388877, "node_id": "MDU6TGFiZWwyMDY3Mzg4ODc3", "url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20bug", "name": "dataset bug", "color": "2edb81", "default": false, "description": "A bug in a dataset script provided in the library" } ]
closed
false
null
[]
null
[ "I just ran it on colab and got this\r\n```\r\n[{'expected': SplitInfo(name='train', num_bytes=610252351, num_examples=532812,\r\ndataset_name='blog_authorship_corpus'), 'recorded': SplitInfo(name='train',\r\nnum_bytes=611607465, num_examples=533285, dataset_name='blog_authorship_corpus')},\r\n{'expected': SplitInf...
1,590,706,519,000
1,644,498,345,000
1,644,498,345,000
NONE
null
null
null
Getting this error when I run `nlp.load_dataset('blog_authorship_corpus')`: ``` raise NonMatchingSplitsSizesError(str(bad_splits)) nlp.utils.info_utils.NonMatchingSplitsSizesError: [{'expected': SplitInfo(name='train', num_bytes=610252351, num_examples=532812, dataset_name='blog_authorship_corpus'), 'recorded...
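The recorded split sizes no longer match the downloaded data. A common workaround, assuming this version of `load_dataset` exposes the `ignore_verifications` flag, is to skip the verification step (a sketch; the underlying size mismatch remains):

```python
import nlp

# Sketch: bypass the split-size/checksum verification that raises
# NonMatchingSplitsSizesError; assumes `ignore_verifications` is available.
dataset = nlp.load_dataset("blog_authorship_corpus", ignore_verifications=True)
```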
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/215/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/215/timeline
null
completed
false
https://api.github.com/repos/huggingface/datasets/issues/214
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/214/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/214/comments
https://api.github.com/repos/huggingface/datasets/issues/214/events
https://github.com/huggingface/datasets/pull/214
626,641,549
MDExOlB1bGxSZXF1ZXN0NDI0NTk1NjIx
214
[arrow_dataset.py] add new filter function
{ "login": "patrickvonplaten", "id": 23423619, "node_id": "MDQ6VXNlcjIzNDIzNjE5", "avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4", "gravatar_id": "", "url": "https://api.github.com/users/patrickvonplaten", "html_url": "https://github.com/patrickvonplaten", "followers_url": "https://...
[]
closed
false
null
[]
null
[ "I agree that a `.filter` method would be VERY useful and appreciated. I'm not a big fan of using `flatten_nested` as it completely breaks down the structure of the example and it may create bugs. Right now I think it may not work for nested structures. Maybe there's a simpler way that we've not figured out yet.", ...
1,590,682,900,000
1,590,752,609,000
1,590,751,940,000
MEMBER
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/214", "html_url": "https://github.com/huggingface/datasets/pull/214", "diff_url": "https://github.com/huggingface/datasets/pull/214.diff", "patch_url": "https://github.com/huggingface/datasets/pull/214.patch", "merged_at": 1590751940000 }
The `.map()` function is super useful, but can IMO be a bit tedious when filtering out certain examples. I think filtering out examples is also a very common operation people would like to perform on datasets. This PR is a proposal to add a `.filter()` function in the same spirit as the `.map()` function. Here is a ...
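A minimal usage sketch of the proposed API (the predicate takes a single example, in the same spirit as `.map`, per the PR description; the length threshold is illustrative):

```python
import nlp

ds = nlp.load_dataset("squad", split="validation")
# Keep only examples with a short context; `.filter` takes a predicate
# over one example and returns a new filtered dataset.
short = ds.filter(lambda example: len(example["context"]) < 500)
print(len(ds), "->", len(short))
```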
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/214/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/214/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/213
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/213/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/213/comments
https://api.github.com/repos/huggingface/datasets/issues/213/events
https://github.com/huggingface/datasets/pull/213
626,587,995
MDExOlB1bGxSZXF1ZXN0NDI0NTUxODE3
213
better message if missing beam options
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoest...
[]
closed
false
null
[]
null
[]
1,590,678,417,000
1,590,745,877,000
1,590,745,876,000
MEMBER
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/213", "html_url": "https://github.com/huggingface/datasets/pull/213", "diff_url": "https://github.com/huggingface/datasets/pull/213.diff", "patch_url": "https://github.com/huggingface/datasets/pull/213.patch", "merged_at": 1590745876000 }
WDYT @yjernite? For example: ```python dataset = nlp.load_dataset('wikipedia', '20200501.aa') ``` Raises: ``` MissingBeamOptions: Trying to generate a dataset using Apache Beam, yet no Beam Runner or PipelineOptions() has been provided in `load_dataset` or in the builder arguments. For big datasets it has to ru...
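For reference, the local-runner escape hatch the message points to looks like this, mirroring the `beam_runner` usage shown in the C4 issue earlier in this dump:

```python
import nlp

# Process a small wikipedia config locally with Apache Beam's DirectRunner,
# as the MissingBeamOptions message suggests for small datasets.
dataset = nlp.load_dataset("wikipedia", "20200501.aa", beam_runner="DirectRunner")
```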
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/213/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/213/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/212
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/212/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/212/comments
https://api.github.com/repos/huggingface/datasets/issues/212/events
https://github.com/huggingface/datasets/pull/212
626,580,198
MDExOlB1bGxSZXF1ZXN0NDI0NTQ1NjAy
212
have 'add' and 'add_batch' for metrics
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoest...
[]
closed
false
null
[]
null
[]
1,590,677,807,000
1,590,748,865,000
1,590,748,864,000
MEMBER
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/212", "html_url": "https://github.com/huggingface/datasets/pull/212", "diff_url": "https://github.com/huggingface/datasets/pull/212.diff", "patch_url": "https://github.com/huggingface/datasets/pull/212.patch", "merged_at": 1590748864000 }
This should fix #116 Previously the `.add` method of metrics expected a batch of examples. Now `.add` expects one prediction/reference and `.add_batch` expects a batch. I think it is more coherent with the way the ArrowWriter works.
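A minimal sketch of the new split between the two methods (the keyword names and the glue/mrpc metric choice are assumptions for illustration, not from the PR):

```python
import nlp

metric = nlp.load_metric("glue", "mrpc")
metric.add(prediction=1, reference=1)                     # one example at a time
metric.add_batch(predictions=[0, 1], references=[0, 0])   # a whole batch
print(metric.compute())  # scores over everything added so far
```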
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/212/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/212/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/211
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/211/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/211/comments
https://api.github.com/repos/huggingface/datasets/issues/211/events
https://github.com/huggingface/datasets/issues/211
626,565,994
MDU6SXNzdWU2MjY1NjU5OTQ=
211
[Arrow writer, Trivia_qa] Could not convert TagMe with type str: converting to null type
{ "login": "patrickvonplaten", "id": 23423619, "node_id": "MDQ6VXNlcjIzNDIzNjE5", "avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4", "gravatar_id": "", "url": "https://api.github.com/users/patrickvonplaten", "html_url": "https://github.com/patrickvonplaten", "followers_url": "https://...
[ { "id": 1935892871, "node_id": "MDU6TGFiZWwxOTM1ODkyODcx", "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement", "name": "enhancement", "color": "a2eeef", "default": true, "description": "New feature or request" } ]
closed
false
{ "login": "thomwolf", "id": 7353373, "node_id": "MDQ6VXNlcjczNTMzNzM=", "avatar_url": "https://avatars.githubusercontent.com/u/7353373?v=4", "gravatar_id": "", "url": "https://api.github.com/users/thomwolf", "html_url": "https://github.com/thomwolf", "followers_url": "https://api.github.com/users/thomw...
[ { "login": "thomwolf", "id": 7353373, "node_id": "MDQ6VXNlcjczNTMzNzM=", "avatar_url": "https://avatars.githubusercontent.com/u/7353373?v=4", "gravatar_id": "", "url": "https://api.github.com/users/thomwolf", "html_url": "https://github.com/thomwolf", "followers_url": "https://api.gi...
null
[ "Here the full error trace:\r\n\r\n```\r\nArrowInvalid Traceback (most recent call last)\r\n<ipython-input-1-7aaf3f011358> in <module>\r\n 1 import nlp\r\n 2 ds = nlp.load_dataset(\"trivia_qa\", \"rc\", split=\"validation[:1%]\") # this might take 2.3 min to download but it's...
1,590,676,694,000
1,595,499,316,000
1,595,499,316,000
MEMBER
null
null
null
Running the following code: ``` import nlp ds = nlp.load_dataset("trivia_qa", "rc", split="validation[:1%]") # this might take 2.3 min to download but it's cached afterwards... ds.map(lambda x: x, load_from_cache_file=False) ``` triggers an `ArrowInvalid: Could not convert TagMe with type str: converting to n...
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/211/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/211/timeline
null
completed
false
https://api.github.com/repos/huggingface/datasets/issues/210
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/210/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/210/comments
https://api.github.com/repos/huggingface/datasets/issues/210/events
https://github.com/huggingface/datasets/pull/210
626,504,243
MDExOlB1bGxSZXF1ZXN0NDI0NDgyNDgz
210
fix xnli metric kwargs description
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoest...
[]
closed
false
null
[]
null
[]
1,590,672,104,000
1,590,672,131,000
1,590,672,130,000
MEMBER
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/210", "html_url": "https://github.com/huggingface/datasets/pull/210", "diff_url": "https://github.com/huggingface/datasets/pull/210.diff", "patch_url": "https://github.com/huggingface/datasets/pull/210.patch", "merged_at": 1590672130000 }
The text was wrong, as noticed in #202.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/210/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/210/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/209
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/209/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/209/comments
https://api.github.com/repos/huggingface/datasets/issues/209/events
https://github.com/huggingface/datasets/pull/209
626,405,849
MDExOlB1bGxSZXF1ZXN0NDI0NDAwOTc4
209
Add a Google Drive exception for small files
{ "login": "airKlizz", "id": 25703835, "node_id": "MDQ6VXNlcjI1NzAzODM1", "avatar_url": "https://avatars.githubusercontent.com/u/25703835?v=4", "gravatar_id": "", "url": "https://api.github.com/users/airKlizz", "html_url": "https://github.com/airKlizz", "followers_url": "https://api.github.com/users/air...
[]
closed
false
null
[]
null
[ "Can you run the style formatting tools to pass the code quality test?\r\n\r\nYou can find all the details in CONTRIBUTING.md: https://github.com/huggingface/nlp/blob/master/CONTRIBUTING.md#how-to-contribute-to-nlp", "Nice ! ", "``make style`` done! Thanks for the approvals." ]
1,590,662,417,000
1,590,678,904,000
1,590,678,904,000
CONTRIBUTOR
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/209", "html_url": "https://github.com/huggingface/datasets/pull/209", "diff_url": "https://github.com/huggingface/datasets/pull/209.diff", "patch_url": "https://github.com/huggingface/datasets/pull/209.patch", "merged_at": 1590678904000 }
I tried to use the ``nlp`` library to load personal datasets. I mainly copy-pasted the code for the ``multi-news`` dataset because my files are stored on Google Drive. One of my datasets is small (< 25 MB) so it can be verified by Drive without asking the user for authorization. This makes the download start directly...
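For context, a rough standalone sketch of the Drive behavior being handled (not the PR's actual code): large files return a virus-scan warning page with a confirmation cookie, while small files start downloading immediately, so the token may be absent:

```python
import requests

def download_from_drive(file_id, path):
    url = "https://docs.google.com/uc?export=download"
    session = requests.Session()
    response = session.get(url, params={"id": file_id}, stream=True)
    # Small files (< ~25 MB) skip the virus-scan warning, so no cookie is set.
    token = next((v for k, v in response.cookies.items()
                  if k.startswith("download_warning")), None)
    if token is not None:
        response = session.get(url, params={"id": file_id, "confirm": token},
                               stream=True)
    with open(path, "wb") as f:
        for chunk in response.iter_content(chunk_size=32768):
            f.write(chunk)
```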
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/209/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/209/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/208
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/208/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/208/comments
https://api.github.com/repos/huggingface/datasets/issues/208/events
https://github.com/huggingface/datasets/pull/208
626,398,519
MDExOlB1bGxSZXF1ZXN0NDI0Mzk0ODIx
208
[Dummy data] insert config name instead of config
{ "login": "patrickvonplaten", "id": 23423619, "node_id": "MDQ6VXNlcjIzNDIzNjE5", "avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4", "gravatar_id": "", "url": "https://api.github.com/users/patrickvonplaten", "html_url": "https://github.com/patrickvonplaten", "followers_url": "https://...
[]
closed
false
null
[]
null
[]
1,590,661,699,000
1,590,670,081,000
1,590,670,080,000
MEMBER
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/208", "html_url": "https://github.com/huggingface/datasets/pull/208", "diff_url": "https://github.com/huggingface/datasets/pull/208.diff", "patch_url": "https://github.com/huggingface/datasets/pull/208.patch", "merged_at": 1590670080000 }
Thanks @yjernite for letting me know. In the dummy data command, the config name should be passed to the dataset builder, not the config itself. Also, @lhoestq: I think this fixes a small import bug introduced by the beam command.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/208/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/208/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/207
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/207/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/207/comments
https://api.github.com/repos/huggingface/datasets/issues/207/events
https://github.com/huggingface/datasets/issues/207
625,932,200
MDU6SXNzdWU2MjU5MzIyMDA=
207
Remove test set from NLP viewer
{ "login": "chrisdonahue", "id": 748399, "node_id": "MDQ6VXNlcjc0ODM5OQ==", "avatar_url": "https://avatars.githubusercontent.com/u/748399?v=4", "gravatar_id": "", "url": "https://api.github.com/users/chrisdonahue", "html_url": "https://github.com/chrisdonahue", "followers_url": "https://api.github.com/u...
[ { "id": 2107841032, "node_id": "MDU6TGFiZWwyMTA3ODQxMDMy", "url": "https://api.github.com/repos/huggingface/datasets/labels/nlp-viewer", "name": "nlp-viewer", "color": "94203D", "default": false, "description": "" } ]
closed
false
null
[]
null
[ "~is the viewer also open source?~\r\n[is a streamlit app!](https://docs.streamlit.io/en/latest/getting_started.html)", "Appears that [two thirds of those polled on Twitter](https://twitter.com/srush_nlp/status/1265734497632477185) are in favor of _some_ mechanism for averting eyeballs from the test data.", "We...
1,590,604,327,000
1,644,499,065,000
1,644,499,065,000
NONE
null
null
null
While the new [NLP viewer](https://huggingface.co/nlp/viewer/) is a great tool, I think it would be best to outright remove the option of looking at the test sets. At the very least, a warning should be displayed to users before showing the test set. Newcomers to the field might not be aware of best practices, and smal...
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/207/reactions", "total_count": 3, "+1": 3, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/207/timeline
null
completed
false
https://api.github.com/repos/huggingface/datasets/issues/206
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/206/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/206/comments
https://api.github.com/repos/huggingface/datasets/issues/206/events
https://github.com/huggingface/datasets/issues/206
625,842,989
MDU6SXNzdWU2MjU4NDI5ODk=
206
[Question] Combine 2 datasets which have the same columns
{ "login": "airKlizz", "id": 25703835, "node_id": "MDQ6VXNlcjI1NzAzODM1", "avatar_url": "https://avatars.githubusercontent.com/u/25703835?v=4", "gravatar_id": "", "url": "https://api.github.com/users/airKlizz", "html_url": "https://github.com/airKlizz", "followers_url": "https://api.github.com/users/air...
[]
closed
false
null
[]
null
[ "We are thinking about ways to combine datasets for T5 in #217, feel free to share your thoughts about this.", "Ok great! I will look at it. Thanks" ]
1,590,596,752,000
1,591,780,274,000
1,591,780,274,000
CONTRIBUTOR
null
null
null
Hi, I am using ``nlp`` to load personal datasets. I created multilingual summarization datasets based on wikinews. I have one dataset for English and one for German (French is nearly ready as well). I want to keep these datasets independent because they need different pre-processing (add different task-...
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/206/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/206/timeline
null
completed
false
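For the question in #206, later versions of the library expose `concatenate_datasets` for exactly this case. A minimal sketch, assuming both datasets already share an identical schema; the toy column values are placeholders for the English and German summarization sets mentioned above.

```python
from datasets import Dataset, concatenate_datasets

# Two toy datasets with the same columns, standing in for the
# independently pre-processed English and German datasets.
english = Dataset.from_dict({"document": ["doc en"], "summary": ["sum en"]})
german = Dataset.from_dict({"document": ["doc de"], "summary": ["sum de"]})

# Concatenation requires matching features; the result has 2 rows here.
combined = concatenate_datasets([english, german])
print(combined.num_rows, combined.column_names)
```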
https://api.github.com/repos/huggingface/datasets/issues/205
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/205/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/205/comments
https://api.github.com/repos/huggingface/datasets/issues/205/events
https://github.com/huggingface/datasets/pull/205
625,839,335
MDExOlB1bGxSZXF1ZXN0NDIzOTY2ODE1
205
Better arrow dataset iter
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoest...
[]
closed
false
null
[]
null
[]
1,590,596,421,000
1,590,597,598,000
1,590,597,596,000
MEMBER
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/205", "html_url": "https://github.com/huggingface/datasets/pull/205", "diff_url": "https://github.com/huggingface/datasets/pull/205.diff", "patch_url": "https://github.com/huggingface/datasets/pull/205.patch", "merged_at": 1590597596000 }
I tried to play around with `tf.data.Dataset.from_generator` and found out that the `__iter__` we have for `nlp.arrow_dataset.Dataset` ignores the format that has been set (torch or tensorflow). With these changes I should be able to come up with a `tf.data.Dataset` that uses lazy loading, as asked in #193. A minimal sketch of that pattern follows this record.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/205/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/205/timeline
null
null
true
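A minimal sketch of the lazy-loading pattern referenced in #205 and #193: wrapping an iterable of examples in `tf.data.Dataset.from_generator`. The generator and shapes are illustrative stand-ins, and `output_signature` is the modern TensorFlow API; the actual dataset `__iter__` lives in the library.

```python
import tensorflow as tf

def example_generator():
    # Stand-in for iterating over an Arrow-backed dataset; each example
    # is yielded lazily instead of being materialized up front.
    for i in range(3):
        yield {"input_ids": [i, i + 1, i + 2], "label": i % 2}

tf_dataset = tf.data.Dataset.from_generator(
    example_generator,
    output_signature={
        "input_ids": tf.TensorSpec(shape=(3,), dtype=tf.int32),
        "label": tf.TensorSpec(shape=(), dtype=tf.int32),
    },
)

# Batching works as usual; examples are pulled from the generator on demand.
for batch in tf_dataset.batch(2):
    print(batch["input_ids"].shape)
```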
https://api.github.com/repos/huggingface/datasets/issues/204
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/204/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/204/comments
https://api.github.com/repos/huggingface/datasets/issues/204/events
https://github.com/huggingface/datasets/pull/204
625,655,849
MDExOlB1bGxSZXF1ZXN0NDIzODE5MTQw
204
Add Dataflow support + Wikipedia + Wiki40b
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoest...
[]
closed
false
null
[]
null
[]
1,590,582,769,000
1,590,653,435,000
1,590,653,434,000
MEMBER
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/204", "html_url": "https://github.com/huggingface/datasets/pull/204", "diff_url": "https://github.com/huggingface/datasets/pull/204.diff", "patch_url": "https://github.com/huggingface/datasets/pull/204.patch", "merged_at": 1590653434000 }
# Add Dataflow support + Wikipedia + Wiki40b ## Support dataset processing with Apache Beam Some datasets are too big to be processed on a single machine, for example: wikipedia, wiki40b, etc. Apache Beam makes it possible to process datasets on many execution engines like Dataflow, Spark, Flink, etc. To process such da...
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/204/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/204/timeline
null
null
true
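A generic Apache Beam pipeline sketch (not the library's builder API from #204) showing the underlying idea: the same pipeline runs locally with the DirectRunner or on engines like Dataflow, Spark, or Flink just by swapping the runner option. The pipeline steps here are illustrative.

```python
import apache_beam as beam
from apache_beam.options.pipeline_options import PipelineOptions

# DirectRunner executes locally; pointing the runner option at
# Dataflow, Spark, Flink, etc. reuses the pipeline unchanged.
options = PipelineOptions(runner="DirectRunner")

with beam.Pipeline(options=options) as pipeline:
    (
        pipeline
        | "CreateArticles" >> beam.Create(["first article ", " second article"])
        | "CleanText" >> beam.Map(str.strip)
        | "PrintResult" >> beam.Map(print)
    )
```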
https://api.github.com/repos/huggingface/datasets/issues/203
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/203/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/203/comments
https://api.github.com/repos/huggingface/datasets/issues/203/events
https://github.com/huggingface/datasets/pull/203
625,515,488
MDExOlB1bGxSZXF1ZXN0NDIzNzEyMTQ3
203
Raise an error if no config name for datasets like glue
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoest...
[]
closed
false
null
[]
null
[]
1,590,570,238,000
1,590,597,639,000
1,590,597,638,000
MEMBER
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/203", "html_url": "https://github.com/huggingface/datasets/pull/203", "diff_url": "https://github.com/huggingface/datasets/pull/203.diff", "patch_url": "https://github.com/huggingface/datasets/pull/203.patch", "merged_at": 1590597638000 }
Some datasets like glue (see #130) and scientific_papers (see #197) have many configs. For example, glue has cola, sst2, mrpc, etc. Currently, if a user does `load_dataset('glue')`, then cola is loaded by default, which can be confusing. Instead, we should raise an error to let the user know that he has to p...
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/203/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/203/timeline
null
null
true
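A hedged sketch of the check described in #203; the helper function and error message below are illustrative, not the library's actual code path.

```python
def pick_config(config_name, available_configs):
    """Refuse to silently pick a default when a dataset has many configs."""
    if config_name is None and len(available_configs) > 1:
        raise ValueError(
            "Config name is missing. Please pick one among the available "
            f"configs: {sorted(available_configs)}"
        )
    return config_name or available_configs[0]

# Example: loading glue without a config should fail loudly.
try:
    pick_config(None, ["cola", "sst2", "mrpc"])
except ValueError as e:
    print(e)
```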
https://api.github.com/repos/huggingface/datasets/issues/202
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/202/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/202/comments
https://api.github.com/repos/huggingface/datasets/issues/202/events
https://github.com/huggingface/datasets/issues/202
625,493,983
MDU6SXNzdWU2MjU0OTM5ODM=
202
Mistaken `_KWARGS_DESCRIPTION` for XNLI metric
{ "login": "phiyodr", "id": 33572125, "node_id": "MDQ6VXNlcjMzNTcyMTI1", "avatar_url": "https://avatars.githubusercontent.com/u/33572125?v=4", "gravatar_id": "", "url": "https://api.github.com/users/phiyodr", "html_url": "https://github.com/phiyodr", "followers_url": "https://api.github.com/users/phiyod...
[]
closed
false
null
[]
null
[ "Indeed, good catch ! thanks\r\nFixing it right now" ]
1,590,568,482,000
1,590,672,156,000
1,590,672,156,000
NONE
null
null
null
Hi! The [`_KWARGS_DESCRIPTION`](https://github.com/huggingface/nlp/blob/7d0fa58641f3f462fb2861dcdd6ce7f0da3f6a56/metrics/xnli/xnli.py#L45) for the XNLI metric uses `Args` and `Returns` text from [BLEU](https://github.com/huggingface/nlp/blob/7d0fa58641f3f462fb2861dcdd6ce7f0da3f6a56/metrics/bleu/bleu.py#L58) metric: ...
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/202/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/202/timeline
null
completed
false
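For context on #202, metric modules document their inputs through a `_KWARGS_DESCRIPTION` docstring constant; a hedged sketch of what a corrected XNLI version might look like (the exact wording in the library may differ):

```python
# Illustrative only: the real constant lives in metrics/xnli/xnli.py and
# should describe XNLI's accuracy computation, not BLEU's n-gram inputs.
_KWARGS_DESCRIPTION = """
Computes XNLI score, i.e. the accuracy of predictions against references.
Args:
    predictions: list of predicted labels.
    references: list of ground-truth labels.
Returns:
    'accuracy': the proportion of predictions matching the references.
"""

print(_KWARGS_DESCRIPTION.strip())
```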
https://api.github.com/repos/huggingface/datasets/issues/201
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/201/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/201/comments
https://api.github.com/repos/huggingface/datasets/issues/201/events
https://github.com/huggingface/datasets/pull/201
625,235,430
MDExOlB1bGxSZXF1ZXN0NDIzNDkzNTMw
201
Fix typo in README
{ "login": "LysandreJik", "id": 30755778, "node_id": "MDQ6VXNlcjMwNzU1Nzc4", "avatar_url": "https://avatars.githubusercontent.com/u/30755778?v=4", "gravatar_id": "", "url": "https://api.github.com/users/LysandreJik", "html_url": "https://github.com/LysandreJik", "followers_url": "https://api.github.com/...
[]
closed
false
null
[]
null
[ "Amazing, @LysandreJik!", "Really did my best!" ]
1,590,531,501,000
1,590,536,431,000
1,590,534,056,000
MEMBER
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/201", "html_url": "https://github.com/huggingface/datasets/pull/201", "diff_url": "https://github.com/huggingface/datasets/pull/201.diff", "patch_url": "https://github.com/huggingface/datasets/pull/201.patch", "merged_at": 1590534056000 }
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/201/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/201/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/200
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/200/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/200/comments
https://api.github.com/repos/huggingface/datasets/issues/200/events
https://github.com/huggingface/datasets/pull/200
625,226,638
MDExOlB1bGxSZXF1ZXN0NDIzNDg2NTM0
200
[ArrowWriter] Set schema at first write example
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoest...
[]
closed
false
null
[]
null
[ "Good point!\r\n\r\nI guess we could add this to `write_batch` as well (before using `self._schema` in the first line of this method)?" ]
1,590,530,388,000
1,590,570,474,000
1,590,570,473,000
MEMBER
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/200", "html_url": "https://github.com/huggingface/datasets/pull/200", "diff_url": "https://github.com/huggingface/datasets/pull/200.diff", "patch_url": "https://github.com/huggingface/datasets/pull/200.patch", "merged_at": 1590570473000 }
Right now, if the schema is not specified when instantiating `ArrowWriter`, it can be set by the first `write_table`, for example (which calls `self._build_writer()` to do so). I noticed this was not done when the first example is added via `.write`, so I added it for consistency. A minimal sketch of the schema-inference pattern follows this record.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/200/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/200/timeline
null
null
true
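A minimal sketch of the schema-inference idea in #200, using plain pyarrow rather than the library's `ArrowWriter` (the `TinyWriter` class is a toy stand-in): if no schema was provided up front, infer it from the first example before anything is written.

```python
import pyarrow as pa

class TinyWriter:
    """Toy writer that infers its schema from the first written example."""

    def __init__(self, schema=None):
        self.schema = schema
        self.batches = []

    def write(self, example: dict) -> None:
        # Mirror of the fix: set the schema at the first `.write`
        # instead of only at the first `write_table`.
        if self.schema is None:
            self.schema = pa.Table.from_pydict(
                {k: [v] for k, v in example.items()}
            ).schema
        self.batches.append({k: [v] for k, v in example.items()})

writer = TinyWriter()
writer.write({"text": "hello", "label": 0})
print(writer.schema)
```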
https://api.github.com/repos/huggingface/datasets/issues/199
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/199/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/199/comments
https://api.github.com/repos/huggingface/datasets/issues/199/events
https://github.com/huggingface/datasets/pull/199
625,217,440
MDExOlB1bGxSZXF1ZXN0NDIzNDc4ODIx
199
Fix GermEval 2014 dataset infos
{ "login": "stefan-it", "id": 20651387, "node_id": "MDQ6VXNlcjIwNjUxMzg3", "avatar_url": "https://avatars.githubusercontent.com/u/20651387?v=4", "gravatar_id": "", "url": "https://api.github.com/users/stefan-it", "html_url": "https://github.com/stefan-it", "followers_url": "https://api.github.com/users/...
[]
closed
false
null
[]
null
[ "Hopefully. this also fixes the dataset view on https://huggingface.co/nlp/viewer/ :)", "Oh good catch ! This should fix it indeed" ]
1,590,529,304,000
1,590,529,824,000
1,590,529,824,000
CONTRIBUTOR
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/199", "html_url": "https://github.com/huggingface/datasets/pull/199", "diff_url": "https://github.com/huggingface/datasets/pull/199.diff", "patch_url": "https://github.com/huggingface/datasets/pull/199.patch", "merged_at": 1590529824000 }
Hi, this PR just removes the `dataset_info.json` file and adds a newly generated `dataset_infos.json` file.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/199/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/199/timeline
null
null
true