Column schema (name: type, with observed value ranges or distinct-value counts where available):
url: string (length 58 to 61)
repository_url: string (1 distinct value)
labels_url: string (length 72 to 75)
comments_url: string (length 67 to 70)
events_url: string (length 65 to 68)
html_url: string (length 46 to 51)
id: int64 (599M to 2.04B)
node_id: string (length 18 to 32)
number: int64 (1 to 6.5k)
title: string (length 1 to 290)
user: dict
labels: list
state: string (2 distinct values)
locked: bool (1 class)
assignee: dict
assignees: list
comments: list
created_at: timestamp[s]
updated_at: timestamp[s]
closed_at: timestamp[s]
author_association: string (3 distinct values)
active_lock_reason: null
draft: bool (2 classes)
pull_request: dict
body: string (length 0 to 228k)
reactions: dict
timeline_url: string (length 67 to 70)
performed_via_github_app: null
state_reason: string (3 distinct values)
is_pull_request: bool (2 classes)
https://api.github.com/repos/huggingface/datasets/issues/2100
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2100/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2100/comments
https://api.github.com/repos/huggingface/datasets/issues/2100/events
https://github.com/huggingface/datasets/pull/2100
838,574,631
MDExOlB1bGxSZXF1ZXN0NTk4NzMzOTM0
2,100
Fix deprecated warning message and docstring
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892861, "node_id": "MDU6TGFiZWwxOTM1ODkyODYx", "url": "https://api.github.com/repos/huggingface/datasets/labels/documentation", "name": "documentation", "color": "0075ca", "default": true, "description": "Improvements or additions to documentation" } ]
closed
false
null
[]
[ "I have a question: what about `dictionary_encode_column_`?\r\n- It is deprecated in Dataset, but it recommends using a non-existing method instead: `Dataset.dictionary_encode_column` does not exist.\r\n- It is NOT deprecated in DatasetDict.", "`dictionary_encode_column_ ` should be deprecated since it never work...
2021-03-23T10:27:52
2021-03-24T08:19:41
2021-03-23T18:03:49
MEMBER
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/2100", "html_url": "https://github.com/huggingface/datasets/pull/2100", "diff_url": "https://github.com/huggingface/datasets/pull/2100.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2100.patch", "merged_at": "2021-03-23T18:03:49" }
Fix deprecated warnings: - Use deprecated Sphinx directive in docstring - Fix format of deprecated message - Raise FutureWarning
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/2100/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/2100/timeline
null
null
true
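The PR above (#2100) describes a deprecation pattern: a Sphinx `deprecated` directive in the docstring plus a `FutureWarning` raised at call time. Below is a minimal sketch of that pattern; the method name, version number, and message wording are illustrative assumptions, not the PR's actual diff.

```python
import warnings


def rename_column_(self, original_column_name, new_column_name):
    """Rename a column in place.

    .. deprecated:: 1.4.0
        Use :meth:`Dataset.rename_column` instead.
    """
    # Emit a FutureWarning so callers see the deprecation at runtime as well as
    # in the rendered docs (method name and version above are illustrative only).
    warnings.warn(
        "rename_column_ is deprecated and will be removed in a future version. "
        "Use Dataset.rename_column instead.",
        FutureWarning,
    )
```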
https://api.github.com/repos/huggingface/datasets/issues/2099
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2099/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2099/comments
https://api.github.com/repos/huggingface/datasets/issues/2099/events
https://github.com/huggingface/datasets/issues/2099
838,523,819
MDU6SXNzdWU4Mzg1MjM4MTk=
2,099
load_from_disk takes a long time to load local dataset
{ "login": "samsontmr", "id": 15007950, "node_id": "MDQ6VXNlcjE1MDA3OTUw", "avatar_url": "https://avatars.githubusercontent.com/u/15007950?v=4", "gravatar_id": "", "url": "https://api.github.com/users/samsontmr", "html_url": "https://github.com/samsontmr", "followers_url": "https://api.github.com/users/samsontmr/followers", "following_url": "https://api.github.com/users/samsontmr/following{/other_user}", "gists_url": "https://api.github.com/users/samsontmr/gists{/gist_id}", "starred_url": "https://api.github.com/users/samsontmr/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/samsontmr/subscriptions", "organizations_url": "https://api.github.com/users/samsontmr/orgs", "repos_url": "https://api.github.com/users/samsontmr/repos", "events_url": "https://api.github.com/users/samsontmr/events{/privacy}", "received_events_url": "https://api.github.com/users/samsontmr/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Hi !\r\nCan you share more information about the features of your dataset ? You can get them by printing `my_dataset.features`\r\nCan you also share the code of your `map` function ?", "It is actually just the tokenized `wikipedia` dataset with `input_ids`, `attention_mask`, etc, with one extra column which is a...
2021-03-23T09:28:37
2021-03-23T17:12:16
2021-03-23T17:12:16
NONE
null
null
null
I have an extremely large tokenized dataset (24M examples) that loads in a few minutes. However, after adding a column similar to `input_ids` (basically a list of integers) and saving the dataset to disk, the load time goes to >1 hour. I've even tried using `np.uint8` after seeing #1985 but it doesn't seem to be helping (the total size seems to be smaller though). Does anyone know what could be the issue? Or does the casting of that column to `int8` need to happen in the function that writes the arrow table instead of in the `map` where I create the list of integers? Tagging @lhoestq since you seem to be working on these issues and PRs :)
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/2099/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/2099/timeline
null
completed
false
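The issue above (#2099) mentions trying `np.uint8` for the added list-of-integers column to shrink the saved Arrow file. Below is a minimal sketch of casting such a column to a smaller integer type before saving; the paths and the column name `doc_ids` are hypothetical, and this is not the reporter's actual pipeline.

```python
from datasets import Features, Sequence, Value, load_from_disk

ds = load_from_disk("path/to/tokenized_dataset")  # hypothetical path

# Cast the added list-of-integers column down to uint8 so the Arrow file
# written by save_to_disk stays smaller ("doc_ids" is a hypothetical name).
new_features = Features({**ds.features, "doc_ids": Sequence(Value("uint8"))})
ds = ds.cast(new_features)

ds.save_to_disk("path/to/tokenized_dataset_uint8")  # hypothetical path
```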
https://api.github.com/repos/huggingface/datasets/issues/2098
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2098/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2098/comments
https://api.github.com/repos/huggingface/datasets/issues/2098/events
https://github.com/huggingface/datasets/issues/2098
838,447,959
MDU6SXNzdWU4Mzg0NDc5NTk=
2,098
SQuAD version
{ "login": "h-peng17", "id": 39556019, "node_id": "MDQ6VXNlcjM5NTU2MDE5", "avatar_url": "https://avatars.githubusercontent.com/u/39556019?v=4", "gravatar_id": "", "url": "https://api.github.com/users/h-peng17", "html_url": "https://github.com/h-peng17", "followers_url": "https://api.github.com/users/h-peng17/followers", "following_url": "https://api.github.com/users/h-peng17/following{/other_user}", "gists_url": "https://api.github.com/users/h-peng17/gists{/gist_id}", "starred_url": "https://api.github.com/users/h-peng17/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/h-peng17/subscriptions", "organizations_url": "https://api.github.com/users/h-peng17/orgs", "repos_url": "https://api.github.com/users/h-peng17/repos", "events_url": "https://api.github.com/users/h-peng17/events{/privacy}", "received_events_url": "https://api.github.com/users/h-peng17/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Hi ! This is 1.1 as specified by the download urls here:\r\n\r\nhttps://github.com/huggingface/nlp/blob/349ac4398a3bcae6356f14c5754483383a60e8a4/datasets/squad/squad.py#L50-L55", "Got it. Thank you~" ]
2021-03-23T07:47:54
2021-03-26T09:48:54
2021-03-26T09:48:54
NONE
null
null
null
Hi~ I want to train on the SQuAD dataset. What version of SQuAD is it? Is it 1.1 or 1.0? I'm new to QA and I couldn't find any description about it.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/2098/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/2098/timeline
null
completed
false
https://api.github.com/repos/huggingface/datasets/issues/2097
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2097/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2097/comments
https://api.github.com/repos/huggingface/datasets/issues/2097/events
https://github.com/huggingface/datasets/pull/2097
838,105,289
MDExOlB1bGxSZXF1ZXN0NTk4MzM4MTA3
2,097
fixes issue #1110 by descending further if `obj["_type"]` is a dict
{ "login": "dcfidalgo", "id": 15979778, "node_id": "MDQ6VXNlcjE1OTc5Nzc4", "avatar_url": "https://avatars.githubusercontent.com/u/15979778?v=4", "gravatar_id": "", "url": "https://api.github.com/users/dcfidalgo", "html_url": "https://github.com/dcfidalgo", "followers_url": "https://api.github.com/users/dcfidalgo/followers", "following_url": "https://api.github.com/users/dcfidalgo/following{/other_user}", "gists_url": "https://api.github.com/users/dcfidalgo/gists{/gist_id}", "starred_url": "https://api.github.com/users/dcfidalgo/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/dcfidalgo/subscriptions", "organizations_url": "https://api.github.com/users/dcfidalgo/orgs", "repos_url": "https://api.github.com/users/dcfidalgo/repos", "events_url": "https://api.github.com/users/dcfidalgo/events{/privacy}", "received_events_url": "https://api.github.com/users/dcfidalgo/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[]
2021-03-22T21:00:55
2021-03-22T21:01:11
2021-03-22T21:01:11
CONTRIBUTOR
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/2097", "html_url": "https://github.com/huggingface/datasets/pull/2097", "diff_url": "https://github.com/huggingface/datasets/pull/2097.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2097.patch", "merged_at": null }
Check metrics
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/2097/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/2097/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/2096
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2096/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2096/comments
https://api.github.com/repos/huggingface/datasets/issues/2096/events
https://github.com/huggingface/datasets/issues/2096
838,038,379
MDU6SXNzdWU4MzgwMzgzNzk=
2,096
CoNLL 2003 dataset not including German
{ "login": "rxian", "id": 8406802, "node_id": "MDQ6VXNlcjg0MDY4MDI=", "avatar_url": "https://avatars.githubusercontent.com/u/8406802?v=4", "gravatar_id": "", "url": "https://api.github.com/users/rxian", "html_url": "https://github.com/rxian", "followers_url": "https://api.github.com/users/rxian/followers", "following_url": "https://api.github.com/users/rxian/following{/other_user}", "gists_url": "https://api.github.com/users/rxian/gists{/gist_id}", "starred_url": "https://api.github.com/users/rxian/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/rxian/subscriptions", "organizations_url": "https://api.github.com/users/rxian/orgs", "repos_url": "https://api.github.com/users/rxian/repos", "events_url": "https://api.github.com/users/rxian/events{/privacy}", "received_events_url": "https://api.github.com/users/rxian/received_events", "type": "User", "site_admin": false }
[ { "id": 2067376369, "node_id": "MDU6TGFiZWwyMDY3Mzc2MzY5", "url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20request", "name": "dataset request", "color": "e99695", "default": false, "description": "Requesting to add a new dataset" } ]
closed
false
null
[]
[ "Hello. I've been looking for information about German Conll2003 and found your question. Official site (https://www.clips.uantwerpen.be/conll2003/ner/) mentions that organizers provide only annotation. German texts (ECI Multilingual Text Corpus) are not freely available and can be ordered from the Linguistic Data ...
2021-03-22T19:23:56
2023-07-25T16:49:07
2023-07-25T16:49:07
NONE
null
null
null
Hello, thanks for all the work on developing and maintaining this amazing platform, which I am enjoying working with! I was wondering if there is a reason why the German CoNLL 2003 dataset is not included in the [repository](https://github.com/huggingface/datasets/tree/master/datasets/conll2003), since a copy of it can be found in some places on the internet such as GitHub? I could help with adding the German data to the hub, unless there are some copyright issues that I am unaware of... This is considering that many works use the union of the CoNLL 2002 and 2003 datasets for comparing cross-lingual NER transfer performance in `en`, `de`, `es`, and `nl`. E.g., [XLM-R](https://www.aclweb.org/anthology/2020.acl-main.747.pdf). ## Adding a Dataset - **Name:** CoNLL 2003 German - **Paper:** https://www.aclweb.org/anthology/W03-0419/ - **Data:** https://github.com/huggingface/datasets/tree/master/datasets/conll2003
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/2096/reactions", "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/2096/timeline
null
completed
false
https://api.github.com/repos/huggingface/datasets/issues/2093
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2093/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2093/comments
https://api.github.com/repos/huggingface/datasets/issues/2093/events
https://github.com/huggingface/datasets/pull/2093
837,209,211
MDExOlB1bGxSZXF1ZXN0NTk3NTgyNjUx
2,093
Fix: Allows a feature to be named "_type"
{ "login": "dcfidalgo", "id": 15979778, "node_id": "MDQ6VXNlcjE1OTc5Nzc4", "avatar_url": "https://avatars.githubusercontent.com/u/15979778?v=4", "gravatar_id": "", "url": "https://api.github.com/users/dcfidalgo", "html_url": "https://github.com/dcfidalgo", "followers_url": "https://api.github.com/users/dcfidalgo/followers", "following_url": "https://api.github.com/users/dcfidalgo/following{/other_user}", "gists_url": "https://api.github.com/users/dcfidalgo/gists{/gist_id}", "starred_url": "https://api.github.com/users/dcfidalgo/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/dcfidalgo/subscriptions", "organizations_url": "https://api.github.com/users/dcfidalgo/orgs", "repos_url": "https://api.github.com/users/dcfidalgo/repos", "events_url": "https://api.github.com/users/dcfidalgo/events{/privacy}", "received_events_url": "https://api.github.com/users/dcfidalgo/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Nice thank you !\r\nThis looks like a pretty simple yet effective fix ;)\r\nCould you just add a test in `test_features.py` to make sure that you can create `features` with a `_type` field and that it is possible to convert it as a dict and reload it ?\r\n```python\r\nfrom datasets import Features, Value\r\n\r\n# ...
2021-03-21T23:21:57
2021-03-25T14:35:54
2021-03-25T14:35:54
CONTRIBUTOR
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/2093", "html_url": "https://github.com/huggingface/datasets/pull/2093", "diff_url": "https://github.com/huggingface/datasets/pull/2093.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2093.patch", "merged_at": "2021-03-25T14:35:54" }
This PR tries to fix issue #1110. Sorry for taking so long to come back to this. It's a simple fix, but I am not sure if it works for all possible types of `obj`. Let me know what you think @lhoestq
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/2093/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/2093/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/2092
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2092/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2092/comments
https://api.github.com/repos/huggingface/datasets/issues/2092/events
https://github.com/huggingface/datasets/issues/2092
836,984,043
MDU6SXNzdWU4MzY5ODQwNDM=
2,092
How to disable making arrow tables in load_dataset ?
{ "login": "Jeevesh8", "id": 48825663, "node_id": "MDQ6VXNlcjQ4ODI1NjYz", "avatar_url": "https://avatars.githubusercontent.com/u/48825663?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Jeevesh8", "html_url": "https://github.com/Jeevesh8", "followers_url": "https://api.github.com/users/Jeevesh8/followers", "following_url": "https://api.github.com/users/Jeevesh8/following{/other_user}", "gists_url": "https://api.github.com/users/Jeevesh8/gists{/gist_id}", "starred_url": "https://api.github.com/users/Jeevesh8/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Jeevesh8/subscriptions", "organizations_url": "https://api.github.com/users/Jeevesh8/orgs", "repos_url": "https://api.github.com/users/Jeevesh8/repos", "events_url": "https://api.github.com/users/Jeevesh8/events{/privacy}", "received_events_url": "https://api.github.com/users/Jeevesh8/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Hi ! We plan to add streaming features in the future.\r\n\r\nThis should allow to load a dataset instantaneously without generating the arrow table. The trade-off is that accessing examples from a streaming dataset must be done in an iterative way, and with an additional (but hopefully minor) overhead.\r\nWhat do ...
2021-03-21T04:50:07
2022-06-01T16:49:52
2022-06-01T16:49:52
NONE
null
null
null
Is there a way to disable the construction of arrow tables, or to build them on the fly as the dataset is being used?
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/2092/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/2092/timeline
null
completed
false
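The maintainer reply quoted above (#2092) points to streaming as the way to avoid materializing the Arrow table up front, at the cost of iterative access. Below is a small sketch of that access pattern using the streaming mode that later shipped in the library; the dataset name and config are only examples.

```python
from datasets import load_dataset

# streaming=True returns an IterableDataset: examples are read on the fly
# instead of first being converted into an Arrow table on disk.
ds = load_dataset("oscar", "unshuffled_deduplicated_en", split="train", streaming=True)

for example in ds:
    print(example["text"][:80])
    break  # examples must be consumed iteratively rather than indexed randomly
```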
https://api.github.com/repos/huggingface/datasets/issues/2091
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2091/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2091/comments
https://api.github.com/repos/huggingface/datasets/issues/2091/events
https://github.com/huggingface/datasets/pull/2091
836,831,403
MDExOlB1bGxSZXF1ZXN0NTk3Mjk4ODI3
2,091
Fix copy snippet in docs
{ "login": "mariosasko", "id": 47462742, "node_id": "MDQ6VXNlcjQ3NDYyNzQy", "avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mariosasko", "html_url": "https://github.com/mariosasko", "followers_url": "https://api.github.com/users/mariosasko/followers", "following_url": "https://api.github.com/users/mariosasko/following{/other_user}", "gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}", "starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions", "organizations_url": "https://api.github.com/users/mariosasko/orgs", "repos_url": "https://api.github.com/users/mariosasko/repos", "events_url": "https://api.github.com/users/mariosasko/events{/privacy}", "received_events_url": "https://api.github.com/users/mariosasko/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892861, "node_id": "MDU6TGFiZWwxOTM1ODkyODYx", "url": "https://api.github.com/repos/huggingface/datasets/labels/documentation", "name": "documentation", "color": "0075ca", "default": true, "description": "Improvements or additions to documentation" } ]
closed
false
null
[]
[]
2021-03-20T15:08:22
2021-03-24T08:20:50
2021-03-23T17:18:31
CONTRIBUTOR
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/2091", "html_url": "https://github.com/huggingface/datasets/pull/2091", "diff_url": "https://github.com/huggingface/datasets/pull/2091.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2091.patch", "merged_at": "2021-03-23T17:18:31" }
With this change, the lines starting with `...` in the code blocks can be properly copied to the clipboard.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/2091/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/2091/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/2090
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2090/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2090/comments
https://api.github.com/repos/huggingface/datasets/issues/2090/events
https://github.com/huggingface/datasets/pull/2090
836,807,498
MDExOlB1bGxSZXF1ZXN0NTk3MjgwNTEy
2,090
Add machine translated multilingual STS benchmark dataset
{ "login": "PhilipMay", "id": 229382, "node_id": "MDQ6VXNlcjIyOTM4Mg==", "avatar_url": "https://avatars.githubusercontent.com/u/229382?v=4", "gravatar_id": "", "url": "https://api.github.com/users/PhilipMay", "html_url": "https://github.com/PhilipMay", "followers_url": "https://api.github.com/users/PhilipMay/followers", "following_url": "https://api.github.com/users/PhilipMay/following{/other_user}", "gists_url": "https://api.github.com/users/PhilipMay/gists{/gist_id}", "starred_url": "https://api.github.com/users/PhilipMay/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/PhilipMay/subscriptions", "organizations_url": "https://api.github.com/users/PhilipMay/orgs", "repos_url": "https://api.github.com/users/PhilipMay/repos", "events_url": "https://api.github.com/users/PhilipMay/events{/privacy}", "received_events_url": "https://api.github.com/users/PhilipMay/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Hello dear maintainer, are there any comments or questions about this PR?", "@iamollas thanks for the feedback. I did not see the template.\r\nI improved it...", "Should be clean for merge IMO.", "@lhoestq CI is green. ;-)", "Thanks again ! this is awesome :)", "Thanks for merging. :-)" ]
2021-03-20T13:28:07
2021-03-29T13:24:42
2021-03-29T13:00:15
CONTRIBUTOR
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/2090", "html_url": "https://github.com/huggingface/datasets/pull/2090", "diff_url": "https://github.com/huggingface/datasets/pull/2090.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2090.patch", "merged_at": "2021-03-29T13:00:15" }
also see here https://github.com/PhilipMay/stsb-multi-mt
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/2090/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/2090/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/2089
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2089/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2089/comments
https://api.github.com/repos/huggingface/datasets/issues/2089/events
https://github.com/huggingface/datasets/issues/2089
836,788,019
MDU6SXNzdWU4MzY3ODgwMTk=
2,089
Add documentation for dataset README.md files
{ "login": "PhilipMay", "id": 229382, "node_id": "MDQ6VXNlcjIyOTM4Mg==", "avatar_url": "https://avatars.githubusercontent.com/u/229382?v=4", "gravatar_id": "", "url": "https://api.github.com/users/PhilipMay", "html_url": "https://github.com/PhilipMay", "followers_url": "https://api.github.com/users/PhilipMay/followers", "following_url": "https://api.github.com/users/PhilipMay/following{/other_user}", "gists_url": "https://api.github.com/users/PhilipMay/gists{/gist_id}", "starred_url": "https://api.github.com/users/PhilipMay/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/PhilipMay/subscriptions", "organizations_url": "https://api.github.com/users/PhilipMay/orgs", "repos_url": "https://api.github.com/users/PhilipMay/repos", "events_url": "https://api.github.com/users/PhilipMay/events{/privacy}", "received_events_url": "https://api.github.com/users/PhilipMay/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Hi ! We are using the [datasets-tagging app](https://github.com/huggingface/datasets-tagging) to select the tags to add.\r\n\r\nWe are also adding the full list of tags in #2107 \r\nThis covers multilinguality, language_creators, licenses, size_categories and task_categories.\r\n\r\nIn general if you want to add a...
2021-03-20T11:44:38
2023-07-25T16:45:38
2023-07-25T16:45:37
CONTRIBUTOR
null
null
null
Hi, the dataset README files have special headers. However, documentation of the allowed values and tags is missing. Could you add that? Just to give some concrete questions that should be answered imo: - which values can be passed to multilinguality? - what should be passed to language_creators? - which values should licenses have? What do I say when it is a custom license? Should I add a link? - how should I choose size_categories? What are valid ranges? - what are valid task_categories? Thanks Philip
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/2089/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/2089/timeline
null
completed
false
https://api.github.com/repos/huggingface/datasets/issues/2088
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2088/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2088/comments
https://api.github.com/repos/huggingface/datasets/issues/2088/events
https://github.com/huggingface/datasets/pull/2088
836,763,733
MDExOlB1bGxSZXF1ZXN0NTk3MjQ4Mzk1
2,088
change bibtex template to author instead of authors
{ "login": "PhilipMay", "id": 229382, "node_id": "MDQ6VXNlcjIyOTM4Mg==", "avatar_url": "https://avatars.githubusercontent.com/u/229382?v=4", "gravatar_id": "", "url": "https://api.github.com/users/PhilipMay", "html_url": "https://github.com/PhilipMay", "followers_url": "https://api.github.com/users/PhilipMay/followers", "following_url": "https://api.github.com/users/PhilipMay/following{/other_user}", "gists_url": "https://api.github.com/users/PhilipMay/gists{/gist_id}", "starred_url": "https://api.github.com/users/PhilipMay/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/PhilipMay/subscriptions", "organizations_url": "https://api.github.com/users/PhilipMay/orgs", "repos_url": "https://api.github.com/users/PhilipMay/repos", "events_url": "https://api.github.com/users/PhilipMay/events{/privacy}", "received_events_url": "https://api.github.com/users/PhilipMay/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Trailing whitespace was removed. So more changes in diff than just this fix." ]
2021-03-20T09:23:44
2021-03-23T15:40:12
2021-03-23T15:40:12
CONTRIBUTOR
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/2088", "html_url": "https://github.com/huggingface/datasets/pull/2088", "diff_url": "https://github.com/huggingface/datasets/pull/2088.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2088.patch", "merged_at": "2021-03-23T15:40:12" }
Hi, IMO when using BibTeX, `author` should be used instead of `authors`. See here: http://www.bibtex.org/Using/de/ Thanks Philip
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/2088/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/2088/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/2087
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2087/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2087/comments
https://api.github.com/repos/huggingface/datasets/issues/2087/events
https://github.com/huggingface/datasets/pull/2087
836,587,392
MDExOlB1bGxSZXF1ZXN0NTk3MDg4NTk2
2,087
Update metadata if dataset features are modified
{ "login": "mariosasko", "id": 47462742, "node_id": "MDQ6VXNlcjQ3NDYyNzQy", "avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mariosasko", "html_url": "https://github.com/mariosasko", "followers_url": "https://api.github.com/users/mariosasko/followers", "following_url": "https://api.github.com/users/mariosasko/following{/other_user}", "gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}", "starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions", "organizations_url": "https://api.github.com/users/mariosasko/orgs", "repos_url": "https://api.github.com/users/mariosasko/repos", "events_url": "https://api.github.com/users/mariosasko/events{/privacy}", "received_events_url": "https://api.github.com/users/mariosasko/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "@lhoestq I'll try to add a test later if you think this approach with the wrapper is good.", "Awesome thank you !\r\nYes this approach with a wrapper is good :)", "@lhoestq Added a test. To verify that this change fixes the problem, replace:\r\n```\r\n!pip install datasets==1.5\r\n```\r\nwith:\r\n```\r\n!pip i...
2021-03-20T02:05:23
2021-04-09T09:25:33
2021-04-09T09:25:33
CONTRIBUTOR
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/2087", "html_url": "https://github.com/huggingface/datasets/pull/2087", "diff_url": "https://github.com/huggingface/datasets/pull/2087.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2087.patch", "merged_at": "2021-04-09T09:25:33" }
This PR adds a decorator that updates the dataset metadata if a previously executed transform modifies its features. Fixes #2083
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/2087/reactions", "total_count": 2, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 1, "rocket": 1, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/2087/timeline
null
null
true
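The PR above (#2087) wraps dataset transforms in a decorator so the metadata stays in sync when a transform changes the features. Below is a minimal sketch of that wrapper idea under stated assumptions: `sync_schema_metadata` is a stand-in stub, not a real `datasets` function, and this is not the PR's actual implementation.

```python
import functools


def sync_schema_metadata(dataset):
    # Stand-in for the real re-sync step: in the library this would write the
    # dataset's current features back into the Arrow schema metadata.
    pass


def update_metadata_with_features(func):
    # Hypothetical sketch of the wrapper idea: run the transform, then re-sync
    # the schema metadata with the (possibly modified) features so that later
    # operations such as concatenation see consistent schemas.
    @functools.wraps(func)
    def wrapper(self, *args, **kwargs):
        out = func(self, *args, **kwargs)
        sync_schema_metadata(out)
        return out
    return wrapper
```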
https://api.github.com/repos/huggingface/datasets/issues/2086
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2086/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2086/comments
https://api.github.com/repos/huggingface/datasets/issues/2086/events
https://github.com/huggingface/datasets/pull/2086
836,249,587
MDExOlB1bGxSZXF1ZXN0NTk2Nzg0Mjcz
2,086
change user permissions to -rw-r--r--
{ "login": "bhavitvyamalik", "id": 19718818, "node_id": "MDQ6VXNlcjE5NzE4ODE4", "avatar_url": "https://avatars.githubusercontent.com/u/19718818?v=4", "gravatar_id": "", "url": "https://api.github.com/users/bhavitvyamalik", "html_url": "https://github.com/bhavitvyamalik", "followers_url": "https://api.github.com/users/bhavitvyamalik/followers", "following_url": "https://api.github.com/users/bhavitvyamalik/following{/other_user}", "gists_url": "https://api.github.com/users/bhavitvyamalik/gists{/gist_id}", "starred_url": "https://api.github.com/users/bhavitvyamalik/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/bhavitvyamalik/subscriptions", "organizations_url": "https://api.github.com/users/bhavitvyamalik/orgs", "repos_url": "https://api.github.com/users/bhavitvyamalik/repos", "events_url": "https://api.github.com/users/bhavitvyamalik/events{/privacy}", "received_events_url": "https://api.github.com/users/bhavitvyamalik/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "I tried this with `ade_corpus_v2` dataset. `ade_corpus_v2-train.arrow` (downloaded dataset) and `cache-25d41a4d3c2d8a25.arrow` (ran a mapping function on the dataset) both had file permission with octal value of `0644`. " ]
2021-03-19T18:14:56
2021-03-24T13:59:04
2021-03-24T13:59:04
CONTRIBUTOR
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/2086", "html_url": "https://github.com/huggingface/datasets/pull/2086", "diff_url": "https://github.com/huggingface/datasets/pull/2086.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2086.patch", "merged_at": "2021-03-24T13:59:04" }
Fix for #2065
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/2086/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/2086/timeline
null
null
true
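The PR above (#2086) changes written files to `-rw-r--r--`, which the follow-up comment confirms as octal `0644`. The snippet below is only a self-contained check of what those permission bits mean; the temporary file exists solely to make it runnable.

```python
import os
import stat
import tempfile

# 0o644 == -rw-r--r--: owner read/write, group and others read-only.
with tempfile.NamedTemporaryFile(delete=False) as f:
    path = f.name

os.chmod(path, stat.S_IRUSR | stat.S_IWUSR | stat.S_IRGRP | stat.S_IROTH)
print(oct(os.stat(path).st_mode & 0o777))  # prints 0o644
os.remove(path)
```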
https://api.github.com/repos/huggingface/datasets/issues/2085
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2085/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2085/comments
https://api.github.com/repos/huggingface/datasets/issues/2085/events
https://github.com/huggingface/datasets/pull/2085
835,870,994
MDExOlB1bGxSZXF1ZXN0NTk2NDYyOTc2
2,085
Fix max_wait_time in requests
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[]
2021-03-19T11:22:26
2021-03-23T15:36:38
2021-03-23T15:36:37
MEMBER
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/2085", "html_url": "https://github.com/huggingface/datasets/pull/2085", "diff_url": "https://github.com/huggingface/datasets/pull/2085.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2085.patch", "merged_at": "2021-03-23T15:36:37" }
It was handled as a min time, not a max. cc @SBrandeis
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/2085/reactions", "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/2085/timeline
null
null
true
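The PR above (#2085) fixes `max_wait_time` being treated as a minimum rather than a maximum when retrying requests. Below is a hedged sketch of the intended behaviour with made-up function and parameter names; it is not the library's actual request helper.

```python
import time


def request_with_retries(send, max_retries=3, base_wait_time=0.5, max_wait_time=2.0):
    # Exponential backoff where max_wait_time caps the sleep: it is a ceiling,
    # not a floor (the bug was treating it as the latter).
    for attempt in range(max_retries):
        try:
            return send()
        except ConnectionError:
            time.sleep(min(max_wait_time, base_wait_time * 2 ** attempt))
    return send()
```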
https://api.github.com/repos/huggingface/datasets/issues/2084
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2084/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2084/comments
https://api.github.com/repos/huggingface/datasets/issues/2084/events
https://github.com/huggingface/datasets/issues/2084
835,750,671
MDU6SXNzdWU4MzU3NTA2NzE=
2,084
CUAD - Contract Understanding Atticus Dataset
{ "login": "theo-m", "id": 17948980, "node_id": "MDQ6VXNlcjE3OTQ4OTgw", "avatar_url": "https://avatars.githubusercontent.com/u/17948980?v=4", "gravatar_id": "", "url": "https://api.github.com/users/theo-m", "html_url": "https://github.com/theo-m", "followers_url": "https://api.github.com/users/theo-m/followers", "following_url": "https://api.github.com/users/theo-m/following{/other_user}", "gists_url": "https://api.github.com/users/theo-m/gists{/gist_id}", "starred_url": "https://api.github.com/users/theo-m/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/theo-m/subscriptions", "organizations_url": "https://api.github.com/users/theo-m/orgs", "repos_url": "https://api.github.com/users/theo-m/repos", "events_url": "https://api.github.com/users/theo-m/events{/privacy}", "received_events_url": "https://api.github.com/users/theo-m/received_events", "type": "User", "site_admin": false }
[ { "id": 2067376369, "node_id": "MDU6TGFiZWwyMDY3Mzc2MzY5", "url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20request", "name": "dataset request", "color": "e99695", "default": false, "description": "Requesting to add a new dataset" } ]
closed
false
null
[]
[ "+1 on this request" ]
2021-03-19T09:27:43
2021-04-16T08:50:44
2021-04-16T08:50:44
CONTRIBUTOR
null
null
null
## Adding a Dataset - **Name:** CUAD - Contract Understanding Atticus Dataset - **Description:** As one of the only large, specialized NLP benchmarks annotated by experts, CUAD can serve as a challenging research benchmark for the broader NLP community. - **Paper:** https://arxiv.org/abs/2103.06268 - **Data:** https://github.com/TheAtticusProject/cuad/ - **Motivation:** good domain specific datasets are valuable Instructions to add a new dataset can be found [here](https://github.com/huggingface/datasets/blob/master/ADD_NEW_DATASET.md).
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/2084/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/2084/timeline
null
completed
false
https://api.github.com/repos/huggingface/datasets/issues/2083
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2083/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2083/comments
https://api.github.com/repos/huggingface/datasets/issues/2083/events
https://github.com/huggingface/datasets/issues/2083
835,695,425
MDU6SXNzdWU4MzU2OTU0MjU=
2,083
`concatenate_datasets` throws error when changing the order of datasets to concatenate
{ "login": "patrickvonplaten", "id": 23423619, "node_id": "MDQ6VXNlcjIzNDIzNjE5", "avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4", "gravatar_id": "", "url": "https://api.github.com/users/patrickvonplaten", "html_url": "https://github.com/patrickvonplaten", "followers_url": "https://api.github.com/users/patrickvonplaten/followers", "following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}", "gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}", "starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions", "organizations_url": "https://api.github.com/users/patrickvonplaten/orgs", "repos_url": "https://api.github.com/users/patrickvonplaten/repos", "events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}", "received_events_url": "https://api.github.com/users/patrickvonplaten/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Hi,\r\n\r\nthis bug is related to `Dataset.{remove_columns, rename_column, flatten}` not propagating the change to the schema metadata when the info features are updated, so this line is the culprit:\r\n```python\r\ncommon_voice_train = common_voice_train.remove_columns(['client_id', 'up_votes', 'down_votes', 'age...
2021-03-19T08:29:48
2021-04-09T09:25:33
2021-04-09T09:25:33
MEMBER
null
null
null
Hey, I played around with the `concatenate_datasets(...)` function: https://huggingface.co/docs/datasets/package_reference/main_classes.html?highlight=concatenate_datasets#datasets.concatenate_datasets and noticed that when the order in which the datasets are concatenated changes, an error is thrown where it should not IMO. Here is a Google Colab to reproduce the error: https://colab.research.google.com/drive/17VTFU4KQ735-waWZJjeOHS6yDTfV5ekK?usp=sharing
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/2083/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/2083/timeline
null
completed
false
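The comment thread above (#2083) traces the error to `Dataset.remove_columns`, `rename_column`, and `flatten` not propagating feature changes into the Arrow schema metadata, so concatenation could fail in one order but work in the other. Below is a minimal illustration of the kind of call that was affected; whether it actually raises depends on the library version, and the column names are only examples.

```python
from datasets import Dataset, concatenate_datasets

ds_a = Dataset.from_dict({"text": ["a"], "extra": [0]}).remove_columns(["extra"])
ds_b = Dataset.from_dict({"text": ["b"]})

# Before the fix, stale schema metadata left behind by remove_columns could make
# one concatenation order raise a schema mismatch while the other order worked.
print(concatenate_datasets([ds_b, ds_a]))
print(concatenate_datasets([ds_a, ds_b]))
```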
https://api.github.com/repos/huggingface/datasets/issues/2082
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2082/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2082/comments
https://api.github.com/repos/huggingface/datasets/issues/2082/events
https://github.com/huggingface/datasets/pull/2082
835,401,555
MDExOlB1bGxSZXF1ZXN0NTk2MDY1NTM0
2,082
Updated card using information from data statement and datasheet
{ "login": "mcmillanmajora", "id": 26722925, "node_id": "MDQ6VXNlcjI2NzIyOTI1", "avatar_url": "https://avatars.githubusercontent.com/u/26722925?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mcmillanmajora", "html_url": "https://github.com/mcmillanmajora", "followers_url": "https://api.github.com/users/mcmillanmajora/followers", "following_url": "https://api.github.com/users/mcmillanmajora/following{/other_user}", "gists_url": "https://api.github.com/users/mcmillanmajora/gists{/gist_id}", "starred_url": "https://api.github.com/users/mcmillanmajora/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mcmillanmajora/subscriptions", "organizations_url": "https://api.github.com/users/mcmillanmajora/orgs", "repos_url": "https://api.github.com/users/mcmillanmajora/repos", "events_url": "https://api.github.com/users/mcmillanmajora/events{/privacy}", "received_events_url": "https://api.github.com/users/mcmillanmajora/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[]
2021-03-19T00:39:38
2021-03-19T14:29:09
2021-03-19T14:29:09
CONTRIBUTOR
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/2082", "html_url": "https://github.com/huggingface/datasets/pull/2082", "diff_url": "https://github.com/huggingface/datasets/pull/2082.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2082.patch", "merged_at": "2021-03-19T14:29:08" }
I updated and clarified the REFreSD [data card](https://github.com/mcmillanmajora/datasets/blob/refresd_card/datasets/refresd/README.md) with information from Eleftheria's [website](https://elbria.github.io/post/refresd/). I added brief descriptions where the initial card referred to the paper, and I also recreated some of the tables in the paper to show relevant dataset statistics. I'll email Eleftheria to see if she has any comments on the card.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/2082/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/2082/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/2081
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2081/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2081/comments
https://api.github.com/repos/huggingface/datasets/issues/2081/events
https://github.com/huggingface/datasets/pull/2081
835,112,968
MDExOlB1bGxSZXF1ZXN0NTk1ODE3OTM4
2,081
Fix docstrings issues
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892861, "node_id": "MDU6TGFiZWwxOTM1ODkyODYx", "url": "https://api.github.com/repos/huggingface/datasets/labels/documentation", "name": "documentation", "color": "0075ca", "default": true, "description": "Improvements or additions to documentation" } ]
closed
false
null
[]
[]
2021-03-18T18:11:01
2021-04-07T14:37:43
2021-04-07T14:37:43
MEMBER
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/2081", "html_url": "https://github.com/huggingface/datasets/pull/2081", "diff_url": "https://github.com/huggingface/datasets/pull/2081.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2081.patch", "merged_at": "2021-04-07T14:37:43" }
Fix docstring issues.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/2081/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/2081/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/2080
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2080/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2080/comments
https://api.github.com/repos/huggingface/datasets/issues/2080/events
https://github.com/huggingface/datasets/issues/2080
835,023,000
MDU6SXNzdWU4MzUwMjMwMDA=
2,080
Multidimensional arrays in a Dataset
{ "login": "vermouthmjl", "id": 3142085, "node_id": "MDQ6VXNlcjMxNDIwODU=", "avatar_url": "https://avatars.githubusercontent.com/u/3142085?v=4", "gravatar_id": "", "url": "https://api.github.com/users/vermouthmjl", "html_url": "https://github.com/vermouthmjl", "followers_url": "https://api.github.com/users/vermouthmjl/followers", "following_url": "https://api.github.com/users/vermouthmjl/following{/other_user}", "gists_url": "https://api.github.com/users/vermouthmjl/gists{/gist_id}", "starred_url": "https://api.github.com/users/vermouthmjl/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/vermouthmjl/subscriptions", "organizations_url": "https://api.github.com/users/vermouthmjl/orgs", "repos_url": "https://api.github.com/users/vermouthmjl/repos", "events_url": "https://api.github.com/users/vermouthmjl/events{/privacy}", "received_events_url": "https://api.github.com/users/vermouthmjl/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Hi !\r\n\r\nThis is actually supported ! but not yet in `from_pandas`.\r\nYou can use `from_dict` for now instead:\r\n```python\r\nfrom datasets import Dataset, Array2D, Features, Value\r\nimport pandas as pd\r\nimport numpy as np\r\n\r\ndataset = {\r\n 'bbox': [\r\n np.array([[1,2,3,4],[1,2,3,4],[1,2,3,...
2021-03-18T16:29:14
2021-03-25T12:46:53
2021-03-25T12:46:53
NONE
null
null
null
Hi, I'm trying to put together a `datasets.Dataset` to be used with LayoutLM, which is available in `transformers`. This model requires as input the bounding boxes of each token of a sequence. This is when I realized that `Dataset` does not support multi-dimensional arrays as a value for a column in a row. The following code results in a conversion error in pyarrow (`pyarrow.lib.ArrowInvalid: ('Can only convert 1-dimensional array values', 'Conversion failed for column bbox with type object')`):

```
from datasets import Dataset
import pandas as pd
import numpy as np

dataset = pd.DataFrame({
    'bbox': [
        np.array([[1,2,3,4],[1,2,3,4],[1,2,3,4]]),
        np.array([[1,2,3,4],[1,2,3,4],[1,2,3,4]]),
        np.array([[1,2,3,4],[1,2,3,4],[1,2,3,4]]),
        np.array([[1,2,3,4],[1,2,3,4],[1,2,3,4]])
    ],
    'input_ids': [1, 2, 3, 4]
})
dataset = Dataset.from_pandas(dataset)
```

Since I wanted to use pytorch for the downstream training task, I also tried a few ways to directly put a column of 2-D pytorch tensors into a formatted dataset, but I can only get a list of 1-D tensors, a list of arrays, or a list of lists.

```
import torch
from datasets import Dataset
import pandas as pd

dataset = pd.DataFrame({
    'bbox': [
        [[1,2,3,4],[1,2,3,4],[1,2,3,4]],
        [[1,2,3,4],[1,2,3,4],[1,2,3,4]],
        [[1,2,3,4],[1,2,3,4],[1,2,3,4]],
        [[1,2,3,4],[1,2,3,4],[1,2,3,4]]
    ],
    'input_ids': [1, 2, 3, 4]
})
dataset = Dataset.from_pandas(dataset)

def test(examples):
    return {'bbbox': torch.Tensor(examples['bbox'])}
dataset = dataset.map(test)
print(dataset[0]['bbox'])
print(dataset[0]['bbbox'])

dataset.set_format(type='torch', columns=['input_ids', 'bbox'], output_all_columns=True)
print(dataset[0]['bbox'])
print(dataset[0]['bbbox'])

def test2(examples):
    return {'bbbox': torch.stack(examples['bbox'])}
dataset = dataset.map(test2)
print(dataset[0]['bbox'])
print(dataset[0]['bbbox'])
```

Is it possible to support n-D arrays/tensors in datasets? It seems that it can also be useful for this [feature request](https://github.com/huggingface/datasets/issues/263).
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/2080/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/2080/timeline
null
completed
false
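The reply quoted above (#2080) points to `Dataset.from_dict` with an explicit `Array2D` feature as the supported way to store fixed-shape multi-dimensional arrays (support in `from_pandas` came later). Below is a condensed sketch of that workaround using the same bounding-box shape as the issue:

```python
import numpy as np
from datasets import Array2D, Dataset, Features, Value

features = Features({
    "bbox": Array2D(shape=(3, 4), dtype="int64"),  # one fixed-shape 2-D array per row
    "input_ids": Value("int64"),
})

data = {
    "bbox": [np.array([[1, 2, 3, 4], [1, 2, 3, 4], [1, 2, 3, 4]]) for _ in range(4)],
    "input_ids": [1, 2, 3, 4],
}

dataset = Dataset.from_dict(data, features=features)
print(dataset.features)
```

Calling `dataset.set_format("torch", columns=["bbox", "input_ids"])` should then return each `bbox` as a 2-D tensor per example.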
https://api.github.com/repos/huggingface/datasets/issues/2079
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2079/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2079/comments
https://api.github.com/repos/huggingface/datasets/issues/2079/events
https://github.com/huggingface/datasets/pull/2079
834,920,493
MDExOlB1bGxSZXF1ZXN0NTk1NjU2MDQ5
2,079
Refactorize Metric.compute signature to force keyword arguments only
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[]
2021-03-18T15:05:50
2021-03-23T15:31:44
2021-03-23T15:31:44
MEMBER
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/2079", "html_url": "https://github.com/huggingface/datasets/pull/2079", "diff_url": "https://github.com/huggingface/datasets/pull/2079.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2079.patch", "merged_at": "2021-03-23T15:31:44" }
Minor refactoring of Metric.compute signature to force the use of keyword arguments, by using the single star syntax.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/2079/reactions", "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/2079/timeline
null
null
true
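The PR above (#2079) relies on the bare `*` in the signature so `Metric.compute` can only be called with keyword arguments. The snippet below only illustrates that syntax; the parameter list is the usual predictions/references pair, not a copy of the library's exact signature.

```python
def compute(self, *, predictions=None, references=None, **kwargs):
    # Everything after the bare * is keyword-only, so a positional call such as
    # metric.compute(preds, refs) raises a TypeError instead of silently binding
    # arguments in the wrong order.
    ...
```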
https://api.github.com/repos/huggingface/datasets/issues/2078
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2078/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2078/comments
https://api.github.com/repos/huggingface/datasets/issues/2078/events
https://github.com/huggingface/datasets/issues/2078
834,694,819
MDU6SXNzdWU4MzQ2OTQ4MTk=
2,078
MemoryError when computing WER metric
{ "login": "diego-fustes", "id": 5707233, "node_id": "MDQ6VXNlcjU3MDcyMzM=", "avatar_url": "https://avatars.githubusercontent.com/u/5707233?v=4", "gravatar_id": "", "url": "https://api.github.com/users/diego-fustes", "html_url": "https://github.com/diego-fustes", "followers_url": "https://api.github.com/users/diego-fustes/followers", "following_url": "https://api.github.com/users/diego-fustes/following{/other_user}", "gists_url": "https://api.github.com/users/diego-fustes/gists{/gist_id}", "starred_url": "https://api.github.com/users/diego-fustes/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/diego-fustes/subscriptions", "organizations_url": "https://api.github.com/users/diego-fustes/orgs", "repos_url": "https://api.github.com/users/diego-fustes/repos", "events_url": "https://api.github.com/users/diego-fustes/events{/privacy}", "received_events_url": "https://api.github.com/users/diego-fustes/received_events", "type": "User", "site_admin": false }
[ { "id": 2067393914, "node_id": "MDU6TGFiZWwyMDY3MzkzOTE0", "url": "https://api.github.com/repos/huggingface/datasets/labels/metric%20bug", "name": "metric bug", "color": "25b21e", "default": false, "description": "A bug in a metric script" } ]
closed
false
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[ { "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_...
[ "Hi ! Thanks for reporting.\r\nWe're indeed using `jiwer` to compute the WER.\r\n\r\nMaybe instead of calling `jiwer.wer` once for all the preditions/references we can compute the WER iteratively to avoid memory issues ? I'm not too familial with `jiwer` but this must be possible.\r\n\r\nCurrently the code to compu...
2021-03-18T11:30:05
2021-05-01T08:31:49
2021-04-06T07:20:43
NONE
null
null
null
Hi, I'm trying to follow the ASR example to try Wav2Vec. This is the code that I use for the WER calculation:

```
wer = load_metric("wer")
print(wer.compute(predictions=result["predicted"], references=result["target"]))
```

However, I receive the following exception:

```
Traceback (most recent call last):
  File "/home/diego/IpGlobal/wav2vec/test_wav2vec.py", line 51, in <module>
    print(wer.compute(predictions=result["predicted"], references=result["target"]))
  File "/home/diego/miniconda3/envs/wav2vec3.6/lib/python3.6/site-packages/datasets/metric.py", line 403, in compute
    output = self._compute(predictions=predictions, references=references, **kwargs)
  File "/home/diego/.cache/huggingface/modules/datasets_modules/metrics/wer/73b2d32b723b7fb8f204d785c00980ae4d937f12a65466f8fdf78706e2951281/wer.py", line 94, in _compute
    return wer(references, predictions)
  File "/home/diego/miniconda3/envs/wav2vec3.6/lib/python3.6/site-packages/jiwer/measures.py", line 81, in wer
    truth, hypothesis, truth_transform, hypothesis_transform, **kwargs
  File "/home/diego/miniconda3/envs/wav2vec3.6/lib/python3.6/site-packages/jiwer/measures.py", line 192, in compute_measures
    H, S, D, I = _get_operation_counts(truth, hypothesis)
  File "/home/diego/miniconda3/envs/wav2vec3.6/lib/python3.6/site-packages/jiwer/measures.py", line 273, in _get_operation_counts
    editops = Levenshtein.editops(source_string, destination_string)
MemoryError
```

My system has more than 10GB of available RAM. Looking at the code, I think it could be related to the way jiwer does the calculation, as it pastes all the sentences into a single string before calling the Levenshtein editops function.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/2078/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/2078/timeline
null
completed
false
https://api.github.com/repos/huggingface/datasets/issues/2077
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2077/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2077/comments
https://api.github.com/repos/huggingface/datasets/issues/2077/events
https://github.com/huggingface/datasets/pull/2077
834,649,536
MDExOlB1bGxSZXF1ZXN0NTk1NDI0MTYw
2,077
Bump huggingface_hub version
{ "login": "SBrandeis", "id": 33657802, "node_id": "MDQ6VXNlcjMzNjU3ODAy", "avatar_url": "https://avatars.githubusercontent.com/u/33657802?v=4", "gravatar_id": "", "url": "https://api.github.com/users/SBrandeis", "html_url": "https://github.com/SBrandeis", "followers_url": "https://api.github.com/users/SBrandeis/followers", "following_url": "https://api.github.com/users/SBrandeis/following{/other_user}", "gists_url": "https://api.github.com/users/SBrandeis/gists{/gist_id}", "starred_url": "https://api.github.com/users/SBrandeis/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/SBrandeis/subscriptions", "organizations_url": "https://api.github.com/users/SBrandeis/orgs", "repos_url": "https://api.github.com/users/SBrandeis/repos", "events_url": "https://api.github.com/users/SBrandeis/events{/privacy}", "received_events_url": "https://api.github.com/users/SBrandeis/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "🔥 " ]
2021-03-18T10:54:34
2021-03-18T11:33:26
2021-03-18T11:33:26
CONTRIBUTOR
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/2077", "html_url": "https://github.com/huggingface/datasets/pull/2077", "diff_url": "https://github.com/huggingface/datasets/pull/2077.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2077.patch", "merged_at": "2021-03-18T11:33:26" }
`0.0.2 => 0.0.6`
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/2077/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/2077/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/2076
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2076/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2076/comments
https://api.github.com/repos/huggingface/datasets/issues/2076/events
https://github.com/huggingface/datasets/issues/2076
834,445,296
MDU6SXNzdWU4MzQ0NDUyOTY=
2,076
Issue: Dataset download error
{ "login": "XuhuiZhou", "id": 20436061, "node_id": "MDQ6VXNlcjIwNDM2MDYx", "avatar_url": "https://avatars.githubusercontent.com/u/20436061?v=4", "gravatar_id": "", "url": "https://api.github.com/users/XuhuiZhou", "html_url": "https://github.com/XuhuiZhou", "followers_url": "https://api.github.com/users/XuhuiZhou/followers", "following_url": "https://api.github.com/users/XuhuiZhou/following{/other_user}", "gists_url": "https://api.github.com/users/XuhuiZhou/gists{/gist_id}", "starred_url": "https://api.github.com/users/XuhuiZhou/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/XuhuiZhou/subscriptions", "organizations_url": "https://api.github.com/users/XuhuiZhou/orgs", "repos_url": "https://api.github.com/users/XuhuiZhou/repos", "events_url": "https://api.github.com/users/XuhuiZhou/events{/privacy}", "received_events_url": "https://api.github.com/users/XuhuiZhou/received_events", "type": "User", "site_admin": false }
[ { "id": 2067388877, "node_id": "MDU6TGFiZWwyMDY3Mzg4ODc3", "url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20bug", "name": "dataset bug", "color": "2edb81", "default": false, "description": "A bug in a dataset script provided in the library" } ]
open
false
null
[]
[ "Hi @XuhuiZhou, thanks for reporting this issue. \r\n\r\nIndeed, the old links are no longer valid (404 Not Found error), and the script must be updated with the new links to Google Drive.", "It would be nice to update the urls indeed !\r\n\r\nTo do this, you just need to replace the urls in `iwslt2017.py` and th...
2021-03-18T06:36:06
2021-03-22T11:52:31
null
NONE
null
null
null
The download link in the `iwslt2017.py` file does not seem to work anymore. For example: `FileNotFoundError: Couldn't find file at https://wit3.fbk.eu/archive/2017-01-trnted/texts/zh/en/zh-en.tgz` It would be nice if we could modify the script to use the new download link.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/2076/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/2076/timeline
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/2075
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2075/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2075/comments
https://api.github.com/repos/huggingface/datasets/issues/2075/events
https://github.com/huggingface/datasets/issues/2075
834,301,246
MDU6SXNzdWU4MzQzMDEyNDY=
2,075
ConnectionError: Couldn't reach common_voice.py
{ "login": "LifaSun", "id": 6188893, "node_id": "MDQ6VXNlcjYxODg4OTM=", "avatar_url": "https://avatars.githubusercontent.com/u/6188893?v=4", "gravatar_id": "", "url": "https://api.github.com/users/LifaSun", "html_url": "https://github.com/LifaSun", "followers_url": "https://api.github.com/users/LifaSun/followers", "following_url": "https://api.github.com/users/LifaSun/following{/other_user}", "gists_url": "https://api.github.com/users/LifaSun/gists{/gist_id}", "starred_url": "https://api.github.com/users/LifaSun/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/LifaSun/subscriptions", "organizations_url": "https://api.github.com/users/LifaSun/orgs", "repos_url": "https://api.github.com/users/LifaSun/repos", "events_url": "https://api.github.com/users/LifaSun/events{/privacy}", "received_events_url": "https://api.github.com/users/LifaSun/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Hi @LifaSun, thanks for reporting this issue.\r\n\r\nSometimes, GitHub has some connectivity problems. Could you confirm that the problem persists?", "@albertvillanova Thanks! It works well now. " ]
2021-03-18T01:19:06
2021-03-20T10:29:41
2021-03-20T10:29:41
NONE
null
null
null
When I run: from datasets import load_dataset, load_metric common_voice_train = load_dataset("common_voice", "zh-CN", split="train+validation") common_voice_test = load_dataset("common_voice", "zh-CN", split="test") Got: ConnectionError: Couldn't reach https://raw.githubusercontent.com/huggingface/datasets/master/datasets/common_voice/common_voice.py Version: 1.4.1 Thanks! @lhoestq @LysandreJik @thomwolf
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/2075/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/2075/timeline
null
completed
false
https://api.github.com/repos/huggingface/datasets/issues/2074
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2074/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2074/comments
https://api.github.com/repos/huggingface/datasets/issues/2074/events
https://github.com/huggingface/datasets/pull/2074
834,268,463
MDExOlB1bGxSZXF1ZXN0NTk1MTIzMjYw
2,074
Fix size categories in YAML Tags
{ "login": "gchhablani", "id": 29076344, "node_id": "MDQ6VXNlcjI5MDc2MzQ0", "avatar_url": "https://avatars.githubusercontent.com/u/29076344?v=4", "gravatar_id": "", "url": "https://api.github.com/users/gchhablani", "html_url": "https://github.com/gchhablani", "followers_url": "https://api.github.com/users/gchhablani/followers", "following_url": "https://api.github.com/users/gchhablani/following{/other_user}", "gists_url": "https://api.github.com/users/gchhablani/gists{/gist_id}", "starred_url": "https://api.github.com/users/gchhablani/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/gchhablani/subscriptions", "organizations_url": "https://api.github.com/users/gchhablani/orgs", "repos_url": "https://api.github.com/users/gchhablani/repos", "events_url": "https://api.github.com/users/gchhablani/events{/privacy}", "received_events_url": "https://api.github.com/users/gchhablani/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "> It would be great if there was a way to make the task categories consistent too. For this, the streamlit app can look into all the datasets and check for existing categories and show them in the list. This may add some consistency.\r\n\r\nWe can also update the task lists here: https://github.com/huggingface/dat...
2021-03-18T00:02:36
2021-03-23T17:11:10
2021-03-23T17:11:10
CONTRIBUTOR
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/2074", "html_url": "https://github.com/huggingface/datasets/pull/2074", "diff_url": "https://github.com/huggingface/datasets/pull/2074.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2074.patch", "merged_at": "2021-03-23T17:11:09" }
This PR fixes several `size_categories` in YAML tags and makes them consistent. Additionally, I have added a few more categories after `1M`, up to `1T`. I would like to add that to the streamlit app also. This PR also adds a couple of infos that I found missing. The code for generating this: ```python for dataset in sorted(os.listdir('./datasets/')): if '.' not in dataset and dataset not in ['c4', 'csv', 'downloads', 'cc100', 'ccaligned_multilingual', 'celeb_a', 'chr_en', 'emea', 'glue']: infos = {} stats = {} st = '' with open(f'datasets/{dataset}/README.md') as f: d = f.read() start_dash = d.find('---') + 3 end_dash = d[start_dash:].find('---') + 3 rest_text = d[end_dash + 3:] try: full_yaml = OmegaConf.create(d[start_dash:end_dash]) readme = OmegaConf.to_container(full_yaml['size_categories'], resolve=True) except Exception as e: print(e) continue try: with open(f'datasets/{dataset}/dataset_infos.json') as f: data = json.load(f) except Exception as e: print(e) continue # Skip those without infos. done_set = set([]) num_keys = len(data.keys()) for keys in data: # dataset = load_dataset('opus100', f'{dirs}') total = 0 for split in data[keys]['splits']: total = total + data[keys]['splits'][split]['num_examples'] if total < 1000: st += "- n<1K" + '\n' infos[keys] = ["n<1K"] elif total >= 1000 and total < 10000: infos[keys] = ["1K<n<10K"] elif total >= 10000 and total < 100000: infos[keys] = ["10K<n<100K"] elif total >= 100000 and total < 1000000: infos[keys] = ["100K<n<1M"] elif total >= 1000000 and total < 10000000: infos[keys] = ["1M<n<10M"] elif total >= 10000000 and total < 100000000: infos[keys] = ["10M<n<100M"] elif total >= 100000000 and total < 1000000000: infos[keys] = ["100M<n<1B"] elif total >= 1000000000 and total < 10000000000: infos[keys] = ["1B<n<10B"] elif total >= 10000000000 and total < 100000000000: infos[keys] = ["10B<n<100B"] elif total >= 100000000000 and total < 1000000000000: infos[keys] = ["100B<n<1T"] else: infos[keys] = ["n>1T"] done_set = done_set.union(infos[keys]) if (isinstance(readme, list) and list(infos.values())[0] != readme) or (isinstance(readme, dict) and readme != infos): print('-' * 30) print(done_set) print(f"Changing Full YAML for {dataset}") print(OmegaConf.to_yaml(full_yaml)) if len(done_set) == 1: full_yaml['size_categories'] = list(done_set) else: full_yaml['size_categories'] = dict([(k, v) for k, v in sorted(infos.items(), key=lambda x: x[0])]) full_yaml_string = OmegaConf.to_yaml(full_yaml) print('-' * 30) print(full_yaml_string) inp = input('Do you wish to continue?(Y/N)') if inp == 'Y': with open(f'./datasets/{dataset}/README.md', 'w') as f: f.write('---\n') f.write(full_yaml_string) f.write('---') f.write(rest_text) else: break ``` Note that the lower-bound is inclusive. I'm unsure if this is how it is done in the tagging app. EDIT: It would be great if there was a way to make the task categories consistent too. For this, the streamlit app can look into all the datasets and check for existing categories and show them in the list. This may add some consistency. EDIT: I understand this will not work for cases where only the infos for some of the configs are present, for example: `ccaligned_multingual` has only 5 out of several configs present, and infos has only information about them. Hence, I have skipped a few datasets in the code, if there are more such datasets, then I'll ignore them too.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/2074/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/2074/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/2073
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2073/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2073/comments
https://api.github.com/repos/huggingface/datasets/issues/2073/events
https://github.com/huggingface/datasets/pull/2073
834,192,501
MDExOlB1bGxSZXF1ZXN0NTk1MDYyMzQ2
2,073
Fixes check of TF_AVAILABLE and TORCH_AVAILABLE
{ "login": "philschmid", "id": 32632186, "node_id": "MDQ6VXNlcjMyNjMyMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/32632186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/philschmid", "html_url": "https://github.com/philschmid", "followers_url": "https://api.github.com/users/philschmid/followers", "following_url": "https://api.github.com/users/philschmid/following{/other_user}", "gists_url": "https://api.github.com/users/philschmid/gists{/gist_id}", "starred_url": "https://api.github.com/users/philschmid/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/philschmid/subscriptions", "organizations_url": "https://api.github.com/users/philschmid/orgs", "repos_url": "https://api.github.com/users/philschmid/repos", "events_url": "https://api.github.com/users/philschmid/events{/privacy}", "received_events_url": "https://api.github.com/users/philschmid/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[]
2021-03-17T21:28:53
2021-03-18T09:09:25
2021-03-18T09:09:24
MEMBER
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/2073", "html_url": "https://github.com/huggingface/datasets/pull/2073", "diff_url": "https://github.com/huggingface/datasets/pull/2073.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2073.patch", "merged_at": "2021-03-18T09:09:24" }
# What does this PR do This PR implements the checks for whether `Tensorflow` and `Pytorch` are available in the same way as `transformers` does. I added additional checks for the different `Tensorflow` and `torch` versions. #2068
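The merged implementation is not shown in this record; purely as an illustration of the availability-check pattern `transformers` uses, a hedged sketch follows (the environment-variable handling and names below are assumptions, not the exact code of this PR):

```python
# Sketch of framework-availability checks in the style of `transformers`.
# The USE_TF / USE_TORCH handling below is an illustrative assumption, not the
# exact logic merged in this PR.
import importlib.util
import os

USE_TF = os.environ.get("USE_TF", "AUTO").upper()
USE_TORCH = os.environ.get("USE_TORCH", "AUTO").upper()

TORCH_AVAILABLE = False
if USE_TORCH in ("1", "ON", "YES", "AUTO") and USE_TF not in ("1", "ON", "YES"):
    TORCH_AVAILABLE = importlib.util.find_spec("torch") is not None

TF_AVAILABLE = False
if USE_TF in ("1", "ON", "YES", "AUTO") and USE_TORCH not in ("1", "ON", "YES"):
    TF_AVAILABLE = importlib.util.find_spec("tensorflow") is not None
```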
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/2073/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/2073/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/2072
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2072/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2072/comments
https://api.github.com/repos/huggingface/datasets/issues/2072/events
https://github.com/huggingface/datasets/pull/2072
834,054,837
MDExOlB1bGxSZXF1ZXN0NTk0OTQ5NjA4
2,072
Fix docstring issues
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892861, "node_id": "MDU6TGFiZWwxOTM1ODkyODYx", "url": "https://api.github.com/repos/huggingface/datasets/labels/documentation", "name": "documentation", "color": "0075ca", "default": true, "description": "Improvements or additions to documentation" } ]
closed
false
null
[]
[ "I think I will stop pushing to this PR, so that it can me merged for today release. \r\n\r\nI will open another PR for further fixing docs.\r\n\r\nDo you agree, @lhoestq ?", "Sounds good thanks !" ]
2021-03-17T18:13:44
2021-03-24T08:20:57
2021-03-18T12:41:21
MEMBER
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/2072", "html_url": "https://github.com/huggingface/datasets/pull/2072", "diff_url": "https://github.com/huggingface/datasets/pull/2072.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2072.patch", "merged_at": "2021-03-18T12:41:21" }
Fix docstring issues.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/2072/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/2072/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/2071
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2071/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2071/comments
https://api.github.com/repos/huggingface/datasets/issues/2071/events
https://github.com/huggingface/datasets/issues/2071
833,950,824
MDU6SXNzdWU4MzM5NTA4MjQ=
2,071
Multiprocessing is slower than single process
{ "login": "theo-m", "id": 17948980, "node_id": "MDQ6VXNlcjE3OTQ4OTgw", "avatar_url": "https://avatars.githubusercontent.com/u/17948980?v=4", "gravatar_id": "", "url": "https://api.github.com/users/theo-m", "html_url": "https://github.com/theo-m", "followers_url": "https://api.github.com/users/theo-m/followers", "following_url": "https://api.github.com/users/theo-m/following{/other_user}", "gists_url": "https://api.github.com/users/theo-m/gists{/gist_id}", "starred_url": "https://api.github.com/users/theo-m/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/theo-m/subscriptions", "organizations_url": "https://api.github.com/users/theo-m/orgs", "repos_url": "https://api.github.com/users/theo-m/repos", "events_url": "https://api.github.com/users/theo-m/events{/privacy}", "received_events_url": "https://api.github.com/users/theo-m/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892857, "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
closed
false
null
[]
[ "dupe of #1992" ]
2021-03-17T16:08:58
2021-03-18T09:10:23
2021-03-18T09:10:23
CONTRIBUTOR
null
null
null
```python # benchmark_filter.py import logging import sys import time from datasets import load_dataset, set_caching_enabled if __name__ == "__main__": set_caching_enabled(False) logging.basicConfig(level=logging.DEBUG) bc = load_dataset("bookcorpus") now = time.time() try: bc["train"].filter(lambda x: len(x["text"]) < 64, num_proc=int(sys.argv[1])) except Exception as e: print(f"cancelled: {e}") elapsed = time.time() - now print(elapsed) ``` Running `python benchmark_filter.py 1` (20min+) is faster than `python benchmark_filter.py 2` (2hrs+)
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/2071/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/2071/timeline
null
completed
false
https://api.github.com/repos/huggingface/datasets/issues/2070
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2070/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2070/comments
https://api.github.com/repos/huggingface/datasets/issues/2070/events
https://github.com/huggingface/datasets/issues/2070
833,799,035
MDU6SXNzdWU4MzM3OTkwMzU=
2,070
ArrowInvalid issue for squad v2 dataset
{ "login": "MichaelYxWang", "id": 29818977, "node_id": "MDQ6VXNlcjI5ODE4OTc3", "avatar_url": "https://avatars.githubusercontent.com/u/29818977?v=4", "gravatar_id": "", "url": "https://api.github.com/users/MichaelYxWang", "html_url": "https://github.com/MichaelYxWang", "followers_url": "https://api.github.com/users/MichaelYxWang/followers", "following_url": "https://api.github.com/users/MichaelYxWang/following{/other_user}", "gists_url": "https://api.github.com/users/MichaelYxWang/gists{/gist_id}", "starred_url": "https://api.github.com/users/MichaelYxWang/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/MichaelYxWang/subscriptions", "organizations_url": "https://api.github.com/users/MichaelYxWang/orgs", "repos_url": "https://api.github.com/users/MichaelYxWang/repos", "events_url": "https://api.github.com/users/MichaelYxWang/events{/privacy}", "received_events_url": "https://api.github.com/users/MichaelYxWang/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Hi ! This error happens when you use `map` in batched mode and then your function doesn't return the same number of values per column.\r\n\r\nIndeed since you're using `map` in batched mode, `prepare_validation_features` must take a batch as input (i.e. a dictionary of multiple rows of the dataset), and return a b...
2021-03-17T13:51:49
2021-08-04T17:57:16
2021-08-04T17:57:16
NONE
null
null
null
Hello, I am using the huggingface official question answering example notebook (https://colab.research.google.com/github/huggingface/notebooks/blob/master/examples/question_answering.ipynb). In the prepare_validation_features function, I made some modifications to tokenize a new set of questions with the original contexts and save them in three different lists called candidate_input_ids, candidate_attention_mask and candidate_token_type_ids. When I try to run the next cell for dataset.map, I get the following error: `ArrowInvalid: Column 1 named candidate_attention_mask expected length 1180 but got length 1178` My code is as follows: ``` def generate_candidate_questions(examples): val_questions = examples["question"] candidate_questions = random.sample(datasets["train"]["question"], len(val_questions)) candidate_questions = [x[:max_length] for x in candidate_questions] return candidate_questions def prepare_validation_features(examples, use_mixing=False): pad_on_right = tokenizer.padding_side == "right" tokenized_examples = tokenizer( examples["question" if pad_on_right else "context"], examples["context" if pad_on_right else "question"], truncation="only_second" if pad_on_right else "only_first", max_length=max_length, stride=doc_stride, return_overflowing_tokens=True, return_offsets_mapping=True, padding="max_length", ) if use_mixing: candidate_questions = generate_candidate_questions(examples) tokenized_candidates = tokenizer( candidate_questions if pad_on_right else examples["context"], examples["context"] if pad_on_right else candidate_questions, truncation="only_second" if pad_on_right else "only_first", max_length=max_length, stride=doc_stride, return_overflowing_tokens=True, return_offsets_mapping=True, padding="max_length", ) sample_mapping = tokenized_examples.pop("overflow_to_sample_mapping") tokenized_examples["example_id"] = [] if use_mixing: tokenized_examples["candidate_input_ids"] = tokenized_candidates["input_ids"] tokenized_examples["candidate_attention_mask"] = tokenized_candidates["attention_mask"] tokenized_examples["candidate_token_type_ids"] = tokenized_candidates["token_type_ids"] for i in range(len(tokenized_examples["input_ids"])): sequence_ids = tokenized_examples.sequence_ids(i) context_index = 1 if pad_on_right else 0 sample_index = sample_mapping[i] tokenized_examples["example_id"].append(examples["id"][sample_index]) tokenized_examples["offset_mapping"][i] = [ (o if sequence_ids[k] == context_index else None) for k, o in enumerate(tokenized_examples["offset_mapping"][i]) ] return tokenized_examples validation_features = datasets["validation"].map( lambda xs: prepare_validation_features(xs, True), batched=True, remove_columns=datasets["validation"].column_names ) ``` I guess this might happen because of the batched=True. I have seen similar issues in this repo related to the arrow table length mismatch error, but in their cases the numbers vary a lot. In my case, this error always happens when the expected length and the actual length are very close. Thanks for the help!
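Not part of the original report: the mismatch here most likely comes from `return_overflowing_tokens=True`, since the candidate questions can overflow into a different number of features than the original questions. A toy sketch (hypothetical data, not the notebook's code) of the general constraint that a batched `map` function must return columns of equal length:

```python
# Toy sketch, not the SQuAD notebook: with batched=True, every column returned
# by the mapped function must have the same number of rows, otherwise Arrow
# raises the kind of length-mismatch error shown above.
from datasets import Dataset

ds = Dataset.from_dict({"text": ["a", "bb", "ccc", "dddd"]})


def inconsistent_batch(batch):
    # "flag" has 1 row while "text" has len(batch["text"]) rows -> ArrowInvalid
    return {"text": batch["text"], "flag": [True]}


def consistent_batch(batch):
    # one "flag" value per row keeps all columns the same length
    return {"text": batch["text"], "flag": [len(t) > 1 for t in batch["text"]]}


# ds.map(inconsistent_batch, batched=True)  # would fail with a length mismatch
ds = ds.map(consistent_batch, batched=True)
print(ds[:])
```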
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/2070/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/2070/timeline
null
completed
false
https://api.github.com/repos/huggingface/datasets/issues/2069
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2069/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2069/comments
https://api.github.com/repos/huggingface/datasets/issues/2069/events
https://github.com/huggingface/datasets/pull/2069
833,768,926
MDExOlB1bGxSZXF1ZXN0NTk0NzA5ODYw
2,069
Add and fix docstring for NamedSplit
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Maybe we should add some other split classes?" ]
2021-03-17T13:19:28
2021-03-18T10:27:40
2021-03-18T10:27:40
MEMBER
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/2069", "html_url": "https://github.com/huggingface/datasets/pull/2069", "diff_url": "https://github.com/huggingface/datasets/pull/2069.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2069.patch", "merged_at": "2021-03-18T10:27:40" }
Add and fix docstring for `NamedSplit`, which was missing.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/2069/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/2069/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/2068
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2068/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2068/comments
https://api.github.com/repos/huggingface/datasets/issues/2068/events
https://github.com/huggingface/datasets/issues/2068
833,602,832
MDU6SXNzdWU4MzM2MDI4MzI=
2,068
PyTorch not available error on SageMaker GPU docker though it is installed
{ "login": "sivakhno", "id": 1651457, "node_id": "MDQ6VXNlcjE2NTE0NTc=", "avatar_url": "https://avatars.githubusercontent.com/u/1651457?v=4", "gravatar_id": "", "url": "https://api.github.com/users/sivakhno", "html_url": "https://github.com/sivakhno", "followers_url": "https://api.github.com/users/sivakhno/followers", "following_url": "https://api.github.com/users/sivakhno/following{/other_user}", "gists_url": "https://api.github.com/users/sivakhno/gists{/gist_id}", "starred_url": "https://api.github.com/users/sivakhno/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sivakhno/subscriptions", "organizations_url": "https://api.github.com/users/sivakhno/orgs", "repos_url": "https://api.github.com/users/sivakhno/repos", "events_url": "https://api.github.com/users/sivakhno/events{/privacy}", "received_events_url": "https://api.github.com/users/sivakhno/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "cc @philschmid ", "Hey @sivakhno,\r\n\r\nhow does your `requirements.txt` look like to install the `datasets` library and which version of it are you running? Can you try to install `datasets>=1.4.0`", "Hi @philschmid - thanks for suggestion. I am using `datasets==1.4.1`. \r\nI have also tried using `torch=1.6...
2021-03-17T10:04:27
2021-06-14T04:47:30
2021-06-14T04:47:30
NONE
null
null
null
I get an error when running data loading using the SageMaker SDK ``` File "main.py", line 34, in <module> run_training() File "main.py", line 25, in run_training dm.setup('fit') File "/opt/conda/lib/python3.6/site-packages/pytorch_lightning/core/datamodule.py", line 92, in wrapped_fn return fn(*args, **kwargs) File "/opt/ml/code/data_module.py", line 103, in setup self.dataset[split].set_format(type="torch", columns=self.columns) File "/opt/conda/lib/python3.6/site-packages/datasets/fingerprint.py", line 337, in wrapper out = func(self, *args, **kwargs) File "/opt/conda/lib/python3.6/site-packages/datasets/arrow_dataset.py", line 995, in set_format _ = get_formatter(type, **format_kwargs) File "/opt/conda/lib/python3.6/site-packages/datasets/formatting/__init__.py", line 114, in get_formatter raise _FORMAT_TYPES_ALIASES_UNAVAILABLE[format_type] ValueError: PyTorch needs to be installed to be able to return PyTorch tensors. ``` when trying to execute dataset loading using this notebook https://github.com/PyTorchLightning/pytorch-lightning/blob/master/notebooks/04-transformers-text-classification.ipynb, specifically the lines ``` self.columns = [c for c in self.dataset[split].column_names if c in self.loader_columns] self.dataset[split].set_format(type="torch", columns=self.columns) ``` The SageMaker docker image used is 763104351884.dkr.ecr.eu-central-1.amazonaws.com/pytorch-training:1.4.0-gpu-py3. By running the container interactively, I have checked that torch loading completes successfully by executing `https://github.com/huggingface/datasets/blob/master/src/datasets/config.py#L39`. Also, as the first lines in the data loading module I have ``` import os os.environ["USE_TF"] = "0" os.environ["USE_TORCH"] = "1" ``` But unfortunately the error still persists. Any suggestions would be appreciated, as I am stuck. Many Thanks!
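Not from the original report; as a hedged diagnostic sketch (assuming a recent `datasets` release that exposes `datasets.config.TORCH_AVAILABLE`; adjust the attribute names if your installed version differs), one could run something like this inside the training container to see what `datasets` actually detects:

```python
# Diagnostic sketch: check framework detection inside the container.
# USE_TF / USE_TORCH are read when datasets.config is first imported, so they
# must be set before any `datasets` import. Attribute names are assumptions
# about the installed datasets version.
import os

os.environ["USE_TF"] = "0"
os.environ["USE_TORCH"] = "1"

import torch  # fails here if torch itself is missing from the image

import datasets
import datasets.config

print("datasets version:", datasets.__version__)
print("torch version:", torch.__version__)
print("TORCH_AVAILABLE:", datasets.config.TORCH_AVAILABLE)
print("TF_AVAILABLE:", datasets.config.TF_AVAILABLE)
```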
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/2068/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/2068/timeline
null
completed
false
https://api.github.com/repos/huggingface/datasets/issues/2067
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2067/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2067/comments
https://api.github.com/repos/huggingface/datasets/issues/2067/events
https://github.com/huggingface/datasets/issues/2067
833,559,940
MDU6SXNzdWU4MzM1NTk5NDA=
2,067
Multiprocessing windows error
{ "login": "flozi00", "id": 47894090, "node_id": "MDQ6VXNlcjQ3ODk0MDkw", "avatar_url": "https://avatars.githubusercontent.com/u/47894090?v=4", "gravatar_id": "", "url": "https://api.github.com/users/flozi00", "html_url": "https://github.com/flozi00", "followers_url": "https://api.github.com/users/flozi00/followers", "following_url": "https://api.github.com/users/flozi00/following{/other_user}", "gists_url": "https://api.github.com/users/flozi00/gists{/gist_id}", "starred_url": "https://api.github.com/users/flozi00/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/flozi00/subscriptions", "organizations_url": "https://api.github.com/users/flozi00/orgs", "repos_url": "https://api.github.com/users/flozi00/repos", "events_url": "https://api.github.com/users/flozi00/events{/privacy}", "received_events_url": "https://api.github.com/users/flozi00/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Hi ! Thanks for reporting.\r\nThis looks like a bug, could you try to provide a minimal code example that reproduces the issue ? This would be very helpful !\r\n\r\nOtherwise I can try to run the wav2vec2 code above on my side but probably not this week..", "```\r\nfrom datasets import load_dataset\r\n\r\ndatase...
2021-03-17T09:12:28
2021-08-04T17:59:08
2021-08-04T17:59:08
CONTRIBUTOR
null
null
null
As described here: https://huggingface.co/blog/fine-tune-xlsr-wav2vec2 When using the num_proc argument on Windows, the whole Python environment crashes and hangs in a loop, for example at the map_to_array part. An error occurs because the cache file already exists and Windows throws an error. After this, the logging gets stuck in a loop.
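Not a confirmed fix for this report, but worth noting: on Windows, multiprocessing uses the spawn start method, so any script that passes num_proc generally needs the standard main-module guard. A minimal sketch (the dataset and preprocessing function are placeholders, not the blog's code):

```python
# Minimal sketch of the standard Windows multiprocessing guard. Windows spawns
# fresh interpreter processes that re-import this module, so the num_proc map
# call must not run at import time. Dataset name and function are placeholders.
from datasets import load_dataset


def add_length(example):
    example["sentence1_len"] = len(example["sentence1"])
    return example


def main():
    ds = load_dataset("glue", "mrpc", split="train")
    ds = ds.map(add_length, num_proc=4)
    print(ds)


if __name__ == "__main__":
    main()
```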
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/2067/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/2067/timeline
null
completed
false
https://api.github.com/repos/huggingface/datasets/issues/2066
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2066/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2066/comments
https://api.github.com/repos/huggingface/datasets/issues/2066/events
https://github.com/huggingface/datasets/pull/2066
833,480,551
MDExOlB1bGxSZXF1ZXN0NTk0NDcwMjEz
2,066
Fix docstring rendering of Dataset/DatasetDict.from_csv args
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[]
2021-03-17T07:23:10
2021-03-17T09:21:21
2021-03-17T09:21:21
MEMBER
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/2066", "html_url": "https://github.com/huggingface/datasets/pull/2066", "diff_url": "https://github.com/huggingface/datasets/pull/2066.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2066.patch", "merged_at": "2021-03-17T09:21:21" }
Fix the docstring rendering of Dataset/DatasetDict.from_csv args.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/2066/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/2066/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/2065
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2065/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2065/comments
https://api.github.com/repos/huggingface/datasets/issues/2065/events
https://github.com/huggingface/datasets/issues/2065
833,291,432
MDU6SXNzdWU4MzMyOTE0MzI=
2,065
Only user permission of saved cache files, not group
{ "login": "lorr1", "id": 57237365, "node_id": "MDQ6VXNlcjU3MjM3MzY1", "avatar_url": "https://avatars.githubusercontent.com/u/57237365?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lorr1", "html_url": "https://github.com/lorr1", "followers_url": "https://api.github.com/users/lorr1/followers", "following_url": "https://api.github.com/users/lorr1/following{/other_user}", "gists_url": "https://api.github.com/users/lorr1/gists{/gist_id}", "starred_url": "https://api.github.com/users/lorr1/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lorr1/subscriptions", "organizations_url": "https://api.github.com/users/lorr1/orgs", "repos_url": "https://api.github.com/users/lorr1/repos", "events_url": "https://api.github.com/users/lorr1/events{/privacy}", "received_events_url": "https://api.github.com/users/lorr1/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892871, "node_id": "MDU6TGFiZWwxOTM1ODkyODcx", "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement", "name": "enhancement", "color": "a2eeef", "default": true, "description": "New feature or request" }, { "id": 1935892877, "node_id": "MDU6...
closed
false
null
[]
[ "Hi ! Thanks for reporting.\r\n\r\nCurrently there's no way to specify this.\r\n\r\nWhen loading/processing a dataset, the arrow file is written using a temporary file. Then once writing is finished, it's moved to the cache directory (using `shutil.move` [here](https://github.com/huggingface/datasets/blob/f6b8251eb...
2021-03-17T00:20:22
2023-03-31T12:17:06
2021-05-10T06:45:29
NONE
null
null
null
Hello, It seems that when a cached file is saved from calling `dataset.map` for preprocessing, it gets the user's permissions and none of the user's group permissions. As we share data files across members of our team, this is causing a bit of an issue, as we have to continually reset the permissions of the files. Do you know of any way around this, or a way to correctly set the permissions?
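Not from the original issue; as a hedged workaround sketch (the cache path below is just the default location), group permissions can be added to already-written cache files after processing. As noted in the maintainers' comment above, the arrow files are created through temporary files and then moved, so changing the umask alone may not be enough:

```python
# Workaround sketch: add group read/write to the datasets cache after the fact.
# The cache directory below is the default location; adjust it to your setup.
import os
import stat

cache_dir = os.path.expanduser("~/.cache/huggingface/datasets")
for root, dirs, files in os.walk(cache_dir):
    for name in dirs:
        path = os.path.join(root, name)
        os.chmod(path, os.stat(path).st_mode | stat.S_IRGRP | stat.S_IWGRP | stat.S_IXGRP)
    for name in files:
        path = os.path.join(root, name)
        os.chmod(path, os.stat(path).st_mode | stat.S_IRGRP | stat.S_IWGRP)
```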
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/2065/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/2065/timeline
null
completed
false
https://api.github.com/repos/huggingface/datasets/issues/2064
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2064/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2064/comments
https://api.github.com/repos/huggingface/datasets/issues/2064/events
https://github.com/huggingface/datasets/pull/2064
833,002,360
MDExOlB1bGxSZXF1ZXN0NTk0MDczOTQ1
2,064
Fix ted_talks_iwslt version error
{ "login": "mariosasko", "id": 47462742, "node_id": "MDQ6VXNlcjQ3NDYyNzQy", "avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mariosasko", "html_url": "https://github.com/mariosasko", "followers_url": "https://api.github.com/users/mariosasko/followers", "following_url": "https://api.github.com/users/mariosasko/following{/other_user}", "gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}", "starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions", "organizations_url": "https://api.github.com/users/mariosasko/orgs", "repos_url": "https://api.github.com/users/mariosasko/repos", "events_url": "https://api.github.com/users/mariosasko/events{/privacy}", "received_events_url": "https://api.github.com/users/mariosasko/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[]
2021-03-16T16:43:45
2021-03-16T18:00:08
2021-03-16T18:00:08
CONTRIBUTOR
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/2064", "html_url": "https://github.com/huggingface/datasets/pull/2064", "diff_url": "https://github.com/huggingface/datasets/pull/2064.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2064.patch", "merged_at": "2021-03-16T18:00:07" }
This PR fixes the bug where the version argument would be passed twice if the dataset configuration was created on the fly. Fixes #2059
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/2064/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/2064/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/2063
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2063/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2063/comments
https://api.github.com/repos/huggingface/datasets/issues/2063/events
https://github.com/huggingface/datasets/pull/2063
832,993,705
MDExOlB1bGxSZXF1ZXN0NTk0MDY2NzI5
2,063
[Common Voice] Adapt dataset script so that no manual data download is actually needed
{ "login": "patrickvonplaten", "id": 23423619, "node_id": "MDQ6VXNlcjIzNDIzNjE5", "avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4", "gravatar_id": "", "url": "https://api.github.com/users/patrickvonplaten", "html_url": "https://github.com/patrickvonplaten", "followers_url": "https://api.github.com/users/patrickvonplaten/followers", "following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}", "gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}", "starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions", "organizations_url": "https://api.github.com/users/patrickvonplaten/orgs", "repos_url": "https://api.github.com/users/patrickvonplaten/repos", "events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}", "received_events_url": "https://api.github.com/users/patrickvonplaten/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[]
2021-03-16T16:33:44
2021-03-17T09:42:52
2021-03-17T09:42:37
MEMBER
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/2063", "html_url": "https://github.com/huggingface/datasets/pull/2063", "diff_url": "https://github.com/huggingface/datasets/pull/2063.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2063.patch", "merged_at": "2021-03-17T09:42:37" }
This PR changes the dataset script so that no manual data dir is needed anymore.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/2063/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/2063/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/2062
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2062/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2062/comments
https://api.github.com/repos/huggingface/datasets/issues/2062/events
https://github.com/huggingface/datasets/pull/2062
832,625,483
MDExOlB1bGxSZXF1ZXN0NTkzNzUyNTMz
2,062
docs: fix missing quotation
{ "login": "neal2018", "id": 46561493, "node_id": "MDQ6VXNlcjQ2NTYxNDkz", "avatar_url": "https://avatars.githubusercontent.com/u/46561493?v=4", "gravatar_id": "", "url": "https://api.github.com/users/neal2018", "html_url": "https://github.com/neal2018", "followers_url": "https://api.github.com/users/neal2018/followers", "following_url": "https://api.github.com/users/neal2018/following{/other_user}", "gists_url": "https://api.github.com/users/neal2018/gists{/gist_id}", "starred_url": "https://api.github.com/users/neal2018/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/neal2018/subscriptions", "organizations_url": "https://api.github.com/users/neal2018/orgs", "repos_url": "https://api.github.com/users/neal2018/repos", "events_url": "https://api.github.com/users/neal2018/events{/privacy}", "received_events_url": "https://api.github.com/users/neal2018/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[]
2021-03-16T10:07:54
2021-03-17T09:21:57
2021-03-17T09:21:57
CONTRIBUTOR
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/2062", "html_url": "https://github.com/huggingface/datasets/pull/2062", "diff_url": "https://github.com/huggingface/datasets/pull/2062.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2062.patch", "merged_at": "2021-03-17T09:21:56" }
The JSON code is missing a quote.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/2062/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/2062/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/2061
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2061/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2061/comments
https://api.github.com/repos/huggingface/datasets/issues/2061/events
https://github.com/huggingface/datasets/issues/2061
832,596,228
MDU6SXNzdWU4MzI1OTYyMjg=
2,061
Cannot load udpos subsets from xtreme dataset using load_dataset()
{ "login": "adzcodez", "id": 55791365, "node_id": "MDQ6VXNlcjU1NzkxMzY1", "avatar_url": "https://avatars.githubusercontent.com/u/55791365?v=4", "gravatar_id": "", "url": "https://api.github.com/users/adzcodez", "html_url": "https://github.com/adzcodez", "followers_url": "https://api.github.com/users/adzcodez/followers", "following_url": "https://api.github.com/users/adzcodez/following{/other_user}", "gists_url": "https://api.github.com/users/adzcodez/gists{/gist_id}", "starred_url": "https://api.github.com/users/adzcodez/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/adzcodez/subscriptions", "organizations_url": "https://api.github.com/users/adzcodez/orgs", "repos_url": "https://api.github.com/users/adzcodez/repos", "events_url": "https://api.github.com/users/adzcodez/events{/privacy}", "received_events_url": "https://api.github.com/users/adzcodez/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892877, "node_id": "MDU6TGFiZWwxOTM1ODkyODc3", "url": "https://api.github.com/repos/huggingface/datasets/labels/good%20first%20issue", "name": "good first issue", "color": "7057ff", "default": true, "description": "Good for newcomers" } ]
closed
false
null
[]
[ "@lhoestq Adding \"_\" to the class labels in the dataset script will fix the issue.\r\n\r\nThe bigger issue IMO is that the data files are in conll format, but the examples are tokens, not sentences.", "Hi ! Thanks for reporting @adzcodez \r\n\r\n\r\n> @lhoestq Adding \"_\" to the class labels in the dataset scr...
2021-03-16T09:32:13
2021-06-18T11:54:11
2021-06-18T11:54:10
NONE
null
null
null
Hello, I am trying to load the udpos English subset from xtreme dataset, but this faces an error during loading. I am using datasets v1.4.1, pip install. I have tried with other udpos languages which also fail, though loading a different subset altogether (such as XNLI) has no issue. I have also tried on Colab and faced the same error. Reprex is: `from datasets import load_dataset ` `dataset = load_dataset('xtreme', 'udpos.English')` The error is: `KeyError: '_'` The full traceback is: KeyError Traceback (most recent call last) <ipython-input-5-7181359ea09d> in <module> 1 from datasets import load_dataset ----> 2 dataset = load_dataset('xtreme', 'udpos.English') ~\Anaconda3\envs\mlenv\lib\site-packages\datasets\load.py in load_dataset(path, name, data_dir, data_files, split, cache_dir, features, download_config, download_mode, ignore_verifications, keep_in_memory, save_infos, script_version, use_auth_token, **config_kwargs) 738 739 # Download and prepare data --> 740 builder_instance.download_and_prepare( 741 download_config=download_config, 742 download_mode=download_mode, ~\Anaconda3\envs\mlenv\lib\site-packages\datasets\builder.py in download_and_prepare(self, download_config, download_mode, ignore_verifications, try_from_hf_gcs, dl_manager, base_path, use_auth_token, **download_and_prepare_kwargs) 576 logger.warning("HF google storage unreachable. Downloading and preparing it from source") 577 if not downloaded_from_gcs: --> 578 self._download_and_prepare( 579 dl_manager=dl_manager, verify_infos=verify_infos, **download_and_prepare_kwargs 580 ) ~\Anaconda3\envs\mlenv\lib\site-packages\datasets\builder.py in _download_and_prepare(self, dl_manager, verify_infos, **prepare_split_kwargs) 654 try: 655 # Prepare split will record examples associated to the split --> 656 self._prepare_split(split_generator, **prepare_split_kwargs) 657 except OSError as e: 658 raise OSError( ~\Anaconda3\envs\mlenv\lib\site-packages\datasets\builder.py in _prepare_split(self, split_generator) 977 generator, unit=" examples", total=split_info.num_examples, leave=False, disable=not_verbose 978 ): --> 979 example = self.info.features.encode_example(record) 980 writer.write(example) 981 finally: ~\Anaconda3\envs\mlenv\lib\site-packages\datasets\features.py in encode_example(self, example) 946 def encode_example(self, example): 947 example = cast_to_python_objects(example) --> 948 return encode_nested_example(self, example) 949 950 def encode_batch(self, batch): ~\Anaconda3\envs\mlenv\lib\site-packages\datasets\features.py in encode_nested_example(schema, obj) 840 # Nested structures: we allow dict, list/tuples, sequences 841 if isinstance(schema, dict): --> 842 return { 843 k: encode_nested_example(sub_schema, sub_obj) for k, (sub_schema, sub_obj) in utils.zip_dict(schema, obj) 844 } ~\Anaconda3\envs\mlenv\lib\site-packages\datasets\features.py in <dictcomp>(.0) 841 if isinstance(schema, dict): 842 return { --> 843 k: encode_nested_example(sub_schema, sub_obj) for k, (sub_schema, sub_obj) in utils.zip_dict(schema, obj) 844 } 845 elif isinstance(schema, (list, tuple)): ~\Anaconda3\envs\mlenv\lib\site-packages\datasets\features.py in encode_nested_example(schema, obj) 868 # ClassLabel will convert from string to int, TranslationVariableLanguages does some checks 869 elif isinstance(schema, (ClassLabel, TranslationVariableLanguages, Value, _ArrayXD)): --> 870 return schema.encode_example(obj) 871 # Other object should be directly convertible to a native Arrow type (like Translation and Translation) 872 return obj 
~\Anaconda3\envs\mlenv\lib\site-packages\datasets\features.py in encode_example(self, example_data) 647 # If a string is given, convert to associated integer 648 if isinstance(example_data, str): --> 649 example_data = self.str2int(example_data) 650 651 # Allowing -1 to mean no label. ~\Anaconda3\envs\mlenv\lib\site-packages\datasets\features.py in str2int(self, values) 605 if value not in self._str2int: 606 value = value.strip() --> 607 output.append(self._str2int[str(value)]) 608 else: 609 # No names provided, try to integerize KeyError: '_'
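Not from the original report; an illustrative sketch of why encoding fails here (the label list below is an abridged example, not the full Universal Dependencies POS tag set): `ClassLabel.str2int` raises the `KeyError` shown above for any tag, such as "_", that is missing from its `names`, and adding the tag to the label list makes encoding work:

```python
# Illustrative sketch, not the actual xtreme/udpos script; the tag list is a
# shortened example, not the full UD POS tag set.
from datasets import ClassLabel

labels_missing_underscore = ClassLabel(names=["ADJ", "ADP", "ADV", "NOUN", "VERB"])
# labels_missing_underscore.str2int("_")  # raises the KeyError: '_' seen above

labels_with_underscore = ClassLabel(names=["ADJ", "ADP", "ADV", "NOUN", "VERB", "_"])
print(labels_with_underscore.str2int("_"))  # 5
```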
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/2061/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/2061/timeline
null
completed
false
https://api.github.com/repos/huggingface/datasets/issues/2060
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2060/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2060/comments
https://api.github.com/repos/huggingface/datasets/issues/2060/events
https://github.com/huggingface/datasets/pull/2060
832,588,591
MDExOlB1bGxSZXF1ZXN0NTkzNzIxNzcx
2,060
Filtering refactor
{ "login": "theo-m", "id": 17948980, "node_id": "MDQ6VXNlcjE3OTQ4OTgw", "avatar_url": "https://avatars.githubusercontent.com/u/17948980?v=4", "gravatar_id": "", "url": "https://api.github.com/users/theo-m", "html_url": "https://github.com/theo-m", "followers_url": "https://api.github.com/users/theo-m/followers", "following_url": "https://api.github.com/users/theo-m/following{/other_user}", "gists_url": "https://api.github.com/users/theo-m/gists{/gist_id}", "starred_url": "https://api.github.com/users/theo-m/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/theo-m/subscriptions", "organizations_url": "https://api.github.com/users/theo-m/orgs", "repos_url": "https://api.github.com/users/theo-m/repos", "events_url": "https://api.github.com/users/theo-m/events{/privacy}", "received_events_url": "https://api.github.com/users/theo-m/received_events", "type": "User", "site_admin": false }
[]
closed
false
{ "login": "theo-m", "id": 17948980, "node_id": "MDQ6VXNlcjE3OTQ4OTgw", "avatar_url": "https://avatars.githubusercontent.com/u/17948980?v=4", "gravatar_id": "", "url": "https://api.github.com/users/theo-m", "html_url": "https://github.com/theo-m", "followers_url": "https://api.github.com/users/theo-m/followers", "following_url": "https://api.github.com/users/theo-m/following{/other_user}", "gists_url": "https://api.github.com/users/theo-m/gists{/gist_id}", "starred_url": "https://api.github.com/users/theo-m/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/theo-m/subscriptions", "organizations_url": "https://api.github.com/users/theo-m/orgs", "repos_url": "https://api.github.com/users/theo-m/repos", "events_url": "https://api.github.com/users/theo-m/events{/privacy}", "received_events_url": "https://api.github.com/users/theo-m/received_events", "type": "User", "site_admin": false }
[ { "login": "theo-m", "id": 17948980, "node_id": "MDQ6VXNlcjE3OTQ4OTgw", "avatar_url": "https://avatars.githubusercontent.com/u/17948980?v=4", "gravatar_id": "", "url": "https://api.github.com/users/theo-m", "html_url": "https://github.com/theo-m", "followers_url": "https://api.github...
[ "I thought at first that the multiproc test was not relevant now that we do stuff only in memory, but I think there's something that's actually broken, my tiny benchmark on bookcorpus runs forever (2hrs+) when I add `num_proc=4` as a kwarg, will investigate 👀 \r\n\r\nI'm not familiar with the caching you describe ...
2021-03-16T09:23:30
2023-09-24T09:52:57
2021-10-13T09:09:03
CONTRIBUTOR
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/2060", "html_url": "https://github.com/huggingface/datasets/pull/2060", "diff_url": "https://github.com/huggingface/datasets/pull/2060.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2060.patch", "merged_at": null }
Fixes https://github.com/huggingface/datasets/issues/2032. Benchmarking is somewhat inconclusive; currently running on `bookcorpus` with: ```python bc = load_dataset("bookcorpus") now = time.time() bc.filter(lambda x: len(x["text"]) < 64) elapsed = time.time() - now print(elapsed) ``` This branch does it in 233 seconds, master in 1409 seconds.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/2060/reactions", "total_count": 1, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 1, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/2060/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/2059
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2059/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2059/comments
https://api.github.com/repos/huggingface/datasets/issues/2059/events
https://github.com/huggingface/datasets/issues/2059
832,579,156
MDU6SXNzdWU4MzI1NzkxNTY=
2,059
Error while following docs to load the `ted_talks_iwslt` dataset
{ "login": "ekdnam", "id": 40426312, "node_id": "MDQ6VXNlcjQwNDI2MzEy", "avatar_url": "https://avatars.githubusercontent.com/u/40426312?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ekdnam", "html_url": "https://github.com/ekdnam", "followers_url": "https://api.github.com/users/ekdnam/followers", "following_url": "https://api.github.com/users/ekdnam/following{/other_user}", "gists_url": "https://api.github.com/users/ekdnam/gists{/gist_id}", "starred_url": "https://api.github.com/users/ekdnam/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ekdnam/subscriptions", "organizations_url": "https://api.github.com/users/ekdnam/orgs", "repos_url": "https://api.github.com/users/ekdnam/repos", "events_url": "https://api.github.com/users/ekdnam/events{/privacy}", "received_events_url": "https://api.github.com/users/ekdnam/received_events", "type": "User", "site_admin": false }
[ { "id": 2067388877, "node_id": "MDU6TGFiZWwyMDY3Mzg4ODc3", "url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20bug", "name": "dataset bug", "color": "2edb81", "default": false, "description": "A bug in a dataset script provided in the library" } ]
closed
false
null
[]
[ "@skyprince999 as you authored the PR for this dataset, any comments?", "This has been fixed in #2064 by @mariosasko (thanks again !)\r\n\r\nThe fix is available on the master branch and we'll do a new release very soon :)" ]
2021-03-16T09:12:19
2021-03-16T18:00:31
2021-03-16T18:00:07
NONE
null
null
null
I am currently trying to load the `ted_talks_iwslt` dataset into google colab. The [docs](https://huggingface.co/datasets/ted_talks_iwslt) mention the following way of doing so.

```python
dataset = load_dataset("ted_talks_iwslt", language_pair=("it", "pl"), year="2014")
```

Executing it results in the error attached below.

```
---------------------------------------------------------------------------
TypeError                                 Traceback (most recent call last)
<ipython-input-6-7dcc67154ef9> in <module>()
----> 1 dataset = load_dataset("ted_talks_iwslt", language_pair=("it", "pl"), year="2014")

4 frames
/usr/local/lib/python3.7/dist-packages/datasets/load.py in load_dataset(path, name, data_dir, data_files, split, cache_dir, features, download_config, download_mode, ignore_verifications, keep_in_memory, save_infos, script_version, use_auth_token, **config_kwargs)
    730         hash=hash,
    731         features=features,
--> 732         **config_kwargs,
    733     )
    734

/usr/local/lib/python3.7/dist-packages/datasets/builder.py in __init__(self, writer_batch_size, *args, **kwargs)
    927
    928     def __init__(self, *args, writer_batch_size=None, **kwargs):
--> 929         super(GeneratorBasedBuilder, self).__init__(*args, **kwargs)
    930         # Batch size used by the ArrowWriter
    931         # It defines the number of samples that are kept in memory before writing them

/usr/local/lib/python3.7/dist-packages/datasets/builder.py in __init__(self, cache_dir, name, hash, features, **config_kwargs)
    241             name,
    242             custom_features=features,
--> 243             **config_kwargs,
    244         )
    245

/usr/local/lib/python3.7/dist-packages/datasets/builder.py in _create_builder_config(self, name, custom_features, **config_kwargs)
    337         if "version" not in config_kwargs and hasattr(self, "VERSION") and self.VERSION:
    338             config_kwargs["version"] = self.VERSION
--> 339         builder_config = self.BUILDER_CONFIG_CLASS(**config_kwargs)
    340
    341         # otherwise use the config_kwargs to overwrite the attributes

/root/.cache/huggingface/modules/datasets_modules/datasets/ted_talks_iwslt/024d06b1376b361e59245c5878ab8acf9a7576d765f2d0077f61751158e60914/ted_talks_iwslt.py in __init__(self, language_pair, year, **kwargs)
    219             description=description,
    220             version=datasets.Version("1.1.0", ""),
--> 221             **kwargs,
    222         )
    223

TypeError: __init__() got multiple values for keyword argument 'version'
```

How to resolve this?

PS: Thanks a lot @huggingface team for creating this great library!
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/2059/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/2059/timeline
null
completed
false
https://api.github.com/repos/huggingface/datasets/issues/2058
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2058/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2058/comments
https://api.github.com/repos/huggingface/datasets/issues/2058/events
https://github.com/huggingface/datasets/issues/2058
832,159,844
MDU6SXNzdWU4MzIxNTk4NDQ=
2,058
Is it possible to convert a `tfds` to HuggingFace `dataset`?
{ "login": "abarbosa94", "id": 6608232, "node_id": "MDQ6VXNlcjY2MDgyMzI=", "avatar_url": "https://avatars.githubusercontent.com/u/6608232?v=4", "gravatar_id": "", "url": "https://api.github.com/users/abarbosa94", "html_url": "https://github.com/abarbosa94", "followers_url": "https://api.github.com/users/abarbosa94/followers", "following_url": "https://api.github.com/users/abarbosa94/following{/other_user}", "gists_url": "https://api.github.com/users/abarbosa94/gists{/gist_id}", "starred_url": "https://api.github.com/users/abarbosa94/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/abarbosa94/subscriptions", "organizations_url": "https://api.github.com/users/abarbosa94/orgs", "repos_url": "https://api.github.com/users/abarbosa94/repos", "events_url": "https://api.github.com/users/abarbosa94/events{/privacy}", "received_events_url": "https://api.github.com/users/abarbosa94/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Hi! You can either save the TF dataset to one of the formats supported by datasets (`parquet`, `csv`, `json`, ...) or pass a generator function to `Dataset.from_generator` that yields its examples." ]
2021-03-15T20:18:47
2023-07-25T16:47:40
2023-07-25T16:47:40
CONTRIBUTOR
null
null
null
I was having some weird bugs with the `C4` dataset version of HuggingFace, so I decided to try to download `C4` from `tfds`. I would like to know if it is possible to convert a tfds dataset to the HuggingFace dataset format :)

I can also open a new issue reporting the bug I'm receiving with `datasets.load_dataset('c4','en')` in the future if you think that it would be useful.

Thanks!
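A minimal sketch of the generator-based conversion suggested in the comments above; the dataset name and the bytes-to-string handling below are illustrative assumptions, not something stated in the issue:

```python
# Hedged sketch: convert a tfds dataset to a Hugging Face Dataset via a generator.
# "ag_news_subset" and the byte-decoding step are placeholders for illustration only.
import tensorflow_datasets as tfds
from datasets import Dataset

tf_ds = tfds.load("ag_news_subset", split="train")

def gen():
    # tfds.as_numpy yields dicts of numpy/bytes values; turn them into plain Python types
    for example in tfds.as_numpy(tf_ds):
        yield {k: (v.decode("utf-8") if isinstance(v, bytes) else v) for k, v in example.items()}

hf_ds = Dataset.from_generator(gen)
print(hf_ds)
```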
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/2058/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/2058/timeline
null
completed
false
https://api.github.com/repos/huggingface/datasets/issues/2057
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2057/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2057/comments
https://api.github.com/repos/huggingface/datasets/issues/2057/events
https://github.com/huggingface/datasets/pull/2057
832,120,522
MDExOlB1bGxSZXF1ZXN0NTkzMzMzMjM0
2,057
update link to ZEST dataset
{ "login": "matt-peters", "id": 619844, "node_id": "MDQ6VXNlcjYxOTg0NA==", "avatar_url": "https://avatars.githubusercontent.com/u/619844?v=4", "gravatar_id": "", "url": "https://api.github.com/users/matt-peters", "html_url": "https://github.com/matt-peters", "followers_url": "https://api.github.com/users/matt-peters/followers", "following_url": "https://api.github.com/users/matt-peters/following{/other_user}", "gists_url": "https://api.github.com/users/matt-peters/gists{/gist_id}", "starred_url": "https://api.github.com/users/matt-peters/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/matt-peters/subscriptions", "organizations_url": "https://api.github.com/users/matt-peters/orgs", "repos_url": "https://api.github.com/users/matt-peters/repos", "events_url": "https://api.github.com/users/matt-peters/events{/privacy}", "received_events_url": "https://api.github.com/users/matt-peters/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[]
2021-03-15T19:22:57
2021-03-16T17:06:28
2021-03-16T17:06:28
CONTRIBUTOR
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/2057", "html_url": "https://github.com/huggingface/datasets/pull/2057", "diff_url": "https://github.com/huggingface/datasets/pull/2057.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2057.patch", "merged_at": "2021-03-16T17:06:28" }
Updating the link as the original one is no longer working.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/2057/reactions", "total_count": 3, "+1": 3, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/2057/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/2056
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2056/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2056/comments
https://api.github.com/repos/huggingface/datasets/issues/2056/events
https://github.com/huggingface/datasets/issues/2056
831,718,397
MDU6SXNzdWU4MzE3MTgzOTc=
2,056
issue with opus100/en-fr dataset
{ "login": "dorost1234", "id": 79165106, "node_id": "MDQ6VXNlcjc5MTY1MTA2", "avatar_url": "https://avatars.githubusercontent.com/u/79165106?v=4", "gravatar_id": "", "url": "https://api.github.com/users/dorost1234", "html_url": "https://github.com/dorost1234", "followers_url": "https://api.github.com/users/dorost1234/followers", "following_url": "https://api.github.com/users/dorost1234/following{/other_user}", "gists_url": "https://api.github.com/users/dorost1234/gists{/gist_id}", "starred_url": "https://api.github.com/users/dorost1234/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/dorost1234/subscriptions", "organizations_url": "https://api.github.com/users/dorost1234/orgs", "repos_url": "https://api.github.com/users/dorost1234/repos", "events_url": "https://api.github.com/users/dorost1234/events{/privacy}", "received_events_url": "https://api.github.com/users/dorost1234/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "@lhoestq I also deleted the cache and redownload the file and still the same issue, I appreciate any help on this. thanks ", "Here please find the minimal code to reproduce the issue @lhoestq note this only happens with MT5TokenizerFast\r\n\r\n```\r\nfrom datasets import load_dataset\r\nfrom transformers impor...
2021-03-15T11:32:42
2021-03-16T15:49:00
2021-03-16T15:48:59
NONE
null
null
null
Hi, I am running the run_mlm.py code of the huggingface repo with the opus100/fr-en pair and I am getting the error below. Note that this error occurs only for this pair and not the other pairs. Any idea why this is occurring and how I can solve it? Thanks a lot @lhoestq for your help in advance.

```
thread '<unnamed>' panicked at 'index out of bounds: the len is 617 but the index is 617', /__w/tokenizers/tokenizers/tokenizers/src/tokenizer/normalizer.rs:382:21
note: run with `RUST_BACKTRACE=1` environment variable to display a backtrace
 63%|██████████████████████████████████████████████████████████▊    | 626/1000 [00:27<00:16, 22.69ba/s]
Traceback (most recent call last):
  File "run_mlm.py", line 550, in <module>
    main()
  File "run_mlm.py", line 412, in main
    in zip(data_args.dataset_name, data_args.dataset_config_name)]
  File "run_mlm.py", line 411, in <listcomp>
    logger) for dataset_name, dataset_config_name\
  File "/user/dara/dev/codes/seq2seq/data/tokenize_datasets.py", line 96, in get_tokenized_dataset
    load_from_cache_file=not data_args.overwrite_cache,
  File "/user/dara/libs/anaconda3/envs/fast/lib/python3.7/site-packages/datasets/dataset_dict.py", line 448, in map
    for k, dataset in self.items()
  File "/user/dara/libs/anaconda3/envs/fast/lib/python3.7/site-packages/datasets/dataset_dict.py", line 448, in <dictcomp>
    for k, dataset in self.items()
  File "/user/dara/libs/anaconda3/envs/fast/lib/python3.7/site-packages/datasets/arrow_dataset.py", line 1309, in map
    update_data=update_data,
  File "/user/dara/libs/anaconda3/envs/fast/lib/python3.7/site-packages/datasets/arrow_dataset.py", line 204, in wrapper
    out: Union["Dataset", "DatasetDict"] = func(self, *args, **kwargs)
  File "/user/dara/libs/anaconda3/envs/fast/lib/python3.7/site-packages/datasets/fingerprint.py", line 337, in wrapper
    out = func(self, *args, **kwargs)
  File "/user/dara/libs/anaconda3/envs/fast/lib/python3.7/site-packages/datasets/arrow_dataset.py", line 1574, in _map_single
    batch, indices, check_same_num_examples=len(self.list_indexes()) > 0, offset=offset
  File "/user/dara/libs/anaconda3/envs/fast/lib/python3.7/site-packages/datasets/arrow_dataset.py", line 1490, in apply_function_on_filtered_inputs
    function(*fn_args, effective_indices, **fn_kwargs) if with_indices else function(*fn_args, **fn_kwargs)
  File "/user/dara/dev/codes/seq2seq/data/tokenize_datasets.py", line 89, in tokenize_function
    return tokenizer(examples[text_column_name], return_special_tokens_mask=True)
  File "/user/dara/libs/anaconda3/envs/fast/lib/python3.7/site-packages/transformers/tokenization_utils_base.py", line 2347, in __call__
    **kwargs,
  File "/user/dara/libs/anaconda3/envs/fast/lib/python3.7/site-packages/transformers/tokenization_utils_base.py", line 2532, in batch_encode_plus
    **kwargs,
  File "/user/dara/libs/anaconda3/envs/fast/lib/python3.7/site-packages/transformers/tokenization_utils_fast.py", line 384, in _batch_encode_plus
    is_pretokenized=is_split_into_words,
pyo3_runtime.PanicException: index out of bounds: the len is 617 but the index is 617
```
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/2056/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/2056/timeline
null
completed
false
https://api.github.com/repos/huggingface/datasets/issues/2055
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2055/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2055/comments
https://api.github.com/repos/huggingface/datasets/issues/2055/events
https://github.com/huggingface/datasets/issues/2055
831,684,312
MDU6SXNzdWU4MzE2ODQzMTI=
2,055
is there a way to override a dataset object saved with save_to_disk?
{ "login": "shamanez", "id": 16892570, "node_id": "MDQ6VXNlcjE2ODkyNTcw", "avatar_url": "https://avatars.githubusercontent.com/u/16892570?v=4", "gravatar_id": "", "url": "https://api.github.com/users/shamanez", "html_url": "https://github.com/shamanez", "followers_url": "https://api.github.com/users/shamanez/followers", "following_url": "https://api.github.com/users/shamanez/following{/other_user}", "gists_url": "https://api.github.com/users/shamanez/gists{/gist_id}", "starred_url": "https://api.github.com/users/shamanez/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/shamanez/subscriptions", "organizations_url": "https://api.github.com/users/shamanez/orgs", "repos_url": "https://api.github.com/users/shamanez/repos", "events_url": "https://api.github.com/users/shamanez/events{/privacy}", "received_events_url": "https://api.github.com/users/shamanez/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Hi\r\nYou can rename the arrow file and update the name in `state.json`", "I tried this way, but when there is a mapping process to the dataset, it again uses a random cache name. atm, I am trying to use the following method by setting an exact cache file,\r\n\r\n```\r\n dataset_with_embedding =csv_da...
2021-03-15T10:50:53
2021-03-22T04:06:17
2021-03-22T04:06:17
NONE
null
null
null
At the moment, when I use save_to_disk, it uses an arbitrary name for the arrow file. Is there a way to override such an object?
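A small sketch of the workaround discussed in the comments (pinning the cache file produced by `map` so the arrow file has a predictable name); the paths and the mapping function below are hypothetical placeholders:

```python
# Hedged sketch: fix the arrow cache file name during map(), then save to a chosen directory.
# "add_embeddings" and all paths are hypothetical, not taken from the issue.
from datasets import load_dataset

ds = load_dataset("csv", data_files="my_data.csv", split="train")

ds_with_embeddings = ds.map(
    add_embeddings,  # hypothetical function that adds an "embeddings" column
    cache_file_name="/data/cache/my_dataset_with_embeddings.arrow",  # fixed, non-random arrow file
)

ds_with_embeddings.save_to_disk("/data/my_dataset")  # directory containing state.json + arrow file(s)
```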
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/2055/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/2055/timeline
null
completed
false
https://api.github.com/repos/huggingface/datasets/issues/2054
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2054/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2054/comments
https://api.github.com/repos/huggingface/datasets/issues/2054/events
https://github.com/huggingface/datasets/issues/2054
831,597,665
MDU6SXNzdWU4MzE1OTc2NjU=
2,054
Could not find file for ZEST dataset
{ "login": "bhadreshpsavani", "id": 26653468, "node_id": "MDQ6VXNlcjI2NjUzNDY4", "avatar_url": "https://avatars.githubusercontent.com/u/26653468?v=4", "gravatar_id": "", "url": "https://api.github.com/users/bhadreshpsavani", "html_url": "https://github.com/bhadreshpsavani", "followers_url": "https://api.github.com/users/bhadreshpsavani/followers", "following_url": "https://api.github.com/users/bhadreshpsavani/following{/other_user}", "gists_url": "https://api.github.com/users/bhadreshpsavani/gists{/gist_id}", "starred_url": "https://api.github.com/users/bhadreshpsavani/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/bhadreshpsavani/subscriptions", "organizations_url": "https://api.github.com/users/bhadreshpsavani/orgs", "repos_url": "https://api.github.com/users/bhadreshpsavani/repos", "events_url": "https://api.github.com/users/bhadreshpsavani/events{/privacy}", "received_events_url": "https://api.github.com/users/bhadreshpsavani/received_events", "type": "User", "site_admin": false }
[ { "id": 2067388877, "node_id": "MDU6TGFiZWwyMDY3Mzg4ODc3", "url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20bug", "name": "dataset bug", "color": "2edb81", "default": false, "description": "A bug in a dataset script provided in the library" } ]
closed
false
null
[]
[ "The zest dataset url was changed (allenai/zest#3) and #2057 should resolve this.", "This has been fixed in #2057 by @matt-peters (thanks again !)\r\n\r\nThe fix is available on the master branch and we'll do a new release very soon :)", "Thanks @lhoestq and @matt-peters ", "I am closing this issue since its ...
2021-03-15T09:11:58
2021-05-03T09:30:24
2021-05-03T09:30:24
CONTRIBUTOR
null
null
null
I am trying to use the ZEST dataset from Allen AI using the code below in Colab:

```python
!pip install -q datasets
from datasets import load_dataset
dataset = load_dataset("zest")
```

I am getting the following error:

```
Using custom data configuration default
Downloading and preparing dataset zest/default (download: 5.53 MiB, generated: 19.96 MiB, post-processed: Unknown size, total: 25.48 MiB) to /root/.cache/huggingface/datasets/zest/default/0.0.0/1f7a230fbfc964d979bbca0f0130fbab3259fce547ee758ad8aa4f9c9bec6cca...
---------------------------------------------------------------------------
FileNotFoundError                         Traceback (most recent call last)
<ipython-input-6-18dbbc1a4b8a> in <module>()
      1 from datasets import load_dataset
      2
----> 3 dataset = load_dataset("zest")

9 frames
/usr/local/lib/python3.7/dist-packages/datasets/utils/file_utils.py in get_from_cache(url, cache_dir, force_download, proxies, etag_timeout, resume_download, user_agent, local_files_only, use_etag, max_retries, use_auth_token)
    612             )
    613         elif response is not None and response.status_code == 404:
--> 614             raise FileNotFoundError("Couldn't find file at {}".format(url))
    615     _raise_if_offline_mode_is_enabled(f"Tried to reach {url}")
    616     raise ConnectionError("Couldn't reach {}".format(url))

FileNotFoundError: Couldn't find file at https://ai2-datasets.s3-us-west-2.amazonaws.com/zest/zest.zip
```
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/2054/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/2054/timeline
null
completed
false
https://api.github.com/repos/huggingface/datasets/issues/2053
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2053/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2053/comments
https://api.github.com/repos/huggingface/datasets/issues/2053/events
https://github.com/huggingface/datasets/pull/2053
831,151,728
MDExOlB1bGxSZXF1ZXN0NTkyNTM4ODY2
2,053
Add bAbI QA tasks
{ "login": "gchhablani", "id": 29076344, "node_id": "MDQ6VXNlcjI5MDc2MzQ0", "avatar_url": "https://avatars.githubusercontent.com/u/29076344?v=4", "gravatar_id": "", "url": "https://api.github.com/users/gchhablani", "html_url": "https://github.com/gchhablani", "followers_url": "https://api.github.com/users/gchhablani/followers", "following_url": "https://api.github.com/users/gchhablani/following{/other_user}", "gists_url": "https://api.github.com/users/gchhablani/gists{/gist_id}", "starred_url": "https://api.github.com/users/gchhablani/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/gchhablani/subscriptions", "organizations_url": "https://api.github.com/users/gchhablani/orgs", "repos_url": "https://api.github.com/users/gchhablani/repos", "events_url": "https://api.github.com/users/gchhablani/events{/privacy}", "received_events_url": "https://api.github.com/users/gchhablani/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Hi @lhoestq,\r\n\r\nShould I remove the 160 configurations? Is it too much?\r\n\r\nEDIT:\r\nCan you also check the task category? I'm not sure if there is an appropriate tag for the same.", "Thanks for the changes !\r\n\r\n> Should I remove the 160 configurations? Is it too much?\r\n\r\nYea 160 configuration is ...
2021-03-14T13:04:39
2021-03-29T12:41:48
2021-03-29T12:41:48
CONTRIBUTOR
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/2053", "html_url": "https://github.com/huggingface/datasets/pull/2053", "diff_url": "https://github.com/huggingface/datasets/pull/2053.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2053.patch", "merged_at": "2021-03-29T12:41:48" }
- **Name:** *The (20) QA bAbI tasks*
- **Description:** *The (20) QA bAbI tasks are a set of proxy tasks that evaluate reading comprehension via question answering. Our tasks measure understanding in several ways: whether a system is able to answer questions via chaining facts, simple induction, deduction and many more. The tasks are designed to be prerequisites for any system that aims to be capable of conversing with a human. The aim is to classify these tasks into skill sets, so that researchers can identify (and then rectify) the failings of their systems.*
- **Paper:** [arXiv](https://arxiv.org/pdf/1502.05698.pdf)
- **Data:** [Facebook Research Page](https://research.fb.com/downloads/babi/)
- **Motivation:** This is a unique dataset with story-based Question Answering. It is a part of the `bAbI` project by Facebook Research.

**Note**: I have currently added all the 160 configs. If this seems impractical, I can keep only a few. While each `dummy_data.zip` weighs a few KBs, overall it is around 1.3MB for all configurations. This is problematic. Let me know what is to be done.

Thanks :)

### Checkbox
- [x] Create the dataset script `/datasets/my_dataset/my_dataset.py` using the template
- [x] Fill the `_DESCRIPTION` and `_CITATION` variables
- [x] Implement `_infos()`, `_split_generators()` and `_generate_examples()`
- [x] Make sure that the `BUILDER_CONFIGS` class attribute is filled with the different configurations of the dataset and that the `BUILDER_CONFIG_CLASS` is specified if there is a custom config class.
- [x] Generate the metadata file `dataset_infos.json` for all configurations
- [x] Generate the dummy data `dummy_data.zip` files to have the dataset script tested and that they don't weigh too much (<50KB)
- [x] Add the dataset card `README.md` using the template : fill the tags and the various paragraphs
- [x] Both tests for the real data and the dummy data pass.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/2053/reactions", "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/2053/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/2052
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2052/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2052/comments
https://api.github.com/repos/huggingface/datasets/issues/2052/events
https://github.com/huggingface/datasets/issues/2052
831,135,704
MDU6SXNzdWU4MzExMzU3MDQ=
2,052
Timit_asr dataset repeats examples
{ "login": "fermaat", "id": 7583522, "node_id": "MDQ6VXNlcjc1ODM1MjI=", "avatar_url": "https://avatars.githubusercontent.com/u/7583522?v=4", "gravatar_id": "", "url": "https://api.github.com/users/fermaat", "html_url": "https://github.com/fermaat", "followers_url": "https://api.github.com/users/fermaat/followers", "following_url": "https://api.github.com/users/fermaat/following{/other_user}", "gists_url": "https://api.github.com/users/fermaat/gists{/gist_id}", "starred_url": "https://api.github.com/users/fermaat/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/fermaat/subscriptions", "organizations_url": "https://api.github.com/users/fermaat/orgs", "repos_url": "https://api.github.com/users/fermaat/repos", "events_url": "https://api.github.com/users/fermaat/events{/privacy}", "received_events_url": "https://api.github.com/users/fermaat/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Hi,\r\n\r\nthis was fixed by #1995, so you can wait for the next release or install the package directly from the master branch with the following command: \r\n```bash\r\npip install git+https://github.com/huggingface/datasets\r\n```", "Ty!" ]
2021-03-14T11:43:43
2021-03-15T10:37:16
2021-03-15T10:37:16
NONE
null
null
null
Summary
When loading timit_asr dataset on datasets 1.4+, every row in the dataset is the same

Steps to reproduce
As an example, on this code there is the text from the training part:

Code snippet:
```python
from datasets import load_dataset, load_metric
timit = load_dataset("timit_asr")
timit['train']['text']
#['Would such an act of refusal be useful?',
# 'Would such an act of refusal be useful?',
# 'Would such an act of refusal be useful?',
# 'Would such an act of refusal be useful?',
# 'Would such an act of refusal be useful?',
# 'Would such an act of refusal be useful?',
```
The same behavior happens for other columns

Expected behavior:
Different info on the actual timit_asr dataset

Actual behavior:
When loading timit_asr dataset on datasets 1.4+, every row in the dataset is the same. I've checked datasets 1.3 and the rows are different

Debug info
Streamlit version: (get it with $ streamlit version)
Python version: Python 3.6.12
Using Conda? PipEnv? PyEnv? Pex? Using pip
OS version: Centos-release-7-9.2009.1.el7.centos.x86_64

Additional information
You can check the same behavior on https://huggingface.co/datasets/viewer/?dataset=timit_asr
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/2052/reactions", "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/2052/timeline
null
completed
false
https://api.github.com/repos/huggingface/datasets/issues/2051
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2051/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2051/comments
https://api.github.com/repos/huggingface/datasets/issues/2051/events
https://github.com/huggingface/datasets/pull/2051
831,027,021
MDExOlB1bGxSZXF1ZXN0NTkyNDQ2MDU1
2,051
Add MDD Dataset
{ "login": "gchhablani", "id": 29076344, "node_id": "MDQ6VXNlcjI5MDc2MzQ0", "avatar_url": "https://avatars.githubusercontent.com/u/29076344?v=4", "gravatar_id": "", "url": "https://api.github.com/users/gchhablani", "html_url": "https://github.com/gchhablani", "followers_url": "https://api.github.com/users/gchhablani/followers", "following_url": "https://api.github.com/users/gchhablani/following{/other_user}", "gists_url": "https://api.github.com/users/gchhablani/gists{/gist_id}", "starred_url": "https://api.github.com/users/gchhablani/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/gchhablani/subscriptions", "organizations_url": "https://api.github.com/users/gchhablani/orgs", "repos_url": "https://api.github.com/users/gchhablani/repos", "events_url": "https://api.github.com/users/gchhablani/events{/privacy}", "received_events_url": "https://api.github.com/users/gchhablani/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Hi @lhoestq,\r\n\r\nI have added changes from review.", "Thanks for approving :)" ]
2021-03-14T00:01:05
2021-03-19T11:15:44
2021-03-19T10:31:59
CONTRIBUTOR
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/2051", "html_url": "https://github.com/huggingface/datasets/pull/2051", "diff_url": "https://github.com/huggingface/datasets/pull/2051.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2051.patch", "merged_at": "2021-03-19T10:31:59" }
- **Name:** *MDD Dataset*
- **Description:** The Movie Dialog dataset (MDD) is designed to measure how well models can perform at goal and non-goal orientated dialog centered around the topic of movies (question answering, recommendation and discussion), from various movie reviews sources such as MovieLens and OMDb.
- **Paper:** [arXiv](https://arxiv.org/pdf/1511.06931.pdf)
- **Data:** https://research.fb.com/downloads/babi/
- **Motivation:** This is one of the popular dialog datasets, a part of Facebook Research's "bAbI project".

### Checkbox
- [x] Create the dataset script `/datasets/my_dataset/my_dataset.py` using the template
- [x] Fill the `_DESCRIPTION` and `_CITATION` variables
- [x] Implement `_infos()`, `_split_generators()` and `_generate_examples()`
- [x] Make sure that the `BUILDER_CONFIGS` class attribute is filled with the different configurations of the dataset and that the `BUILDER_CONFIG_CLASS` is specified if there is a custom config class.
- [x] Generate the metadata file `dataset_infos.json` for all configurations
- [x] Generate the dummy data `dummy_data.zip` files to have the dataset script tested and that they don't weigh too much (<50KB)
- [x] Add the dataset card `README.md` using the template : fill the tags and the various paragraphs
- [x] Both tests for the real data and the dummy data pass.

**Note**: I haven't included the following from the data files: `entities` (the file containing list of all entities in the first three subtasks), `dictionary` (the dictionary of words they use in their models), `movie_kb` (contains the knowledge base of information about the movies, actors and other entities that are mentioned in the dialogs). Please let me know if those are needed, and if yes, should I make separate configurations for them?
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/2051/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/2051/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/2050
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2050/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2050/comments
https://api.github.com/repos/huggingface/datasets/issues/2050/events
https://github.com/huggingface/datasets/issues/2050
831,006,551
MDU6SXNzdWU4MzEwMDY1NTE=
2,050
Build custom dataset to fine-tune Wav2Vec2
{ "login": "Omarnabk", "id": 72882909, "node_id": "MDQ6VXNlcjcyODgyOTA5", "avatar_url": "https://avatars.githubusercontent.com/u/72882909?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Omarnabk", "html_url": "https://github.com/Omarnabk", "followers_url": "https://api.github.com/users/Omarnabk/followers", "following_url": "https://api.github.com/users/Omarnabk/following{/other_user}", "gists_url": "https://api.github.com/users/Omarnabk/gists{/gist_id}", "starred_url": "https://api.github.com/users/Omarnabk/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Omarnabk/subscriptions", "organizations_url": "https://api.github.com/users/Omarnabk/orgs", "repos_url": "https://api.github.com/users/Omarnabk/repos", "events_url": "https://api.github.com/users/Omarnabk/events{/privacy}", "received_events_url": "https://api.github.com/users/Omarnabk/received_events", "type": "User", "site_admin": false }
[ { "id": 2067376369, "node_id": "MDU6TGFiZWwyMDY3Mzc2MzY5", "url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20request", "name": "dataset request", "color": "e99695", "default": false, "description": "Requesting to add a new dataset" } ]
closed
false
null
[]
[ "@lhoestq - We could simply use the \"general\" json dataset for this no? ", "Sure you can use the json loader\r\n```python\r\ndata_files = {\"train\": \"path/to/your/train_data.json\", \"test\": \"path/to/your/test_data.json\"}\r\ntrain_dataset = load_dataset(\"json\", data_files=data_files, split=\"train\")\r\n...
2021-03-13T22:01:10
2021-03-15T09:27:28
2021-03-15T09:27:28
NONE
null
null
null
Thank you for your recent tutorial on how to fine-tune Wav2Vec2 on a custom dataset. The example you gave here (https://huggingface.co/blog/fine-tune-xlsr-wav2vec2) was on the CommonVoice dataset. However, what if I want to load my own dataset? I have a manifest (transcripts and their audio files) in a JSON file.
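The comments above point to the generic JSON loader; a minimal sketch of that suggestion (the file names and the example field names are assumptions about the manifest layout, not from the issue):

```python
# Hedged sketch: load a custom manifest with the generic "json" loader, as suggested in the comments.
from datasets import load_dataset

data_files = {"train": "path/to/your/train_data.json", "test": "path/to/your/test_data.json"}
train_dataset = load_dataset("json", data_files=data_files, split="train")
test_dataset = load_dataset("json", data_files=data_files, split="test")

# each example is then expected to look like {"file": "clips/xxx.wav", "text": "the transcript"},
# matching whatever fields the manifest actually contains
print(train_dataset[0])
```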
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/2050/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/2050/timeline
null
completed
false
https://api.github.com/repos/huggingface/datasets/issues/2049
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2049/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2049/comments
https://api.github.com/repos/huggingface/datasets/issues/2049/events
https://github.com/huggingface/datasets/pull/2049
830,978,687
MDExOlB1bGxSZXF1ZXN0NTkyNDE2MzQ0
2,049
Fix text-classification tags
{ "login": "gchhablani", "id": 29076344, "node_id": "MDQ6VXNlcjI5MDc2MzQ0", "avatar_url": "https://avatars.githubusercontent.com/u/29076344?v=4", "gravatar_id": "", "url": "https://api.github.com/users/gchhablani", "html_url": "https://github.com/gchhablani", "followers_url": "https://api.github.com/users/gchhablani/followers", "following_url": "https://api.github.com/users/gchhablani/following{/other_user}", "gists_url": "https://api.github.com/users/gchhablani/gists{/gist_id}", "starred_url": "https://api.github.com/users/gchhablani/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/gchhablani/subscriptions", "organizations_url": "https://api.github.com/users/gchhablani/orgs", "repos_url": "https://api.github.com/users/gchhablani/repos", "events_url": "https://api.github.com/users/gchhablani/events{/privacy}", "received_events_url": "https://api.github.com/users/gchhablani/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "LGTM, thanks for fixing." ]
2021-03-13T19:51:42
2021-03-16T15:47:46
2021-03-16T15:47:46
CONTRIBUTOR
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/2049", "html_url": "https://github.com/huggingface/datasets/pull/2049", "diff_url": "https://github.com/huggingface/datasets/pull/2049.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2049.patch", "merged_at": "2021-03-16T15:47:46" }
There are different tags for text classification right now: `text-classification` and `text_classification`: ![image](https://user-images.githubusercontent.com/29076344/111042457-856bdf00-8463-11eb-93c9-50a30106a1a1.png). This PR fixes it.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/2049/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/2049/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/2048
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2048/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2048/comments
https://api.github.com/repos/huggingface/datasets/issues/2048/events
https://github.com/huggingface/datasets/issues/2048
830,953,431
MDU6SXNzdWU4MzA5NTM0MzE=
2,048
GitHub is not always available - probably need a backup
{ "login": "stas00", "id": 10676103, "node_id": "MDQ6VXNlcjEwNjc2MTAz", "avatar_url": "https://avatars.githubusercontent.com/u/10676103?v=4", "gravatar_id": "", "url": "https://api.github.com/users/stas00", "html_url": "https://github.com/stas00", "followers_url": "https://api.github.com/users/stas00/followers", "following_url": "https://api.github.com/users/stas00/following{/other_user}", "gists_url": "https://api.github.com/users/stas00/gists{/gist_id}", "starred_url": "https://api.github.com/users/stas00/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/stas00/subscriptions", "organizations_url": "https://api.github.com/users/stas00/orgs", "repos_url": "https://api.github.com/users/stas00/repos", "events_url": "https://api.github.com/users/stas00/events{/privacy}", "received_events_url": "https://api.github.com/users/stas00/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[]
2021-03-13T18:03:32
2022-04-01T15:27:10
2022-04-01T15:27:10
CONTRIBUTOR
null
null
null
Yesterday morning github wasn't working:

```
:/tmp$ wget https://raw.githubusercontent.com/huggingface/datasets/1.4.1/metrics/sacrebleu/sacrebleu.py
--2021-03-12 18:35:59--  https://raw.githubusercontent.com/huggingface/datasets/1.4.1/metrics/sacrebleu/sacrebleu.py
Resolving raw.githubusercontent.com (raw.githubusercontent.com)... 185.199.108.133, 185.199.111.133, 185.199.109.133, ...
Connecting to raw.githubusercontent.com (raw.githubusercontent.com)|185.199.108.133|:443... connected.
HTTP request sent, awaiting response... 500 Internal Server Error
2021-03-12 18:36:11 ERROR 500: Internal Server Error.
```

Suggestion: have a failover system and replicate the data on another system and reach there if gh isn't reachable? Perhaps gh can be a master and the replica a slave - so there is only one true source.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/2048/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/2048/timeline
null
completed
false
https://api.github.com/repos/huggingface/datasets/issues/2047
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2047/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2047/comments
https://api.github.com/repos/huggingface/datasets/issues/2047/events
https://github.com/huggingface/datasets/pull/2047
830,626,430
MDExOlB1bGxSZXF1ZXN0NTkyMTI2NzQ3
2,047
Multilingual dIalogAct benchMark (miam)
{ "login": "eusip", "id": 1551356, "node_id": "MDQ6VXNlcjE1NTEzNTY=", "avatar_url": "https://avatars.githubusercontent.com/u/1551356?v=4", "gravatar_id": "", "url": "https://api.github.com/users/eusip", "html_url": "https://github.com/eusip", "followers_url": "https://api.github.com/users/eusip/followers", "following_url": "https://api.github.com/users/eusip/following{/other_user}", "gists_url": "https://api.github.com/users/eusip/gists{/gist_id}", "starred_url": "https://api.github.com/users/eusip/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/eusip/subscriptions", "organizations_url": "https://api.github.com/users/eusip/orgs", "repos_url": "https://api.github.com/users/eusip/repos", "events_url": "https://api.github.com/users/eusip/events{/privacy}", "received_events_url": "https://api.github.com/users/eusip/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Hello. All aforementioned changes have been made. I've also re-run black on miam.py. :-)", "I will run isort again. Hopefully it resolves the current check_code_quality test failure.", "Once the review period is over, feel free to open a PR to add all the missing information ;)", "Hi! I will follow up right ...
2021-03-12T23:02:55
2021-03-23T10:36:34
2021-03-19T10:47:13
CONTRIBUTOR
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/2047", "html_url": "https://github.com/huggingface/datasets/pull/2047", "diff_url": "https://github.com/huggingface/datasets/pull/2047.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2047.patch", "merged_at": "2021-03-19T10:47:13" }
My collaborators (@EmileChapuis, @PierreColombo) and I within the Affective Computing team at Telecom Paris would like to anonymously publish the miam dataset. It is associated with a publication currently under review. We will update the dataset with full citations once the review period is over.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/2047/reactions", "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/2047/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/2046
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2046/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2046/comments
https://api.github.com/repos/huggingface/datasets/issues/2046/events
https://github.com/huggingface/datasets/issues/2046
830,423,033
MDU6SXNzdWU4MzA0MjMwMzM=
2,046
add_faiss_index gets very slow when doing it iteratively
{ "login": "shamanez", "id": 16892570, "node_id": "MDQ6VXNlcjE2ODkyNTcw", "avatar_url": "https://avatars.githubusercontent.com/u/16892570?v=4", "gravatar_id": "", "url": "https://api.github.com/users/shamanez", "html_url": "https://github.com/shamanez", "followers_url": "https://api.github.com/users/shamanez/followers", "following_url": "https://api.github.com/users/shamanez/following{/other_user}", "gists_url": "https://api.github.com/users/shamanez/gists{/gist_id}", "starred_url": "https://api.github.com/users/shamanez/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/shamanez/subscriptions", "organizations_url": "https://api.github.com/users/shamanez/orgs", "repos_url": "https://api.github.com/users/shamanez/repos", "events_url": "https://api.github.com/users/shamanez/events{/privacy}", "received_events_url": "https://api.github.com/users/shamanez/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "I think faiss automatically sets the number of threads to use to build the index.\r\nCan you check how many CPU cores are being used when you build the index in `use_own_knowleldge_dataset` as compared to this script ? Are there other programs running (maybe for rank>0) ?", "Hi,\r\n I am running the add_faiss_in...
2021-03-12T20:27:18
2021-03-24T22:29:11
2021-03-24T22:29:11
NONE
null
null
null
As the code below suggests, I want to run add_faiss_index on every nth iteration of the training loop. I have 7.2 million documents. Usually, it takes 2.5 hours (if I run it as a separate process similar to the script given in rag/use_own_knowledge_dataset.py). Now, this usually takes 5 hrs. Is this normal? Any way to make this process faster? @lhoestq

```python
def training_step(self, batch, batch_idx) -> Dict:
    if (not batch_idx == 0) and (batch_idx % 5 == 0):
        print("******************************************************")
        ctx_encoder = self.trainer.model.module.module.model.rag.ctx_encoder
        model_copy = type(ctx_encoder)(self.config_dpr)  # get a new instance; this will be loaded on the CPU
        model_copy.load_state_dict(ctx_encoder.state_dict())  # copy weights and stuff

        list_of_gpus = ['cuda:2', 'cuda:3']
        c_dir = '/custom/cache/dir'

        kb_dataset = load_dataset("csv", data_files=[self.custom_config.csv_path], split="train", delimiter="\t", column_names=["title", "text"], cache_dir=c_dir)
        print(kb_dataset)

        n = len(list_of_gpus)  # number of dedicated GPUs
        kb_list = [kb_dataset.shard(n, i, contiguous=True) for i in range(n)]
        # kb_dataset.save_to_disk('/hpc/gsir059/MY-Test/RAY/transformers/examples/research_projects/rag/haha-dir')

        print(self.trainer.global_rank)
        dataset_shards = self.re_encode_kb(model_copy.to(device=list_of_gpus[self.trainer.global_rank]), kb_list[self.trainer.global_rank])
        output = [None for _ in list_of_gpus]
        # self.trainer.accelerator_connector.accelerator.barrier("embedding_process")
        dist.all_gather_object(output, dataset_shards)

        # creation and re-initialization of the new index
        if self.trainer.global_rank == 0:  # saving will be done in the main process
            combined_dataset = concatenate_datasets(output)
            passages_path = self.config.passages_path
            logger.info("saving the dataset with ")
            # combined_dataset.save_to_disk('/hpc/gsir059/MY-Test/RAY/transformers/examples/research_projects/rag/MY-Passage')
            combined_dataset.save_to_disk(passages_path)

            logger.info("Add faiss index to the dataset that consist of embeddings")
            embedding_dataset = combined_dataset
            index = faiss.IndexHNSWFlat(768, 128, faiss.METRIC_INNER_PRODUCT)
            embedding_dataset.add_faiss_index("embeddings", custom_index=index)
            embedding_dataset.get_index("embeddings").save(self.config.index_path)
```
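Following the first comment in this thread (FAISS picks its own number of threads when building the index), a hedged sketch for checking and pinning the thread count; the value 32 and the `dataset` variable below are placeholders, not from the issue:

```python
# Hedged sketch: inspect / limit the OpenMP threads FAISS uses while building the HNSW index.
import faiss

print(faiss.omp_get_max_threads())  # threads FAISS would use by default on this machine
faiss.omp_set_num_threads(32)       # placeholder value; match the cores actually free during training

index = faiss.IndexHNSWFlat(768, 128, faiss.METRIC_INNER_PRODUCT)
dataset.add_faiss_index("embeddings", custom_index=index)  # `dataset` stands in for the embeddings dataset above
```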
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/2046/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/2046/timeline
null
completed
false
https://api.github.com/repos/huggingface/datasets/issues/2045
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2045/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2045/comments
https://api.github.com/repos/huggingface/datasets/issues/2045/events
https://github.com/huggingface/datasets/pull/2045
830,351,527
MDExOlB1bGxSZXF1ZXN0NTkxODc2Mjcz
2,045
Preserve column ordering in Dataset.rename_column
{ "login": "mariosasko", "id": 47462742, "node_id": "MDQ6VXNlcjQ3NDYyNzQy", "avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mariosasko", "html_url": "https://github.com/mariosasko", "followers_url": "https://api.github.com/users/mariosasko/followers", "following_url": "https://api.github.com/users/mariosasko/following{/other_user}", "gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}", "starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions", "organizations_url": "https://api.github.com/users/mariosasko/orgs", "repos_url": "https://api.github.com/users/mariosasko/repos", "events_url": "https://api.github.com/users/mariosasko/events{/privacy}", "received_events_url": "https://api.github.com/users/mariosasko/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Not sure why CI isn't triggered.\r\n\r\n@lhoestq Can you please help me with this? ", "I don't know how to trigger it manually, but an empty commit should do the job" ]
2021-03-12T18:26:47
2021-03-16T14:48:05
2021-03-16T14:35:05
CONTRIBUTOR
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/2045", "html_url": "https://github.com/huggingface/datasets/pull/2045", "diff_url": "https://github.com/huggingface/datasets/pull/2045.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2045.patch", "merged_at": "2021-03-16T14:35:05" }
Currently `Dataset.rename_column` doesn't necessarily preserve the order of the columns:

```python
>>> from datasets import Dataset
>>> d = Dataset.from_dict({'sentences': ["s1", "s2"], 'label': [0, 1]})
>>> d
Dataset({
    features: ['sentences', 'label'],
    num_rows: 2
})
>>> d.rename_column('sentences', 'text')
Dataset({
    features: ['label', 'text'],
    num_rows: 2
})
```

This PR fixes this.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/2045/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/2045/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/2044
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2044/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2044/comments
https://api.github.com/repos/huggingface/datasets/issues/2044/events
https://github.com/huggingface/datasets/pull/2044
830,339,905
MDExOlB1bGxSZXF1ZXN0NTkxODY2NzM1
2,044
Add CBT dataset
{ "login": "gchhablani", "id": 29076344, "node_id": "MDQ6VXNlcjI5MDc2MzQ0", "avatar_url": "https://avatars.githubusercontent.com/u/29076344?v=4", "gravatar_id": "", "url": "https://api.github.com/users/gchhablani", "html_url": "https://github.com/gchhablani", "followers_url": "https://api.github.com/users/gchhablani/followers", "following_url": "https://api.github.com/users/gchhablani/following{/other_user}", "gists_url": "https://api.github.com/users/gchhablani/gists{/gist_id}", "starred_url": "https://api.github.com/users/gchhablani/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/gchhablani/subscriptions", "organizations_url": "https://api.github.com/users/gchhablani/orgs", "repos_url": "https://api.github.com/users/gchhablani/repos", "events_url": "https://api.github.com/users/gchhablani/events{/privacy}", "received_events_url": "https://api.github.com/users/gchhablani/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Hi @lhoestq,\r\n\r\nI have added changes from the review.", "Thanks for approving @lhoestq " ]
2021-03-12T18:04:19
2021-03-19T11:10:13
2021-03-19T10:29:15
CONTRIBUTOR
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/2044", "html_url": "https://github.com/huggingface/datasets/pull/2044", "diff_url": "https://github.com/huggingface/datasets/pull/2044.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2044.patch", "merged_at": "2021-03-19T10:29:15" }
This PR adds the [CBT Dataset](https://arxiv.org/abs/1511.02301). Note that I have also added the `raw` dataset as a separate configuration. I couldn't find a suitable "task" for it in the YAML tags. The dummy files have one example each, as the examples are slightly big. For the `raw` dataset, I just used the top few lines, because they are entire books and would take up a lot of space. Let me know in case of any issues.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/2044/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/2044/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/2043
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2043/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2043/comments
https://api.github.com/repos/huggingface/datasets/issues/2043/events
https://github.com/huggingface/datasets/pull/2043
830,279,098
MDExOlB1bGxSZXF1ZXN0NTkxODE1ODAz
2,043
Support pickle protocol for dataset splits defined as ReadInstruction
{ "login": "mariosasko", "id": 47462742, "node_id": "MDQ6VXNlcjQ3NDYyNzQy", "avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mariosasko", "html_url": "https://github.com/mariosasko", "followers_url": "https://api.github.com/users/mariosasko/followers", "following_url": "https://api.github.com/users/mariosasko/following{/other_user}", "gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}", "starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions", "organizations_url": "https://api.github.com/users/mariosasko/orgs", "repos_url": "https://api.github.com/users/mariosasko/repos", "events_url": "https://api.github.com/users/mariosasko/events{/privacy}", "received_events_url": "https://api.github.com/users/mariosasko/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "@lhoestq But we don't perform conversion to a `NamedSplit` if `_split` is not a string which means it **will** be a `ReadInstruction` after reloading.", "Yes right ! I read it wrong.\r\nPerfect then" ]
2021-03-12T16:35:11
2021-03-16T14:25:38
2021-03-16T14:05:05
CONTRIBUTOR
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/2043", "html_url": "https://github.com/huggingface/datasets/pull/2043", "diff_url": "https://github.com/huggingface/datasets/pull/2043.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2043.patch", "merged_at": "2021-03-16T14:05:05" }
Fixes #2022 (+ some style fixes)
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/2043/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/2043/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/2042
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2042/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2042/comments
https://api.github.com/repos/huggingface/datasets/issues/2042/events
https://github.com/huggingface/datasets/pull/2042
830,190,276
MDExOlB1bGxSZXF1ZXN0NTkxNzQwNzQ3
2,042
Fix arrow memory checks issue in tests
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[]
2021-03-12T14:49:52
2021-03-12T15:04:23
2021-03-12T15:04:22
MEMBER
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/2042", "html_url": "https://github.com/huggingface/datasets/pull/2042", "diff_url": "https://github.com/huggingface/datasets/pull/2042.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2042.patch", "merged_at": "2021-03-12T15:04:22" }
The tests currently fail on `master` because the arrow memory verification doesn't return the expected memory evolution when loading an arrow table in memory. From my experiments, the tests fail only when the full test suite is run. This made me think that maybe some arrow objects from other tests were not freeing their memory in time, causing the memory verifications to fail in other tests. Running the garbage collector before checking the arrow memory usage seems to fix this issue. I added a context manager `assert_arrow_memory_increases` that we can use in tests and that deals with the gc.
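A hedged sketch of what such a context manager could look like (the actual helper added in this PR may differ):

```python
# Hedged sketch, not necessarily the PR's implementation: run the garbage collector first so
# leftover Arrow objects from other tests don't skew the measurement, then compare allocations.
import gc
from contextlib import contextmanager

import pyarrow as pa

@contextmanager
def assert_arrow_memory_increases():
    gc.collect()
    previous_allocated_memory = pa.total_allocated_bytes()
    yield
    assert pa.total_allocated_bytes() > previous_allocated_memory
```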
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/2042/reactions", "total_count": 1, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 1, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/2042/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/2041
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2041/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2041/comments
https://api.github.com/repos/huggingface/datasets/issues/2041/events
https://github.com/huggingface/datasets/pull/2041
830,180,803
MDExOlB1bGxSZXF1ZXN0NTkxNzMyNzMw
2,041
Doc2dial update data_infos and data_loaders
{ "login": "songfeng", "id": 2062185, "node_id": "MDQ6VXNlcjIwNjIxODU=", "avatar_url": "https://avatars.githubusercontent.com/u/2062185?v=4", "gravatar_id": "", "url": "https://api.github.com/users/songfeng", "html_url": "https://github.com/songfeng", "followers_url": "https://api.github.com/users/songfeng/followers", "following_url": "https://api.github.com/users/songfeng/following{/other_user}", "gists_url": "https://api.github.com/users/songfeng/gists{/gist_id}", "starred_url": "https://api.github.com/users/songfeng/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/songfeng/subscriptions", "organizations_url": "https://api.github.com/users/songfeng/orgs", "repos_url": "https://api.github.com/users/songfeng/repos", "events_url": "https://api.github.com/users/songfeng/events{/privacy}", "received_events_url": "https://api.github.com/users/songfeng/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[]
2021-03-12T14:39:29
2021-03-16T11:09:20
2021-03-16T11:09:20
CONTRIBUTOR
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/2041", "html_url": "https://github.com/huggingface/datasets/pull/2041", "diff_url": "https://github.com/huggingface/datasets/pull/2041.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2041.patch", "merged_at": "2021-03-16T11:09:20" }
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/2041/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/2041/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/2040
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2040/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2040/comments
https://api.github.com/repos/huggingface/datasets/issues/2040/events
https://github.com/huggingface/datasets/issues/2040
830,169,387
MDU6SXNzdWU4MzAxNjkzODc=
2,040
ValueError: datasets' indices [1] come from memory and datasets' indices [0] come from disk
{ "login": "simonschoe", "id": 53626067, "node_id": "MDQ6VXNlcjUzNjI2MDY3", "avatar_url": "https://avatars.githubusercontent.com/u/53626067?v=4", "gravatar_id": "", "url": "https://api.github.com/users/simonschoe", "html_url": "https://github.com/simonschoe", "followers_url": "https://api.github.com/users/simonschoe/followers", "following_url": "https://api.github.com/users/simonschoe/following{/other_user}", "gists_url": "https://api.github.com/users/simonschoe/gists{/gist_id}", "starred_url": "https://api.github.com/users/simonschoe/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/simonschoe/subscriptions", "organizations_url": "https://api.github.com/users/simonschoe/orgs", "repos_url": "https://api.github.com/users/simonschoe/repos", "events_url": "https://api.github.com/users/simonschoe/events{/privacy}", "received_events_url": "https://api.github.com/users/simonschoe/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Hi ! To help me understand the situation, can you print the values of `load_from_disk(PATH_DATA_CLS_A)['train']._indices_data_files` and `load_from_disk(PATH_DATA_CLS_B)['train']._indices_data_files` ?\r\nThey should both have a path to an arrow file\r\n\r\nAlso note that from #2025 concatenating datasets will no...
2021-03-12T14:27:00
2021-08-04T18:00:43
2021-08-04T18:00:43
NONE
null
null
null
Hi there, I am trying to concat two datasets that I've previously saved to disk via `save_to_disk()` like so (note that both are saved as `DataDict`, `PATH_DATA_CLS_*` are `Path`-objects): ```python concatenate_datasets([load_from_disk(PATH_DATA_CLS_A)['train'], load_from_disk(PATH_DATA_CLS_B)['train']]) ``` Yielding the following error: ```python ValueError: Datasets' indices should ALL come from memory, or should ALL come from disk. However datasets' indices [1] come from memory and datasets' indices [0] come from disk. ``` Been trying to solve this for quite some time now. Both `DataDict` have been created by reading in a `csv` via `load_dataset` and subsequently processed using the various `datasets` methods (i.e. filter, map, remove col, rename col). Can't figure out tho... `load_from_disk(PATH_DATA_CLS_A)['train']` yields: ```python Dataset({ features: ['labels', 'text'], num_rows: 785 }) ``` `load_from_disk(PATH_DATA_CLS_B)['train']` yields: ```python Dataset({ features: ['labels', 'text'], num_rows: 3341 }) ```
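One possible workaround (an assumption, not a confirmed resolution of this issue) is to drop the indices mappings before concatenating, e.g. with `Dataset.flatten_indices()`, so both datasets are in the same state:

```python
from datasets import concatenate_datasets, load_from_disk

# PATH_DATA_CLS_A / PATH_DATA_CLS_B as defined above.
train_a = load_from_disk(PATH_DATA_CLS_A)["train"]
train_b = load_from_disk(PATH_DATA_CLS_B)["train"]

# flatten_indices() rewrites each dataset without its indices mapping, so the
# "indices from memory vs. indices from disk" mismatch no longer applies.
combined = concatenate_datasets([train_a.flatten_indices(), train_b.flatten_indices()])
```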
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/2040/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/2040/timeline
null
completed
false
https://api.github.com/repos/huggingface/datasets/issues/2039
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2039/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2039/comments
https://api.github.com/repos/huggingface/datasets/issues/2039/events
https://github.com/huggingface/datasets/pull/2039
830,047,652
MDExOlB1bGxSZXF1ZXN0NTkxNjE3ODY3
2,039
Doc2dial rc
{ "login": "songfeng", "id": 2062185, "node_id": "MDQ6VXNlcjIwNjIxODU=", "avatar_url": "https://avatars.githubusercontent.com/u/2062185?v=4", "gravatar_id": "", "url": "https://api.github.com/users/songfeng", "html_url": "https://github.com/songfeng", "followers_url": "https://api.github.com/users/songfeng/followers", "following_url": "https://api.github.com/users/songfeng/following{/other_user}", "gists_url": "https://api.github.com/users/songfeng/gists{/gist_id}", "starred_url": "https://api.github.com/users/songfeng/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/songfeng/subscriptions", "organizations_url": "https://api.github.com/users/songfeng/orgs", "repos_url": "https://api.github.com/users/songfeng/repos", "events_url": "https://api.github.com/users/songfeng/events{/privacy}", "received_events_url": "https://api.github.com/users/songfeng/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[]
2021-03-12T11:56:28
2021-03-12T15:32:36
2021-03-12T15:32:36
CONTRIBUTOR
null
true
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/2039", "html_url": "https://github.com/huggingface/datasets/pull/2039", "diff_url": "https://github.com/huggingface/datasets/pull/2039.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2039.patch", "merged_at": null }
Added fix to handle the last turn that is a user turn.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/2039/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/2039/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/2038
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2038/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2038/comments
https://api.github.com/repos/huggingface/datasets/issues/2038/events
https://github.com/huggingface/datasets/issues/2038
830,036,875
MDU6SXNzdWU4MzAwMzY4NzU=
2,038
outdated dataset_infos.json might fail verifications
{ "login": "songfeng", "id": 2062185, "node_id": "MDQ6VXNlcjIwNjIxODU=", "avatar_url": "https://avatars.githubusercontent.com/u/2062185?v=4", "gravatar_id": "", "url": "https://api.github.com/users/songfeng", "html_url": "https://github.com/songfeng", "followers_url": "https://api.github.com/users/songfeng/followers", "following_url": "https://api.github.com/users/songfeng/following{/other_user}", "gists_url": "https://api.github.com/users/songfeng/gists{/gist_id}", "starred_url": "https://api.github.com/users/songfeng/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/songfeng/subscriptions", "organizations_url": "https://api.github.com/users/songfeng/orgs", "repos_url": "https://api.github.com/users/songfeng/repos", "events_url": "https://api.github.com/users/songfeng/events{/privacy}", "received_events_url": "https://api.github.com/users/songfeng/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Hi ! Thanks for reporting.\r\n\r\nTo update the dataset_infos.json you can run:\r\n```\r\ndatasets-cli test ./datasets/doc2dial --all_configs --save_infos --ignore_verifications\r\n```", "Fixed by #2041, thanks again @songfeng !" ]
2021-03-12T11:41:54
2021-03-16T16:27:40
2021-03-16T16:27:40
CONTRIBUTOR
null
null
null
The [doc2dial/dataset_infos.json](https://github.com/huggingface/datasets/blob/master/datasets/doc2dial/dataset_infos.json) is outdated. It would cause the data loader to fail when verifying download checksums, etc. Could you please update this file, or point me to how to update it? Thank you.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/2038/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/2038/timeline
null
completed
false
https://api.github.com/repos/huggingface/datasets/issues/2037
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2037/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2037/comments
https://api.github.com/repos/huggingface/datasets/issues/2037/events
https://github.com/huggingface/datasets/pull/2037
829,919,685
MDExOlB1bGxSZXF1ZXN0NTkxNTA4MTQz
2,037
Fix: Wikipedia - save memory by replacing root.clear with elem.clear
{ "login": "miyamonz", "id": 6331508, "node_id": "MDQ6VXNlcjYzMzE1MDg=", "avatar_url": "https://avatars.githubusercontent.com/u/6331508?v=4", "gravatar_id": "", "url": "https://api.github.com/users/miyamonz", "html_url": "https://github.com/miyamonz", "followers_url": "https://api.github.com/users/miyamonz/followers", "following_url": "https://api.github.com/users/miyamonz/following{/other_user}", "gists_url": "https://api.github.com/users/miyamonz/gists{/gist_id}", "starred_url": "https://api.github.com/users/miyamonz/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/miyamonz/subscriptions", "organizations_url": "https://api.github.com/users/miyamonz/orgs", "repos_url": "https://api.github.com/users/miyamonz/repos", "events_url": "https://api.github.com/users/miyamonz/events{/privacy}", "received_events_url": "https://api.github.com/users/miyamonz/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "The error you got is minor and appeared in the last version of pyarrow, we'll fix the CI to take this into account. You can ignore it" ]
2021-03-12T09:22:00
2021-03-23T06:08:16
2021-03-16T11:01:22
CONTRIBUTOR
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/2037", "html_url": "https://github.com/huggingface/datasets/pull/2037", "diff_url": "https://github.com/huggingface/datasets/pull/2037.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2037.patch", "merged_at": "2021-03-16T11:01:22" }
see: https://github.com/huggingface/datasets/issues/2031 What I did: - replace root.clear with elem.clear - remove lines to get root element - $ make style - $ make test - some tests required some pip packages, I installed them. test results on origin/master and my branch are same. I think it's not related on my modification, isn't it? ``` ==================================================================================== short test summary info ==================================================================================== FAILED tests/test_arrow_writer.py::TypedSequenceTest::test_catch_overflow - AssertionError: OverflowError not raised ============================================================= 1 failed, 2332 passed, 5138 skipped, 70 warnings in 91.75s (0:01:31) ============================================================== make: *** [Makefile:19: test] Error 1 ``` Is there anything else I should do?
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/2037/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/2037/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/2036
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2036/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2036/comments
https://api.github.com/repos/huggingface/datasets/issues/2036/events
https://github.com/huggingface/datasets/issues/2036
829,909,258
MDU6SXNzdWU4Mjk5MDkyNTg=
2,036
Cannot load wikitext
{ "login": "Gpwner", "id": 19349207, "node_id": "MDQ6VXNlcjE5MzQ5MjA3", "avatar_url": "https://avatars.githubusercontent.com/u/19349207?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Gpwner", "html_url": "https://github.com/Gpwner", "followers_url": "https://api.github.com/users/Gpwner/followers", "following_url": "https://api.github.com/users/Gpwner/following{/other_user}", "gists_url": "https://api.github.com/users/Gpwner/gists{/gist_id}", "starred_url": "https://api.github.com/users/Gpwner/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Gpwner/subscriptions", "organizations_url": "https://api.github.com/users/Gpwner/orgs", "repos_url": "https://api.github.com/users/Gpwner/repos", "events_url": "https://api.github.com/users/Gpwner/events{/privacy}", "received_events_url": "https://api.github.com/users/Gpwner/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Solved!" ]
2021-03-12T09:09:39
2021-03-15T08:45:02
2021-03-15T08:44:44
NONE
null
null
null
when I execute these codes ``` >>> from datasets import load_dataset >>> test_dataset = load_dataset("wikitext") ``` I got an error,any help? ``` Traceback (most recent call last): File "<stdin>", line 1, in <module> File "/home/xxx/anaconda3/envs/transformer/lib/python3.7/site-packages/datasets/load.py", line 589, in load_dataset path, script_version=script_version, download_config=download_config, download_mode=download_mode, dataset=True File "/home/xxx/anaconda3/envs/transformer/lib/python3.7/site-packages/datasets/load.py", line 267, in prepare_module local_path = cached_path(file_path, download_config=download_config) File "/home/xxx/anaconda3/envs/transformer/lib/python3.7/site-packages/datasets/utils/file_utils.py", line 308, in cached_path use_etag=download_config.use_etag, File "/home/xxx/anaconda3/envs/transformer/lib/python3.7/site-packages/datasets/utils/file_utils.py", line 487, in get_from_cache raise ConnectionError("Couldn't reach {}".format(url)) ConnectionError: Couldn't reach https://raw.githubusercontent.com/huggingface/datasets/1.1.3/datasets/wikitext/wikitext.py ```
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/2036/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/2036/timeline
null
completed
false
https://api.github.com/repos/huggingface/datasets/issues/2035
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2035/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2035/comments
https://api.github.com/repos/huggingface/datasets/issues/2035/events
https://github.com/huggingface/datasets/issues/2035
829,475,544
MDU6SXNzdWU4Mjk0NzU1NDQ=
2,035
wiki40b/wikipedia for almost all languages cannot be downloaded
{ "login": "dorost1234", "id": 79165106, "node_id": "MDQ6VXNlcjc5MTY1MTA2", "avatar_url": "https://avatars.githubusercontent.com/u/79165106?v=4", "gravatar_id": "", "url": "https://api.github.com/users/dorost1234", "html_url": "https://github.com/dorost1234", "followers_url": "https://api.github.com/users/dorost1234/followers", "following_url": "https://api.github.com/users/dorost1234/following{/other_user}", "gists_url": "https://api.github.com/users/dorost1234/gists{/gist_id}", "starred_url": "https://api.github.com/users/dorost1234/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/dorost1234/subscriptions", "organizations_url": "https://api.github.com/users/dorost1234/orgs", "repos_url": "https://api.github.com/users/dorost1234/repos", "events_url": "https://api.github.com/users/dorost1234/events{/privacy}", "received_events_url": "https://api.github.com/users/dorost1234/received_events", "type": "User", "site_admin": false }
[]
open
false
null
[]
[ "Dear @lhoestq for wikipedia dataset I also get the same error, I greatly appreciate if you could have a look into this dataset as well. Below please find the command to reproduce the error:\r\n\r\n```\r\ndataset = load_dataset(\"wikipedia\", \"20200501.bg\")\r\nprint(dataset)\r\n```\r\n\r\nYour library is my only ...
2021-03-11T19:54:54
2021-03-16T14:53:37
null
NONE
null
null
null
Hi I am trying to download the data as below: ``` from datasets import load_dataset dataset = load_dataset("wiki40b", "cs") print(dataset) ``` I am getting this error. @lhoestq I will be grateful if you could assist me with this error. For almost all languages except english I am getting this error. I really need majority of languages in this dataset to be able to train my models for a deadline and your great scalable super well-written library is my only hope to train the models at scale while being low on resources. thank you very much. ``` (fast) dara@vgne046:/user/dara/dev/codes/seq2seq$ python test_data.py Downloading and preparing dataset wiki40b/cs (download: Unknown size, generated: Unknown size, post-processed: Unknown size, total: Unknown size) to temp/dara/cache_home_2/datasets/wiki40b/cs/1.1.0/063778187363ffb294896eaa010fc254b42b73e31117c71573a953b0b0bf010f... Traceback (most recent call last): File "test_data.py", line 3, in <module> dataset = load_dataset("wiki40b", "cs") File "/user/dara/libs/anaconda3/envs/fast/lib/python3.7/site-packages/datasets/load.py", line 746, in load_dataset use_auth_token=use_auth_token, File "/user/dara/libs/anaconda3/envs/fast/lib/python3.7/site-packages/datasets/builder.py", line 579, in download_and_prepare dl_manager=dl_manager, verify_infos=verify_infos, **download_and_prepare_kwargs File "/user/dara/libs/anaconda3/envs/fast/lib/python3.7/site-packages/datasets/builder.py", line 1105, in _download_and_prepare import apache_beam as beam File "/user/dara/libs/anaconda3/envs/fast/lib/python3.7/site-packages/apache_beam-2.28.0-py3.7-linux-x86_64.egg/apache_beam/__init__.py", line 96, in <module> from apache_beam import io File "/user/dara/libs/anaconda3/envs/fast/lib/python3.7/site-packages/apache_beam-2.28.0-py3.7-linux-x86_64.egg/apache_beam/io/__init__.py", line 23, in <module> from apache_beam.io.avroio import * File "/user/dara/libs/anaconda3/envs/fast/lib/python3.7/site-packages/apache_beam-2.28.0-py3.7-linux-x86_64.egg/apache_beam/io/avroio.py", line 55, in <module> import avro File "<frozen importlib._bootstrap>", line 983, in _find_and_load File "<frozen importlib._bootstrap>", line 967, in _find_and_load_unlocked File "<frozen importlib._bootstrap>", line 668, in _load_unlocked File "<frozen importlib._bootstrap>", line 638, in _load_backward_compatible File "/user/dara/libs/anaconda3/envs/fast/lib/python3.7/site-packages/avro_python3-1.9.2.1-py3.7.egg/avro/__init__.py", line 34, in <module> File "/user/dara/libs/anaconda3/envs/fast/lib/python3.7/site-packages/avro_python3-1.9.2.1-py3.7.egg/avro/__init__.py", line 30, in LoadResource NotADirectoryError: [Errno 20] Not a directory: '/user/dara/libs/anaconda3/envs/fast/lib/python3.7/site-packages/avro_python3-1.9.2.1-py3.7.egg/avro/VERSION.txt' ```
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/2035/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/2035/timeline
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/2034
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2034/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2034/comments
https://api.github.com/repos/huggingface/datasets/issues/2034/events
https://github.com/huggingface/datasets/pull/2034
829,381,388
MDExOlB1bGxSZXF1ZXN0NTkxMDU2MTEw
2,034
Fix typo
{ "login": "pcyin", "id": 3413464, "node_id": "MDQ6VXNlcjM0MTM0NjQ=", "avatar_url": "https://avatars.githubusercontent.com/u/3413464?v=4", "gravatar_id": "", "url": "https://api.github.com/users/pcyin", "html_url": "https://github.com/pcyin", "followers_url": "https://api.github.com/users/pcyin/followers", "following_url": "https://api.github.com/users/pcyin/following{/other_user}", "gists_url": "https://api.github.com/users/pcyin/gists{/gist_id}", "starred_url": "https://api.github.com/users/pcyin/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/pcyin/subscriptions", "organizations_url": "https://api.github.com/users/pcyin/orgs", "repos_url": "https://api.github.com/users/pcyin/repos", "events_url": "https://api.github.com/users/pcyin/events{/privacy}", "received_events_url": "https://api.github.com/users/pcyin/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[]
2021-03-11T17:46:13
2021-03-11T18:06:25
2021-03-11T18:06:25
CONTRIBUTOR
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/2034", "html_url": "https://github.com/huggingface/datasets/pull/2034", "diff_url": "https://github.com/huggingface/datasets/pull/2034.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2034.patch", "merged_at": "2021-03-11T18:06:25" }
Change `ENV_XDG_CACHE_HOME ` to `XDG_CACHE_HOME `
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/2034/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/2034/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/2033
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2033/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2033/comments
https://api.github.com/repos/huggingface/datasets/issues/2033/events
https://github.com/huggingface/datasets/pull/2033
829,295,339
MDExOlB1bGxSZXF1ZXN0NTkwOTgzMDAy
2,033
Raise an error for outdated sacrebleu versions
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[]
2021-03-11T16:08:00
2021-03-11T17:58:12
2021-03-11T17:58:12
MEMBER
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/2033", "html_url": "https://github.com/huggingface/datasets/pull/2033", "diff_url": "https://github.com/huggingface/datasets/pull/2033.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2033.patch", "merged_at": "2021-03-11T17:58:12" }
The `sacrebleu` metric seems to only work for sacrebleu>=1.4.12. For example, using sacrebleu==1.2.10, an error is raised (from metric/sacrebleu/sacrebleu.py): ```python def _compute( self, predictions, references, smooth_method="exp", smooth_value=None, force=False, lowercase=False, tokenize=scb.DEFAULT_TOKENIZER, use_effective_order=False, ): references_per_prediction = len(references[0]) if any(len(refs) != references_per_prediction for refs in references): raise ValueError("Sacrebleu requires the same number of references for each prediction") transformed_references = [[refs[i] for refs in references] for i in range(references_per_prediction)] > output = scb.corpus_bleu( sys_stream=predictions, ref_streams=transformed_references, smooth_method=smooth_method, smooth_value=smooth_value, force=force, lowercase=lowercase, tokenize=tokenize, use_effective_order=use_effective_order, ) E TypeError: corpus_bleu() got an unexpected keyword argument 'smooth_method' /mnt/cache/modules/datasets_modules/metrics/sacrebleu/b390045b3d1dd4abf6a95c4a2a11ee3bcc2b7620b076204d0ddc353fa649fd86/sacrebleu.py:114: TypeError ``` I improved the error message when users have an outdated version of sacrebleu. The new error message tells the user to update sacrebleu. cc @LysandreJik
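A hedged sketch of the kind of version guard this PR describes; the exact check and message in the PR may differ, and the names below are illustrative:

```python
from packaging import version

import sacrebleu as scb

# Illustrative guard: raise a clear error instead of a confusing TypeError
# when the installed sacrebleu is too old for the keyword arguments we pass.
if version.parse(scb.__version__) < version.parse("1.4.12"):
    raise ImportError(
        "To use the sacrebleu metric you need sacrebleu>=1.4.12. "
        'You can update it with: pip install "sacrebleu>=1.4.12"'
    )
```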
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/2033/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/2033/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/2032
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2032/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2032/comments
https://api.github.com/repos/huggingface/datasets/issues/2032/events
https://github.com/huggingface/datasets/issues/2032
829,250,912
MDU6SXNzdWU4MjkyNTA5MTI=
2,032
Use Arrow filtering instead of writing a new arrow file for Dataset.filter
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892871, "node_id": "MDU6TGFiZWwxOTM1ODkyODcx", "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement", "name": "enhancement", "color": "a2eeef", "default": true, "description": "New feature or request" } ]
open
false
{ "login": "theo-m", "id": 17948980, "node_id": "MDQ6VXNlcjE3OTQ4OTgw", "avatar_url": "https://avatars.githubusercontent.com/u/17948980?v=4", "gravatar_id": "", "url": "https://api.github.com/users/theo-m", "html_url": "https://github.com/theo-m", "followers_url": "https://api.github.com/users/theo-m/followers", "following_url": "https://api.github.com/users/theo-m/following{/other_user}", "gists_url": "https://api.github.com/users/theo-m/gists{/gist_id}", "starred_url": "https://api.github.com/users/theo-m/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/theo-m/subscriptions", "organizations_url": "https://api.github.com/users/theo-m/orgs", "repos_url": "https://api.github.com/users/theo-m/repos", "events_url": "https://api.github.com/users/theo-m/events{/privacy}", "received_events_url": "https://api.github.com/users/theo-m/received_events", "type": "User", "site_admin": false }
[ { "login": "theo-m", "id": 17948980, "node_id": "MDQ6VXNlcjE3OTQ4OTgw", "avatar_url": "https://avatars.githubusercontent.com/u/17948980?v=4", "gravatar_id": "", "url": "https://api.github.com/users/theo-m", "html_url": "https://github.com/theo-m", "followers_url": "https://api.github...
[]
2021-03-11T15:18:50
2021-03-11T17:20:57
null
MEMBER
null
null
null
Currently the filter method reads the dataset batch by batch to write a new, filtered, arrow file on disk. Therefore all the reading + writing can take some time. Using a mask directly on the arrow table doesn't do any read or write operation, so it's significantly quicker. I think there are two cases: - if the dataset doesn't have an indices mapping, then one can simply use the arrow filtering on the main arrow table `dataset._data.filter(...)` - if the dataset has an indices mapping, then the mask should be applied on the indices mapping table `dataset._indices.filter(...)` The indices mapping is used to map between the idx at `dataset[idx]` in `__getitem__` and the idx in the actual arrow table. The new filter method should therefore be faster, and allow users to pass either a filtering function (that returns a boolean given an example), or directly a mask. Feel free to discuss this idea in this thread :) One additional note: the refactor at #2025 would make all the pickle-related stuff work directly with the arrow filtering, so that we only need to change the Dataset.filter method without having to deal with pickle. cc @theo-m @gchhablani related issues: #1796 #1949
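A minimal sketch of the in-memory filtering this issue proposes, using plain pyarrow; the table below is just a stand-in for `dataset._data` or `dataset._indices`, not the library's actual code:

```python
import pyarrow as pa

# Stand-in for the dataset's underlying arrow table.
table = pa.table({"text": ["foo", "bar", "baz"], "label": [0, 1, 0]})

# A boolean mask, e.g. the result of applying a filtering function to every example.
mask = pa.array([label == 0 for label in table.column("label").to_pylist()])

# Arrow filtering happens in memory, without writing a new arrow file to disk.
filtered = table.filter(mask)
print(filtered.num_rows)  # 2
```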
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/2032/reactions", "total_count": 4, "+1": 4, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/2032/timeline
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/2031
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2031/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2031/comments
https://api.github.com/repos/huggingface/datasets/issues/2031/events
https://github.com/huggingface/datasets/issues/2031
829,122,778
MDU6SXNzdWU4MjkxMjI3Nzg=
2,031
wikipedia.py generator that extracts XML doesn't release memory
{ "login": "miyamonz", "id": 6331508, "node_id": "MDQ6VXNlcjYzMzE1MDg=", "avatar_url": "https://avatars.githubusercontent.com/u/6331508?v=4", "gravatar_id": "", "url": "https://api.github.com/users/miyamonz", "html_url": "https://github.com/miyamonz", "followers_url": "https://api.github.com/users/miyamonz/followers", "following_url": "https://api.github.com/users/miyamonz/following{/other_user}", "gists_url": "https://api.github.com/users/miyamonz/gists{/gist_id}", "starred_url": "https://api.github.com/users/miyamonz/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/miyamonz/subscriptions", "organizations_url": "https://api.github.com/users/miyamonz/orgs", "repos_url": "https://api.github.com/users/miyamonz/repos", "events_url": "https://api.github.com/users/miyamonz/events{/privacy}", "received_events_url": "https://api.github.com/users/miyamonz/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Hi @miyamonz \r\nThanks for investigating this issue, good job !\r\nIt would be awesome to integrate your fix in the library, could you open a pull request ?", "OK! I'll send it later." ]
2021-03-11T12:51:24
2021-03-22T08:33:52
2021-03-22T08:33:52
CONTRIBUTOR
null
null
null
I tried downloading Japanese Wikipedia, but it always failed, probably because it ran out of memory. I found that the generator function that extracts XML data in wikipedia.py doesn't release memory in the loop. https://github.com/huggingface/datasets/blob/13a5b7db992ad5cf77895e4c0f76595314390418/datasets/wikipedia/wikipedia.py#L464-L502 `root.clear()` is intended to clear memory, but it doesn't. https://github.com/huggingface/datasets/blob/13a5b7db992ad5cf77895e4c0f76595314390418/datasets/wikipedia/wikipedia.py#L490 https://github.com/huggingface/datasets/blob/13a5b7db992ad5cf77895e4c0f76595314390418/datasets/wikipedia/wikipedia.py#L494 I replaced them with `elem.clear()`, then it seems to work correctly. Here is the notebook to reproduce it. https://gist.github.com/miyamonz/dc06117302b6e85fa51cbf46dde6bb51#file-xtract_content-ipynb
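A simplified sketch of the streaming XML extraction pattern being discussed, using the standard-library `xml.etree.ElementTree.iterparse`; the real wikipedia.py generator handles more cases (redirects, namespace filtering, error handling), so this is only an illustration of why clearing the processed element keeps memory flat:

```python
import xml.etree.ElementTree as ElementTree


def extract_pages(xml_file):
    """Yield (title, raw_content) pairs while keeping memory usage flat."""
    for _event, elem in ElementTree.iterparse(xml_file, events=("end",)):
        if not elem.tag.endswith("page"):
            continue
        namespace = elem.tag[: -len("page")]
        title = elem.find(f"{namespace}title").text
        raw_content = elem.find(f"{namespace}revision/{namespace}text").text
        yield title, raw_content
        # Clearing the element we just processed (instead of the root) frees its
        # children, so the partially built tree doesn't keep growing.
        elem.clear()
```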
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/2031/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/2031/timeline
null
completed
false
https://api.github.com/repos/huggingface/datasets/issues/2030
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2030/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2030/comments
https://api.github.com/repos/huggingface/datasets/issues/2030/events
https://github.com/huggingface/datasets/pull/2030
829,110,803
MDExOlB1bGxSZXF1ZXN0NTkwODI4NzQ4
2,030
Implement Dataset from text
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "I am wondering why only one test of \"keep_in_memory=True\" fails, when there are many other tests that test the same and it happens only in pyarrow_1..." ]
2021-03-11T12:34:50
2021-03-18T13:29:29
2021-03-18T13:29:29
MEMBER
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/2030", "html_url": "https://github.com/huggingface/datasets/pull/2030", "diff_url": "https://github.com/huggingface/datasets/pull/2030.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2030.patch", "merged_at": "2021-03-18T13:29:29" }
Implement `Dataset.from_text`. Analogue to #1943, #1946.
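A hedged usage sketch of the new method; `my_corpus.txt` is a placeholder path, and the assumption that each line of the file becomes one example in a single `text` column mirrors the behavior of the existing generic text loader:

```python
from datasets import Dataset, load_dataset

# Placeholder file: each line of my_corpus.txt becomes one example.
dataset = Dataset.from_text("my_corpus.txt")
print(dataset.column_names)  # ['text']

# Equivalent call through the generic text loader that already existed:
dataset = load_dataset("text", data_files="my_corpus.txt", split="train")
```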
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/2030/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/2030/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/2029
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2029/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2029/comments
https://api.github.com/repos/huggingface/datasets/issues/2029/events
https://github.com/huggingface/datasets/issues/2029
829,097,290
MDU6SXNzdWU4MjkwOTcyOTA=
2,029
Loading a faiss index KeyError
{ "login": "nbroad1881", "id": 24982805, "node_id": "MDQ6VXNlcjI0OTgyODA1", "avatar_url": "https://avatars.githubusercontent.com/u/24982805?v=4", "gravatar_id": "", "url": "https://api.github.com/users/nbroad1881", "html_url": "https://github.com/nbroad1881", "followers_url": "https://api.github.com/users/nbroad1881/followers", "following_url": "https://api.github.com/users/nbroad1881/following{/other_user}", "gists_url": "https://api.github.com/users/nbroad1881/gists{/gist_id}", "starred_url": "https://api.github.com/users/nbroad1881/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/nbroad1881/subscriptions", "organizations_url": "https://api.github.com/users/nbroad1881/orgs", "repos_url": "https://api.github.com/users/nbroad1881/repos", "events_url": "https://api.github.com/users/nbroad1881/events{/privacy}", "received_events_url": "https://api.github.com/users/nbroad1881/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892861, "node_id": "MDU6TGFiZWwxOTM1ODkyODYx", "url": "https://api.github.com/repos/huggingface/datasets/labels/documentation", "name": "documentation", "color": "0075ca", "default": true, "description": "Improvements or additions to documentation" } ]
closed
false
null
[]
[ "In your code `dataset2` doesn't contain the \"embeddings\" column, since it is created from the pandas DataFrame with columns \"text\" and \"label\".\r\n\r\nTherefore when you call `dataset2[embeddings_name]`, you get a `KeyError`.\r\n\r\nIf you want the \"embeddings\" column back, you can create `dataset2` with\r...
2021-03-11T12:16:13
2021-03-12T00:21:09
2021-03-12T00:21:09
NONE
null
null
null
I've recently been testing out RAG and DPR embeddings, and I've run into an issue that is not apparent in the documentation. The basic steps are: 1. Create a dataset (dataset1) 2. Create an embeddings column using DPR 3. Add a faiss index to the dataset 4. Save faiss index to a file 5. Create a new dataset (dataset2) with the same text and label information as dataset1 6. Try to load the faiss index from file to dataset2 7. Get `KeyError: "Column embeddings not in the dataset"` I've made a colab notebook that should show exactly what I did. Please switch to GPU runtime; I didn't check on CPU. https://colab.research.google.com/drive/1X0S9ZuZ8k0ybcoei4w7so6dS_WrABmIx?usp=sharing Ubuntu Version VERSION="18.04.5 LTS (Bionic Beaver)" datasets==1.4.1 faiss==1.5.3 faiss-gpu==1.7.0 torch==1.8.0+cu101 transformers==4.3.3 NVIDIA-SMI 460.56 Driver Version: 460.32.03 CUDA Version: 11.2 Tesla K80 I was basically following the steps here: https://huggingface.co/docs/datasets/faiss_and_ea.html#adding-a-faiss-index I included the exact code from the documentation at the end of the notebook to show that they don't work either.
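For reference, a minimal sketch of the workflow in steps 1-6, with toy data standing in for the DPR embeddings and `my_index.faiss` as a placeholder file name (faiss must be installed); per the discussion below, the important detail is that the second dataset must still contain the column the index was built on:

```python
from datasets import Dataset

data = {
    "text": ["first passage", "second passage"],
    "label": [0, 1],
    # Toy 8-dim embeddings; in the issue these come from a DPR context encoder.
    "embeddings": [[0.1] * 8, [0.2] * 8],
}

dataset1 = Dataset.from_dict(data)
dataset1.add_faiss_index(column="embeddings")
dataset1.save_faiss_index("embeddings", "my_index.faiss")

# The KeyError in this issue comes from the second dataset missing the
# "embeddings" column, so keep that column when recreating it.
dataset2 = Dataset.from_dict(data)
dataset2.load_faiss_index("embeddings", "my_index.faiss")
```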
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/2029/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/2029/timeline
null
completed
false
https://api.github.com/repos/huggingface/datasets/issues/2028
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2028/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2028/comments
https://api.github.com/repos/huggingface/datasets/issues/2028/events
https://github.com/huggingface/datasets/pull/2028
828,721,393
MDExOlB1bGxSZXF1ZXN0NTkwNDk1NzEx
2,028
Adding PersiNLU reading-comprehension
{ "login": "danyaljj", "id": 2441454, "node_id": "MDQ6VXNlcjI0NDE0NTQ=", "avatar_url": "https://avatars.githubusercontent.com/u/2441454?v=4", "gravatar_id": "", "url": "https://api.github.com/users/danyaljj", "html_url": "https://github.com/danyaljj", "followers_url": "https://api.github.com/users/danyaljj/followers", "following_url": "https://api.github.com/users/danyaljj/following{/other_user}", "gists_url": "https://api.github.com/users/danyaljj/gists{/gist_id}", "starred_url": "https://api.github.com/users/danyaljj/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/danyaljj/subscriptions", "organizations_url": "https://api.github.com/users/danyaljj/orgs", "repos_url": "https://api.github.com/users/danyaljj/repos", "events_url": "https://api.github.com/users/danyaljj/events{/privacy}", "received_events_url": "https://api.github.com/users/danyaljj/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "@lhoestq I think I have addressed all your comments. ", "Thanks! @lhoestq Let me know if you want me to address anything to get this merged. ", "It's all good thanks ;)\r\nmerging" ]
2021-03-11T04:41:13
2021-03-15T09:39:57
2021-03-15T09:39:57
CONTRIBUTOR
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/2028", "html_url": "https://github.com/huggingface/datasets/pull/2028", "diff_url": "https://github.com/huggingface/datasets/pull/2028.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2028.patch", "merged_at": "2021-03-15T09:39:57" }
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/2028/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/2028/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/2027
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2027/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2027/comments
https://api.github.com/repos/huggingface/datasets/issues/2027/events
https://github.com/huggingface/datasets/pull/2027
828,490,444
MDExOlB1bGxSZXF1ZXN0NTkwMjkzNDA1
2,027
Update format columns in Dataset.rename_columns
{ "login": "mariosasko", "id": 47462742, "node_id": "MDQ6VXNlcjQ3NDYyNzQy", "avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mariosasko", "html_url": "https://github.com/mariosasko", "followers_url": "https://api.github.com/users/mariosasko/followers", "following_url": "https://api.github.com/users/mariosasko/following{/other_user}", "gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}", "starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions", "organizations_url": "https://api.github.com/users/mariosasko/orgs", "repos_url": "https://api.github.com/users/mariosasko/repos", "events_url": "https://api.github.com/users/mariosasko/events{/privacy}", "received_events_url": "https://api.github.com/users/mariosasko/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[]
2021-03-10T23:50:59
2021-03-11T14:38:40
2021-03-11T14:38:40
CONTRIBUTOR
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/2027", "html_url": "https://github.com/huggingface/datasets/pull/2027", "diff_url": "https://github.com/huggingface/datasets/pull/2027.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2027.patch", "merged_at": "2021-03-11T14:38:40" }
Fixes #2026
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/2027/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/2027/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/2026
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2026/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2026/comments
https://api.github.com/repos/huggingface/datasets/issues/2026/events
https://github.com/huggingface/datasets/issues/2026
828,194,467
MDU6SXNzdWU4MjgxOTQ0Njc=
2,026
KeyError on using map after renaming a column
{ "login": "gchhablani", "id": 29076344, "node_id": "MDQ6VXNlcjI5MDc2MzQ0", "avatar_url": "https://avatars.githubusercontent.com/u/29076344?v=4", "gravatar_id": "", "url": "https://api.github.com/users/gchhablani", "html_url": "https://github.com/gchhablani", "followers_url": "https://api.github.com/users/gchhablani/followers", "following_url": "https://api.github.com/users/gchhablani/following{/other_user}", "gists_url": "https://api.github.com/users/gchhablani/gists{/gist_id}", "starred_url": "https://api.github.com/users/gchhablani/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/gchhablani/subscriptions", "organizations_url": "https://api.github.com/users/gchhablani/orgs", "repos_url": "https://api.github.com/users/gchhablani/repos", "events_url": "https://api.github.com/users/gchhablani/events{/privacy}", "received_events_url": "https://api.github.com/users/gchhablani/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Hi,\r\n\r\nActually, the error occurs due to these two lines:\r\n```python\r\nraw_dataset.set_format('torch',columns=['img','label'])\r\nraw_dataset = raw_dataset.rename_column('img','image')\r\n```\r\n`Dataset.rename_column` doesn't update the `_format_columns` attribute, previously defined by `Dataset.set_format...
2021-03-10T18:54:17
2021-03-11T14:39:34
2021-03-11T14:38:40
CONTRIBUTOR
null
null
null
Hi, I'm trying to use `cifar10` dataset. I want to rename the `img` feature to `image` in order to make it consistent with `mnist`, which I'm also planning to use. By doing this, I was trying to avoid modifying `prepare_train_features` function. Here is what I try: ```python transform = Compose([ToPILImage(),ToTensor(),Normalize([0.0,0.0,0.0],[1.0,1.0,1.0])]) def prepare_features(examples): images = [] labels = [] print(examples) for example_idx, example in enumerate(examples["image"]): if transform is not None: images.append(transform(examples["image"][example_idx].permute(2,0,1))) else: images.append(examples["image"][example_idx].permute(2,0,1)) labels.append(examples["label"][example_idx]) output = {"label":labels, "image":images} return output raw_dataset = load_dataset('cifar10') raw_dataset.set_format('torch',columns=['img','label']) raw_dataset = raw_dataset.rename_column('img','image') features = datasets.Features({ "image": datasets.Array3D(shape=(3,32,32),dtype="float32"), "label": datasets.features.ClassLabel(names=[ "airplane", "automobile", "bird", "cat", "deer", "dog", "frog", "horse", "ship", "truck", ]), }) train_dataset = raw_dataset.map(prepare_features, features = features,batched=True, batch_size=10000) ``` The error: ```python --------------------------------------------------------------------------- KeyError Traceback (most recent call last) <ipython-input-54-bf29672c53ee> in <module>() 14 ]), 15 }) ---> 16 train_dataset = raw_dataset.map(prepare_features, features = features,batched=True, batch_size=10000) 2 frames /usr/local/lib/python3.7/dist-packages/datasets/arrow_dataset.py in map(self, function, with_indices, input_columns, batched, batch_size, drop_last_batch, remove_columns, keep_in_memory, load_from_cache_file, cache_file_name, writer_batch_size, features, disable_nullable, fn_kwargs, num_proc, suffix_template, new_fingerprint) 1287 test_inputs = self[:2] if batched else self[0] 1288 test_indices = [0, 1] if batched else 0 -> 1289 update_data = does_function_return_dict(test_inputs, test_indices) 1290 logger.info("Testing finished, running the mapping function on the dataset") 1291 /usr/local/lib/python3.7/dist-packages/datasets/arrow_dataset.py in does_function_return_dict(inputs, indices) 1258 fn_args = [inputs] if input_columns is None else [inputs[col] for col in input_columns] 1259 processed_inputs = ( -> 1260 function(*fn_args, indices, **fn_kwargs) if with_indices else function(*fn_args, **fn_kwargs) 1261 ) 1262 does_return_dict = isinstance(processed_inputs, Mapping) <ipython-input-52-b4dccbafb70d> in prepare_features(examples) 3 labels = [] 4 print(examples) ----> 5 for example_idx, example in enumerate(examples["image"]): 6 if transform is not None: 7 images.append(transform(examples["image"][example_idx].permute(2,0,1))) KeyError: 'image' ``` The print statement inside returns this: ```python {'label': tensor([6, 9])} ``` Apparently, both `img` and `image` do not exist after renaming. Note that this code works fine with `img` everywhere. Notebook: https://colab.research.google.com/drive/1SzESAlz3BnVYrgQeJ838vbMp1OsukiA2?usp=sharing
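Given the diagnosis in the comments (the formatted columns are not updated on rename), a hedged workaround sketch is to apply the format again after renaming, using the new column names:

```python
from datasets import load_dataset

raw_dataset = load_dataset("cifar10")
raw_dataset = raw_dataset.rename_column("img", "image")
# Re-apply the format with the new column names so the formatted columns and
# the renamed columns stay in sync.
raw_dataset.set_format("torch", columns=["image", "label"])
```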
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/2026/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/2026/timeline
null
completed
false
https://api.github.com/repos/huggingface/datasets/issues/2025
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2025/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2025/comments
https://api.github.com/repos/huggingface/datasets/issues/2025/events
https://github.com/huggingface/datasets/pull/2025
828,047,476
MDExOlB1bGxSZXF1ZXN0NTg5ODk2NjMz
2,025
[Refactor] Use in-memory/memory-mapped/concatenation tables in Dataset
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "There is one more thing I would love to see. Let's say we iteratively keep updating a data source that loaded from **load_dataset** or **load_from_disk**. Now we need to save it to the same location by overriding the previous file inorder to save the disk space. At the moment **save_to_disk** can not assign a name...
2021-03-10T17:00:47
2021-03-30T14:46:53
2021-03-26T16:51:59
MEMBER
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/2025", "html_url": "https://github.com/huggingface/datasets/pull/2025", "diff_url": "https://github.com/huggingface/datasets/pull/2025.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2025.patch", "merged_at": "2021-03-26T16:51:58" }
## Intro Currently there is one assumption that we need to change: a dataset is either fully in memory (dataset._data_files is empty), or the dataset can be reloaded from disk with memory mapping (using the dataset._data_files). This assumption is used for pickling for example: - in-memory dataset can just be pickled/unpickled in-memory - on-disk dataset can be unloaded to only keep the filepaths when pickling, and then reloaded from the disk when unpickling ## Issues Because of this assumption, we can't easily implement methods like `Dataset.add_item` to append more rows to a dataset, or `dataset.add_column` to add a column, since we can't mix data from memory and data from the disk. Moreover, `concatenate_datasets` doesn't work if the datasets to concatenate are not all from memory, or all form the disk. ## Solution provided in this PR I changed this by allowing several types of Table to be used in the Dataset object. More specifically I added three pyarrow Table wrappers: InMemoryTable, MemoryMappedTable and ConcatenationTable. The in-memory and memory-mapped tables implement the pickling behavior described above. The ConcatenationTable can be made from several tables (either in-memory or memory mapped) called "blocks". Pickling a ConcatenationTable simply pickles the underlying blocks. ## Implementation details The three tables classes mentioned above all inherit from a `Table` class defined in `table.py`, which is a wrapper of a pyarrow table. The `Table` wrapper implements all the attributes and methods of the underlying pyarrow table. Regarding the MemoryMappedTable: Reloading a pyarrow table from the disk makes you lose all the changes you may have applied (slice, rename_columns, drop, cast etc.). Therefore the MemoryMappedTable implements a "replay" mechanism to re-apply the changes when reloading the pyarrow table from the disk. ## Checklist - [x] add InMemoryTable - [x] add MemoryMappedTable - [x] add ConcatenationTable - [x] Update the ArrowReader to use these new tables depending on the `in_memory` parameter - [x] Update Dataset.from_xxx methods - [x] Update load_from_disk and save_to_disk - [x] Backward compatibility of load_from_disk - [x] Add tests for the new tables - [x] Update current tests - [ ] Documentation ---------- I would be happy to discuss the design of this PR :) Close #1877
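To make the "replay" idea concrete, here is a deliberately tiny sketch of a memory-mapped table wrapper that records operations and re-applies them after reloading from disk; the class and method names mirror the PR's description, but this is an illustration under those assumptions, not the PR's actual implementation:

```python
import pyarrow as pa


class MemoryMappedTable:
    """Toy wrapper: pickling keeps only the file path and a log of operations."""

    def __init__(self, table, path, replays=None):
        self.table = table
        self.path = path
        self.replays = replays or []  # list of (method_name, args, kwargs)

    @classmethod
    def from_file(cls, path, replays=None):
        # Memory-map the arrow file, then re-apply ("replay") the recorded operations.
        table = pa.ipc.open_stream(pa.memory_map(path)).read_all()
        for name, args, kwargs in replays or []:
            table = getattr(table, name)(*args, **kwargs)
        return cls(table, path, replays or [])

    def slice(self, offset=0, length=None):
        # Example of a tracked operation: the result remembers how it was produced.
        replay = ("slice", (offset, length), {})
        return MemoryMappedTable(self.table.slice(offset, length), self.path, self.replays + [replay])

    def __reduce__(self):
        # No table data is pickled, only what is needed to rebuild it from disk.
        return MemoryMappedTable.from_file, (self.path, self.replays)
```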
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/2025/reactions", "total_count": 3, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 3, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/2025/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/2024
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2024/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2024/comments
https://api.github.com/repos/huggingface/datasets/issues/2024/events
https://github.com/huggingface/datasets/pull/2024
827,842,962
MDExOlB1bGxSZXF1ZXN0NTg5NzEzNDAy
2,024
Remove print statement from mnist.py
{ "login": "gchhablani", "id": 29076344, "node_id": "MDQ6VXNlcjI5MDc2MzQ0", "avatar_url": "https://avatars.githubusercontent.com/u/29076344?v=4", "gravatar_id": "", "url": "https://api.github.com/users/gchhablani", "html_url": "https://github.com/gchhablani", "followers_url": "https://api.github.com/users/gchhablani/followers", "following_url": "https://api.github.com/users/gchhablani/following{/other_user}", "gists_url": "https://api.github.com/users/gchhablani/gists{/gist_id}", "starred_url": "https://api.github.com/users/gchhablani/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/gchhablani/subscriptions", "organizations_url": "https://api.github.com/users/gchhablani/orgs", "repos_url": "https://api.github.com/users/gchhablani/repos", "events_url": "https://api.github.com/users/gchhablani/events{/privacy}", "received_events_url": "https://api.github.com/users/gchhablani/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Thanks for noticing !\r\n#2020 fixed this earlier today though ^^'\r\n\r\nClosing this one" ]
2021-03-10T14:39:58
2021-03-11T18:03:52
2021-03-11T18:03:51
CONTRIBUTOR
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/2024", "html_url": "https://github.com/huggingface/datasets/pull/2024", "diff_url": "https://github.com/huggingface/datasets/pull/2024.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2024.patch", "merged_at": null }
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/2024/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/2024/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/2023
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2023/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2023/comments
https://api.github.com/repos/huggingface/datasets/issues/2023/events
https://github.com/huggingface/datasets/pull/2023
827,819,608
MDExOlB1bGxSZXF1ZXN0NTg5NjkyNDU2
2,023
Add Romanian to XQuAD
{ "login": "M-Salti", "id": 9285264, "node_id": "MDQ6VXNlcjkyODUyNjQ=", "avatar_url": "https://avatars.githubusercontent.com/u/9285264?v=4", "gravatar_id": "", "url": "https://api.github.com/users/M-Salti", "html_url": "https://github.com/M-Salti", "followers_url": "https://api.github.com/users/M-Salti/followers", "following_url": "https://api.github.com/users/M-Salti/following{/other_user}", "gists_url": "https://api.github.com/users/M-Salti/gists{/gist_id}", "starred_url": "https://api.github.com/users/M-Salti/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/M-Salti/subscriptions", "organizations_url": "https://api.github.com/users/M-Salti/orgs", "repos_url": "https://api.github.com/users/M-Salti/repos", "events_url": "https://api.github.com/users/M-Salti/events{/privacy}", "received_events_url": "https://api.github.com/users/M-Salti/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Hi ! Thanks for updating XQUAD :)\r\n\r\nThe slow test is failing though since there's no dummy data nor metadata in dataset_infos.json for the romanian configuration.\r\n\r\nCould you please generate the dummy data with\r\n```\r\ndatasets-cli dummy_data ./datasets/xquad --auto_generate --json_field data\r\n```\r\...
2021-03-10T14:24:32
2021-03-15T10:08:17
2021-03-15T10:08:17
CONTRIBUTOR
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/2023", "html_url": "https://github.com/huggingface/datasets/pull/2023", "diff_url": "https://github.com/huggingface/datasets/pull/2023.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2023.patch", "merged_at": "2021-03-15T10:08:17" }
On Jan 18, XQuAD was updated with a new Romanian validation file ([xquad commit link](https://github.com/deepmind/xquad/commit/60cac411649156efb6aab9dd4c9cde787a2c0345))
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/2023/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/2023/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/2022
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2022/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2022/comments
https://api.github.com/repos/huggingface/datasets/issues/2022/events
https://github.com/huggingface/datasets/issues/2022
827,435,033
MDU6SXNzdWU4Mjc0MzUwMzM=
2,022
ValueError when rename_column on splitted dataset
{ "login": "simonschoe", "id": 53626067, "node_id": "MDQ6VXNlcjUzNjI2MDY3", "avatar_url": "https://avatars.githubusercontent.com/u/53626067?v=4", "gravatar_id": "", "url": "https://api.github.com/users/simonschoe", "html_url": "https://github.com/simonschoe", "followers_url": "https://api.github.com/users/simonschoe/followers", "following_url": "https://api.github.com/users/simonschoe/following{/other_user}", "gists_url": "https://api.github.com/users/simonschoe/gists{/gist_id}", "starred_url": "https://api.github.com/users/simonschoe/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/simonschoe/subscriptions", "organizations_url": "https://api.github.com/users/simonschoe/orgs", "repos_url": "https://api.github.com/users/simonschoe/repos", "events_url": "https://api.github.com/users/simonschoe/events{/privacy}", "received_events_url": "https://api.github.com/users/simonschoe/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Hi,\r\n\r\nThis is a bug so thanks for reporting it. `Dataset.__setstate__` is the problem, which is called when `Dataset.rename_column` tries to copy the dataset with `copy.deepcopy(self)`. This only happens if the `split` arg in `load_dataset` was defined as `ReadInstruction`.\r\n\r\nTo overcome this issue, use...
2021-03-10T09:40:38
2021-03-16T14:06:08
2021-03-16T14:05:05
NONE
null
null
null
Hi there, I am loading a `.tsv` file via `load_dataset` and subsequently splitting the rows into training and test sets via the `ReadInstruction` API like so: ```python split = { 'train': ReadInstruction('train', to=90, unit='%'), 'test': ReadInstruction('train', from_=-10, unit='%') } dataset = load_dataset( path='csv', # use 'text' loading script to load from local txt-files delimiter='\t', # xxx data_files=text_files, # list of paths to local text files split=split, # xxx ) dataset ``` Part of output: ```python DatasetDict({ train: Dataset({ features: ['sentence', 'sentiment'], num_rows: 900 }) test: Dataset({ features: ['sentence', 'sentiment'], num_rows: 100 }) }) ``` Afterwards I'd like to rename the 'sentence' column to 'text' in order to be compatible with my modeling pipeline. If I run the following code, however, I experience a `ValueError`: ```python dataset['train'].rename_column('sentence', 'text') ``` ```python /usr/local/lib/python3.7/dist-packages/datasets/splits.py in __init__(self, name) 353 for split_name in split_names_from_instruction: 354 if not re.match(_split_re, split_name): --> 355 raise ValueError(f"Split name should match '{_split_re}'' but got '{split_name}'.") 356 357 def __str__(self): ValueError: Split name should match '^\w+(\.\w+)*$'' but got 'ReadInstruction('. ``` In particular, this behavior does not arise if I use the deprecated `rename_column_` method. Any idea what causes the error? I would assume it is something in the way I defined the split. Thanks in advance! :)
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/2022/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/2022/timeline
null
completed
false
https://api.github.com/repos/huggingface/datasets/issues/2021
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2021/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2021/comments
https://api.github.com/repos/huggingface/datasets/issues/2021/events
https://github.com/huggingface/datasets/issues/2021
826,988,016
MDU6SXNzdWU4MjY5ODgwMTY=
2,021
Interactively doing save_to_disk and load_from_disk corrupts the datasets object?
{ "login": "shamanez", "id": 16892570, "node_id": "MDQ6VXNlcjE2ODkyNTcw", "avatar_url": "https://avatars.githubusercontent.com/u/16892570?v=4", "gravatar_id": "", "url": "https://api.github.com/users/shamanez", "html_url": "https://github.com/shamanez", "followers_url": "https://api.github.com/users/shamanez/followers", "following_url": "https://api.github.com/users/shamanez/following{/other_user}", "gists_url": "https://api.github.com/users/shamanez/gists{/gist_id}", "starred_url": "https://api.github.com/users/shamanez/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/shamanez/subscriptions", "organizations_url": "https://api.github.com/users/shamanez/orgs", "repos_url": "https://api.github.com/users/shamanez/repos", "events_url": "https://api.github.com/users/shamanez/events{/privacy}", "received_events_url": "https://api.github.com/users/shamanez/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Hi,\r\n\r\nCan you give us a minimal reproducible example? This [part](https://huggingface.co/docs/datasets/master/processing.html#controling-the-cache-behavior) of the docs explains how to control caching." ]
2021-03-10T02:48:34
2021-03-13T10:07:41
2021-03-13T10:07:41
NONE
null
null
null
The dataset_info.json file saved after using save_to_disk gets corrupted as follows. ![image](https://user-images.githubusercontent.com/16892570/110568474-ed969880-81b7-11eb-832f-2e5129656016.png) Is there a way to disable the cache that will save to /tmp/huggingface/datasets ? I have a feeling there is a serious issue with caching.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/2021/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/2021/timeline
null
completed
false
https://api.github.com/repos/huggingface/datasets/issues/2020
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2020/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2020/comments
https://api.github.com/repos/huggingface/datasets/issues/2020/events
https://github.com/huggingface/datasets/pull/2020
826,961,126
MDExOlB1bGxSZXF1ZXN0NTg4OTE3MjYx
2,020
Remove unnecessary docstart check in conll-like datasets
{ "login": "mariosasko", "id": 47462742, "node_id": "MDQ6VXNlcjQ3NDYyNzQy", "avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mariosasko", "html_url": "https://github.com/mariosasko", "followers_url": "https://api.github.com/users/mariosasko/followers", "following_url": "https://api.github.com/users/mariosasko/following{/other_user}", "gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}", "starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions", "organizations_url": "https://api.github.com/users/mariosasko/orgs", "repos_url": "https://api.github.com/users/mariosasko/repos", "events_url": "https://api.github.com/users/mariosasko/events{/privacy}", "received_events_url": "https://api.github.com/users/mariosasko/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[]
2021-03-10T02:20:16
2021-03-11T13:33:37
2021-03-11T13:33:37
CONTRIBUTOR
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/2020", "html_url": "https://github.com/huggingface/datasets/pull/2020", "diff_url": "https://github.com/huggingface/datasets/pull/2020.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2020.patch", "merged_at": "2021-03-11T13:33:37" }
Related to this PR: #1998 Additionally, this PR adds the docstart note to the conll2002 dataset card ([link](https://raw.githubusercontent.com/teropa/nlp/master/resources/corpora/conll2002/ned.train) to the raw data with `DOCSTART` lines).
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/2020/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/2020/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/2019
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2019/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2019/comments
https://api.github.com/repos/huggingface/datasets/issues/2019/events
https://github.com/huggingface/datasets/pull/2019
826,625,706
MDExOlB1bGxSZXF1ZXN0NTg4NjEyODgy
2,019
Replace print with logging in dataset scripts
{ "login": "mariosasko", "id": 47462742, "node_id": "MDQ6VXNlcjQ3NDYyNzQy", "avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mariosasko", "html_url": "https://github.com/mariosasko", "followers_url": "https://api.github.com/users/mariosasko/followers", "following_url": "https://api.github.com/users/mariosasko/following{/other_user}", "gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}", "starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions", "organizations_url": "https://api.github.com/users/mariosasko/orgs", "repos_url": "https://api.github.com/users/mariosasko/repos", "events_url": "https://api.github.com/users/mariosasko/events{/privacy}", "received_events_url": "https://api.github.com/users/mariosasko/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "@lhoestq Maybe a script or even a test in `test_dataset_common.py` that verifies that a dataset script meets some set of quality standards (print calls and todos from the dataset script template are not present, etc.) could be added?", "Yes definitely !" ]
2021-03-09T20:59:34
2021-03-12T10:09:01
2021-03-11T16:14:19
CONTRIBUTOR
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/2019", "html_url": "https://github.com/huggingface/datasets/pull/2019", "diff_url": "https://github.com/huggingface/datasets/pull/2019.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2019.patch", "merged_at": "2021-03-11T16:14:18" }
Replaces `print(...)` in the dataset scripts with the library logger.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/2019/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/2019/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/2018
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2018/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2018/comments
https://api.github.com/repos/huggingface/datasets/issues/2018/events
https://github.com/huggingface/datasets/pull/2018
826,473,764
MDExOlB1bGxSZXF1ZXN0NTg4NDc0NTQz
2,018
Md gender card update
{ "login": "mcmillanmajora", "id": 26722925, "node_id": "MDQ6VXNlcjI2NzIyOTI1", "avatar_url": "https://avatars.githubusercontent.com/u/26722925?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mcmillanmajora", "html_url": "https://github.com/mcmillanmajora", "followers_url": "https://api.github.com/users/mcmillanmajora/followers", "following_url": "https://api.github.com/users/mcmillanmajora/following{/other_user}", "gists_url": "https://api.github.com/users/mcmillanmajora/gists{/gist_id}", "starred_url": "https://api.github.com/users/mcmillanmajora/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mcmillanmajora/subscriptions", "organizations_url": "https://api.github.com/users/mcmillanmajora/orgs", "repos_url": "https://api.github.com/users/mcmillanmajora/repos", "events_url": "https://api.github.com/users/mcmillanmajora/events{/privacy}", "received_events_url": "https://api.github.com/users/mcmillanmajora/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Link to the card: https://github.com/mcmillanmajora/datasets/blob/md-gender-card/datasets/md_gender_bias/README.md", "dataset card* @sgugger :p ", "Ahah that's what I wanted to say @lhoestq, thanks for fixing. Not used to review the Datasets side ;-)" ]
2021-03-09T18:57:20
2021-03-12T17:31:00
2021-03-12T17:31:00
CONTRIBUTOR
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/2018", "html_url": "https://github.com/huggingface/datasets/pull/2018", "diff_url": "https://github.com/huggingface/datasets/pull/2018.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2018.patch", "merged_at": "2021-03-12T17:31:00" }
I updated the descriptions of the datasets as they appear in the HF repo and the descriptions of the source datasets according to what I could find from the paper and the references. I'm still a little unclear about some of the fields of the different configs, and there was little info on the word list and name list. I'll contact the authors to see if they have any additional information or suggested changes.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/2018/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/2018/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/2017
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2017/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2017/comments
https://api.github.com/repos/huggingface/datasets/issues/2017/events
https://github.com/huggingface/datasets/pull/2017
826,428,578
MDExOlB1bGxSZXF1ZXN0NTg4NDMyNDc2
2,017
Add TF-based Features to handle different modes of data
{ "login": "gchhablani", "id": 29076344, "node_id": "MDQ6VXNlcjI5MDc2MzQ0", "avatar_url": "https://avatars.githubusercontent.com/u/29076344?v=4", "gravatar_id": "", "url": "https://api.github.com/users/gchhablani", "html_url": "https://github.com/gchhablani", "followers_url": "https://api.github.com/users/gchhablani/followers", "following_url": "https://api.github.com/users/gchhablani/following{/other_user}", "gists_url": "https://api.github.com/users/gchhablani/gists{/gist_id}", "starred_url": "https://api.github.com/users/gchhablani/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/gchhablani/subscriptions", "organizations_url": "https://api.github.com/users/gchhablani/orgs", "repos_url": "https://api.github.com/users/gchhablani/repos", "events_url": "https://api.github.com/users/gchhablani/events{/privacy}", "received_events_url": "https://api.github.com/users/gchhablani/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[]
2021-03-09T18:29:52
2021-03-17T12:32:08
2021-03-17T12:32:07
CONTRIBUTOR
null
true
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/2017", "html_url": "https://github.com/huggingface/datasets/pull/2017", "diff_url": "https://github.com/huggingface/datasets/pull/2017.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2017.patch", "merged_at": null }
Hi, I am creating this draft PR to work on adding features similar to [TF datasets](https://github.com/tensorflow/datasets/tree/master/tensorflow_datasets/core/features). I'll be starting with the `Tensor` and `FeatureConnector` classes, and building upon them to add other features as well. This is a work in progress.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/2017/reactions", "total_count": 1, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 1, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/2017/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/2016
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2016/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2016/comments
https://api.github.com/repos/huggingface/datasets/issues/2016/events
https://github.com/huggingface/datasets/pull/2016
825,965,493
MDExOlB1bGxSZXF1ZXN0NTg4MDA5NjEz
2,016
Not all languages have 2 digit codes.
{ "login": "asiddhant", "id": 13891775, "node_id": "MDQ6VXNlcjEzODkxNzc1", "avatar_url": "https://avatars.githubusercontent.com/u/13891775?v=4", "gravatar_id": "", "url": "https://api.github.com/users/asiddhant", "html_url": "https://github.com/asiddhant", "followers_url": "https://api.github.com/users/asiddhant/followers", "following_url": "https://api.github.com/users/asiddhant/following{/other_user}", "gists_url": "https://api.github.com/users/asiddhant/gists{/gist_id}", "starred_url": "https://api.github.com/users/asiddhant/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/asiddhant/subscriptions", "organizations_url": "https://api.github.com/users/asiddhant/orgs", "repos_url": "https://api.github.com/users/asiddhant/repos", "events_url": "https://api.github.com/users/asiddhant/events{/privacy}", "received_events_url": "https://api.github.com/users/asiddhant/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[]
2021-03-09T13:53:39
2021-03-11T18:01:03
2021-03-11T18:01:03
CONTRIBUTOR
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/2016", "html_url": "https://github.com/huggingface/datasets/pull/2016", "diff_url": "https://github.com/huggingface/datasets/pull/2016.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2016.patch", "merged_at": "2021-03-11T18:01:03" }
.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/2016/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/2016/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/2015
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2015/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2015/comments
https://api.github.com/repos/huggingface/datasets/issues/2015/events
https://github.com/huggingface/datasets/pull/2015
825,942,108
MDExOlB1bGxSZXF1ZXN0NTg3OTg4NTQ0
2,015
Fix ipython function creation in tests
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[]
2021-03-09T13:36:59
2021-03-09T14:06:04
2021-03-09T14:06:03
MEMBER
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/2015", "html_url": "https://github.com/huggingface/datasets/pull/2015", "diff_url": "https://github.com/huggingface/datasets/pull/2015.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2015.patch", "merged_at": "2021-03-09T14:06:03" }
The test at `tests/test_caching.py::RecurseDumpTest::test_dump_ipython_function` was failing in python 3.8 because the ipython function was not properly created. Fix #2010
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/2015/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/2015/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/2014
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2014/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2014/comments
https://api.github.com/repos/huggingface/datasets/issues/2014/events
https://github.com/huggingface/datasets/pull/2014
825,916,531
MDExOlB1bGxSZXF1ZXN0NTg3OTY1NDg3
2,014
more explicit method parameters
{ "login": "theo-m", "id": 17948980, "node_id": "MDQ6VXNlcjE3OTQ4OTgw", "avatar_url": "https://avatars.githubusercontent.com/u/17948980?v=4", "gravatar_id": "", "url": "https://api.github.com/users/theo-m", "html_url": "https://github.com/theo-m", "followers_url": "https://api.github.com/users/theo-m/followers", "following_url": "https://api.github.com/users/theo-m/following{/other_user}", "gists_url": "https://api.github.com/users/theo-m/gists{/gist_id}", "starred_url": "https://api.github.com/users/theo-m/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/theo-m/subscriptions", "organizations_url": "https://api.github.com/users/theo-m/orgs", "repos_url": "https://api.github.com/users/theo-m/repos", "events_url": "https://api.github.com/users/theo-m/events{/privacy}", "received_events_url": "https://api.github.com/users/theo-m/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[]
2021-03-09T13:18:29
2021-03-10T10:08:37
2021-03-10T10:08:36
CONTRIBUTOR
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/2014", "html_url": "https://github.com/huggingface/datasets/pull/2014", "diff_url": "https://github.com/huggingface/datasets/pull/2014.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2014.patch", "merged_at": "2021-03-10T10:08:36" }
re: #2009 - I'm not super convinced this is better, and while I usually fight against kwargs, here it seems to me that it better conveys the relationship to the `_split_generator` method.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/2014/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/2014/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/2013
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2013/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2013/comments
https://api.github.com/repos/huggingface/datasets/issues/2013/events
https://github.com/huggingface/datasets/pull/2013
825,694,305
MDExOlB1bGxSZXF1ZXN0NTg3NzYzMTgx
2,013
Add Cryptonite dataset
{ "login": "theo-m", "id": 17948980, "node_id": "MDQ6VXNlcjE3OTQ4OTgw", "avatar_url": "https://avatars.githubusercontent.com/u/17948980?v=4", "gravatar_id": "", "url": "https://api.github.com/users/theo-m", "html_url": "https://github.com/theo-m", "followers_url": "https://api.github.com/users/theo-m/followers", "following_url": "https://api.github.com/users/theo-m/following{/other_user}", "gists_url": "https://api.github.com/users/theo-m/gists{/gist_id}", "starred_url": "https://api.github.com/users/theo-m/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/theo-m/subscriptions", "organizations_url": "https://api.github.com/users/theo-m/orgs", "repos_url": "https://api.github.com/users/theo-m/repos", "events_url": "https://api.github.com/users/theo-m/events{/privacy}", "received_events_url": "https://api.github.com/users/theo-m/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[]
2021-03-09T10:32:11
2021-03-09T19:27:07
2021-03-09T19:27:06
CONTRIBUTOR
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/2013", "html_url": "https://github.com/huggingface/datasets/pull/2013", "diff_url": "https://github.com/huggingface/datasets/pull/2013.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2013.patch", "merged_at": "2021-03-09T19:27:06" }
cc @aviaefrat who's the original author of the dataset & paper, see https://github.com/aviaefrat/cryptonite
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/2013/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/2013/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/2012
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2012/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2012/comments
https://api.github.com/repos/huggingface/datasets/issues/2012/events
https://github.com/huggingface/datasets/issues/2012
825,634,064
MDU6SXNzdWU4MjU2MzQwNjQ=
2,012
No upstream branch
{ "login": "theo-m", "id": 17948980, "node_id": "MDQ6VXNlcjE3OTQ4OTgw", "avatar_url": "https://avatars.githubusercontent.com/u/17948980?v=4", "gravatar_id": "", "url": "https://api.github.com/users/theo-m", "html_url": "https://github.com/theo-m", "followers_url": "https://api.github.com/users/theo-m/followers", "following_url": "https://api.github.com/users/theo-m/following{/other_user}", "gists_url": "https://api.github.com/users/theo-m/gists{/gist_id}", "starred_url": "https://api.github.com/users/theo-m/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/theo-m/subscriptions", "organizations_url": "https://api.github.com/users/theo-m/orgs", "repos_url": "https://api.github.com/users/theo-m/repos", "events_url": "https://api.github.com/users/theo-m/events{/privacy}", "received_events_url": "https://api.github.com/users/theo-m/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892861, "node_id": "MDU6TGFiZWwxOTM1ODkyODYx", "url": "https://api.github.com/repos/huggingface/datasets/labels/documentation", "name": "documentation", "color": "0075ca", "default": true, "description": "Improvements or additions to documentation" } ]
closed
false
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "site_admin": false }
[ { "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.git...
[ "What's the issue exactly ?\r\n\r\nGiven an `upstream` remote repository with url `https://github.com/huggingface/datasets.git`, you can totally rebase from `upstream/master`.\r\n\r\nIt's mentioned at the beginning how to add the `upstream` remote repository\r\n\r\nhttps://github.com/huggingface/datasets/blob/987df...
2021-03-09T09:48:55
2021-03-09T11:33:31
2021-03-09T11:33:31
CONTRIBUTOR
null
null
null
Feels like the documentation on adding a new dataset is outdated? https://github.com/huggingface/datasets/blob/987df6b4e9e20fc0c92bc9df48137d170756fd7b/ADD_NEW_DATASET.md#L49-L54 There is no upstream branch on remote.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/2012/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/2012/timeline
null
completed
false
https://api.github.com/repos/huggingface/datasets/issues/2011
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2011/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2011/comments
https://api.github.com/repos/huggingface/datasets/issues/2011/events
https://github.com/huggingface/datasets/pull/2011
825,621,952
MDExOlB1bGxSZXF1ZXN0NTg3Njk4MTAx
2,011
Add RoSent Dataset
{ "login": "gchhablani", "id": 29076344, "node_id": "MDQ6VXNlcjI5MDc2MzQ0", "avatar_url": "https://avatars.githubusercontent.com/u/29076344?v=4", "gravatar_id": "", "url": "https://api.github.com/users/gchhablani", "html_url": "https://github.com/gchhablani", "followers_url": "https://api.github.com/users/gchhablani/followers", "following_url": "https://api.github.com/users/gchhablani/following{/other_user}", "gists_url": "https://api.github.com/users/gchhablani/gists{/gist_id}", "starred_url": "https://api.github.com/users/gchhablani/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/gchhablani/subscriptions", "organizations_url": "https://api.github.com/users/gchhablani/orgs", "repos_url": "https://api.github.com/users/gchhablani/repos", "events_url": "https://api.github.com/users/gchhablani/events{/privacy}", "received_events_url": "https://api.github.com/users/gchhablani/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[]
2021-03-09T09:40:08
2021-03-11T18:00:52
2021-03-11T18:00:52
CONTRIBUTOR
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/2011", "html_url": "https://github.com/huggingface/datasets/pull/2011", "diff_url": "https://github.com/huggingface/datasets/pull/2011.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2011.patch", "merged_at": "2021-03-11T18:00:52" }
This PR adds a Romanian sentiment analysis dataset. This PR also closes pending PR #1529. I had to add an `original_id` feature because the dataset files have repeated IDs. I can remove them if needed. I have also added `id` which is unique. Let me know in case of any issues.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/2011/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/2011/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/2010
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2010/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2010/comments
https://api.github.com/repos/huggingface/datasets/issues/2010/events
https://github.com/huggingface/datasets/issues/2010
825,567,635
MDU6SXNzdWU4MjU1Njc2MzU=
2,010
Local testing fails
{ "login": "theo-m", "id": 17948980, "node_id": "MDQ6VXNlcjE3OTQ4OTgw", "avatar_url": "https://avatars.githubusercontent.com/u/17948980?v=4", "gravatar_id": "", "url": "https://api.github.com/users/theo-m", "html_url": "https://github.com/theo-m", "followers_url": "https://api.github.com/users/theo-m/followers", "following_url": "https://api.github.com/users/theo-m/following{/other_user}", "gists_url": "https://api.github.com/users/theo-m/gists{/gist_id}", "starred_url": "https://api.github.com/users/theo-m/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/theo-m/subscriptions", "organizations_url": "https://api.github.com/users/theo-m/orgs", "repos_url": "https://api.github.com/users/theo-m/repos", "events_url": "https://api.github.com/users/theo-m/events{/privacy}", "received_events_url": "https://api.github.com/users/theo-m/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892857, "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
closed
false
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "site_admin": false }
[ { "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.git...
[ "I'm not able to reproduce on my side.\r\nCan you provide the full stacktrace please ?\r\nWhat version of `python` and `dill` do you have ? Which OS are you using ?", "```\r\nco_filename = '<ipython-input-2-e0383a102aae>', returned_obj = [0]\r\n ...
2021-03-09T09:01:38
2021-03-09T14:06:03
2021-03-09T14:06:03
CONTRIBUTOR
null
null
null
I'm following the CI setup as described in https://github.com/huggingface/datasets/blob/8eee4fa9e133fe873a7993ba746d32ca2b687551/.circleci/config.yml#L16-L19 in a new conda environment, at commit https://github.com/huggingface/datasets/commit/4de6dbf84e93dad97e1000120d6628c88954e5d4 and getting ``` FAILED tests/test_caching.py::RecurseDumpTest::test_dump_ipython_function - TypeError: an integer is required (got type bytes) 1 failed, 2321 passed, 5109 skipped, 10 warnings in 124.32s (0:02:04) ``` Seems like a discrepancy with CI, perhaps a lib version that's not controlled? Tried with `pyarrow=={1.0.0,0.17.1,2.0.0}`
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/2010/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/2010/timeline
null
completed
false
https://api.github.com/repos/huggingface/datasets/issues/2009
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2009/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2009/comments
https://api.github.com/repos/huggingface/datasets/issues/2009/events
https://github.com/huggingface/datasets/issues/2009
825,541,366
MDU6SXNzdWU4MjU1NDEzNjY=
2,009
Ambiguous documentation
{ "login": "theo-m", "id": 17948980, "node_id": "MDQ6VXNlcjE3OTQ4OTgw", "avatar_url": "https://avatars.githubusercontent.com/u/17948980?v=4", "gravatar_id": "", "url": "https://api.github.com/users/theo-m", "html_url": "https://github.com/theo-m", "followers_url": "https://api.github.com/users/theo-m/followers", "following_url": "https://api.github.com/users/theo-m/following{/other_user}", "gists_url": "https://api.github.com/users/theo-m/gists{/gist_id}", "starred_url": "https://api.github.com/users/theo-m/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/theo-m/subscriptions", "organizations_url": "https://api.github.com/users/theo-m/orgs", "repos_url": "https://api.github.com/users/theo-m/repos", "events_url": "https://api.github.com/users/theo-m/events{/privacy}", "received_events_url": "https://api.github.com/users/theo-m/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892861, "node_id": "MDU6TGFiZWwxOTM1ODkyODYx", "url": "https://api.github.com/repos/huggingface/datasets/labels/documentation", "name": "documentation", "color": "0075ca", "default": true, "description": "Improvements or additions to documentation" } ]
closed
false
{ "login": "theo-m", "id": 17948980, "node_id": "MDQ6VXNlcjE3OTQ4OTgw", "avatar_url": "https://avatars.githubusercontent.com/u/17948980?v=4", "gravatar_id": "", "url": "https://api.github.com/users/theo-m", "html_url": "https://github.com/theo-m", "followers_url": "https://api.github.com/users/theo-m/followers", "following_url": "https://api.github.com/users/theo-m/following{/other_user}", "gists_url": "https://api.github.com/users/theo-m/gists{/gist_id}", "starred_url": "https://api.github.com/users/theo-m/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/theo-m/subscriptions", "organizations_url": "https://api.github.com/users/theo-m/orgs", "repos_url": "https://api.github.com/users/theo-m/repos", "events_url": "https://api.github.com/users/theo-m/events{/privacy}", "received_events_url": "https://api.github.com/users/theo-m/received_events", "type": "User", "site_admin": false }
[ { "login": "theo-m", "id": 17948980, "node_id": "MDQ6VXNlcjE3OTQ4OTgw", "avatar_url": "https://avatars.githubusercontent.com/u/17948980?v=4", "gravatar_id": "", "url": "https://api.github.com/users/theo-m", "html_url": "https://github.com/theo-m", "followers_url": "https://api.github...
[ "Hi @theo-m !\r\n\r\nA few lines above this line, you'll find that the `_split_generators` method returns a list of `SplitGenerator`s objects:\r\n\r\n```python\r\ndatasets.SplitGenerator(\r\n name=datasets.Split.VALIDATION,\r\n # These kwargs will be passed to _generate_examples\r\n gen_kwargs={\r\n ...
2021-03-09T08:42:11
2021-03-12T15:01:34
2021-03-12T15:01:34
CONTRIBUTOR
null
null
null
https://github.com/huggingface/datasets/blob/2ac9a0d24a091989f869af55f9f6411b37ff5188/templates/new_dataset_script.py#L156-L158 Looking at the template, I find this documentation line to be confusing: the method parameters don't include the `gen_kwargs`, so I'm unclear where they're coming from. I'm happy to push a PR with a clearer statement when I understand the meaning.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/2009/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/2009/timeline
null
completed
false
https://api.github.com/repos/huggingface/datasets/issues/2008
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2008/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2008/comments
https://api.github.com/repos/huggingface/datasets/issues/2008/events
https://github.com/huggingface/datasets/pull/2008
825,153,804
MDExOlB1bGxSZXF1ZXN0NTg3Mjc1Njk4
2,008
Fix various typos/grammar in the docs
{ "login": "mariosasko", "id": 47462742, "node_id": "MDQ6VXNlcjQ3NDYyNzQy", "avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mariosasko", "html_url": "https://github.com/mariosasko", "followers_url": "https://api.github.com/users/mariosasko/followers", "following_url": "https://api.github.com/users/mariosasko/following{/other_user}", "gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}", "starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions", "organizations_url": "https://api.github.com/users/mariosasko/orgs", "repos_url": "https://api.github.com/users/mariosasko/repos", "events_url": "https://api.github.com/users/mariosasko/events{/privacy}", "received_events_url": "https://api.github.com/users/mariosasko/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "What do yo think of the documentation btw ?\r\nWhat parts would you like to see improved ?", "I like how concise and straightforward the docs are.\r\n\r\nFew things that would further improve the docs IMO:\r\n* the usage example of `Dataset.formatted_as` in https://huggingface.co/docs/datasets/master/processing....
2021-03-09T01:39:28
2021-03-15T18:42:49
2021-03-09T10:21:32
CONTRIBUTOR
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/2008", "html_url": "https://github.com/huggingface/datasets/pull/2008", "diff_url": "https://github.com/huggingface/datasets/pull/2008.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2008.patch", "merged_at": "2021-03-09T10:21:32" }
This PR: * fixes various typos/grammar issues I came across while reading the docs * adds the "Install with conda" installation instructions Closes #1959
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/2008/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/2008/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/2007
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2007/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2007/comments
https://api.github.com/repos/huggingface/datasets/issues/2007/events
https://github.com/huggingface/datasets/issues/2007
824,518,158
MDU6SXNzdWU4MjQ1MTgxNTg=
2,007
How to not load huggingface datasets into memory
{ "login": "dorost1234", "id": 79165106, "node_id": "MDQ6VXNlcjc5MTY1MTA2", "avatar_url": "https://avatars.githubusercontent.com/u/79165106?v=4", "gravatar_id": "", "url": "https://api.github.com/users/dorost1234", "html_url": "https://github.com/dorost1234", "followers_url": "https://api.github.com/users/dorost1234/followers", "following_url": "https://api.github.com/users/dorost1234/following{/other_user}", "gists_url": "https://api.github.com/users/dorost1234/gists{/gist_id}", "starred_url": "https://api.github.com/users/dorost1234/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/dorost1234/subscriptions", "organizations_url": "https://api.github.com/users/dorost1234/orgs", "repos_url": "https://api.github.com/users/dorost1234/repos", "events_url": "https://api.github.com/users/dorost1234/events{/privacy}", "received_events_url": "https://api.github.com/users/dorost1234/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "So maybe a summary here: \r\nIf I could fit a large model with batch_size = X into memory, is there a way I could train this model for huge datasets with keeping setting the same? thanks ", "The `datastets` library doesn't load datasets into memory. Therefore you can load a dataset that is terabytes big without ...
2021-03-08T12:35:26
2021-08-04T18:02:25
2021-08-04T18:02:25
NONE
null
null
null
Hi, I am running this example from the transformers library, version 4.3.3: (Here is the full documentation https://github.com/huggingface/transformers/issues/8771, but the running command should work out of the box) USE_TF=0 deepspeed run_seq2seq.py --model_name_or_path google/mt5-base --dataset_name wmt16 --dataset_config_name ro-en --source_prefix "translate English to Romanian: " --task translation_en_to_ro --output_dir /test/test_large --do_train --do_eval --predict_with_generate --max_train_samples 500 --max_val_samples 500 --max_source_length 128 --max_target_length 128 --sortish_sampler --per_device_train_batch_size 8 --val_max_target_length 128 --deepspeed ds_config.json --num_train_epochs 1 --eval_steps 25000 --warmup_steps 500 --overwrite_output_dir (Here please find the script: https://github.com/huggingface/transformers/blob/master/examples/seq2seq/run_seq2seq.py) If max_train_samples is not passed in the above command, so that the full dataset is loaded, I get a memory issue on a GPU with 24 gigabytes of memory. I need to train a large-scale mt5 model on large-scale wikipedia datasets (multiple of them concatenated) or other datasets in multiple languages like OPUS. Could you help me avoid loading the full data into memory, so that the scripts do not depend on the data size? In the above example, I was hoping the script could work without relying on the dataset size, so I can still train the model without subsampling the training set. Thank you so much @lhoestq for your great help in advance
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/2007/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/2007/timeline
null
completed
false
https://api.github.com/repos/huggingface/datasets/issues/2006
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2006/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2006/comments
https://api.github.com/repos/huggingface/datasets/issues/2006/events
https://github.com/huggingface/datasets/pull/2006
824,457,794
MDExOlB1bGxSZXF1ZXN0NTg2Njg5Nzk2
2,006
Don't gitignore dvc.lock
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[]
2021-03-08T11:13:08
2021-03-08T11:28:35
2021-03-08T11:28:34
MEMBER
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/2006", "html_url": "https://github.com/huggingface/datasets/pull/2006", "diff_url": "https://github.com/huggingface/datasets/pull/2006.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2006.patch", "merged_at": "2021-03-08T11:28:34" }
The benchmark runs are [failing](https://github.com/huggingface/datasets/runs/2055534629?check_suite_focus=true) because of ``` ERROR: 'dvc.lock' is git-ignored. ``` I removed the dvc.lock file from the gitignore to fix that.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/2006/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/2006/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/2005
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2005/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2005/comments
https://api.github.com/repos/huggingface/datasets/issues/2005/events
https://github.com/huggingface/datasets/issues/2005
824,275,035
MDU6SXNzdWU4MjQyNzUwMzU=
2,005
Setting to torch format not working with torchvision and MNIST
{ "login": "gchhablani", "id": 29076344, "node_id": "MDQ6VXNlcjI5MDc2MzQ0", "avatar_url": "https://avatars.githubusercontent.com/u/29076344?v=4", "gravatar_id": "", "url": "https://api.github.com/users/gchhablani", "html_url": "https://github.com/gchhablani", "followers_url": "https://api.github.com/users/gchhablani/followers", "following_url": "https://api.github.com/users/gchhablani/following{/other_user}", "gists_url": "https://api.github.com/users/gchhablani/gists{/gist_id}", "starred_url": "https://api.github.com/users/gchhablani/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/gchhablani/subscriptions", "organizations_url": "https://api.github.com/users/gchhablani/orgs", "repos_url": "https://api.github.com/users/gchhablani/repos", "events_url": "https://api.github.com/users/gchhablani/events{/privacy}", "received_events_url": "https://api.github.com/users/gchhablani/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Adding to the previous information, I think `torch.utils.data.DataLoader` is doing some conversion. \r\nWhat I tried:\r\n```python\r\ntrain_dataset = load_dataset('mnist')\r\n```\r\nI don't use any `map` or `set_format` or any `transform`. I use this directly, and try to load batches using the `DataLoader` with ba...
2021-03-08T07:38:11
2021-03-09T17:58:13
2021-03-09T17:58:13
CONTRIBUTOR
null
null
null
Hi, I am trying to use `torchvision.transforms` to handle the transformation of the image data in the `mnist` dataset. Assume I have a `transform` variable which contains the `torchvision.transforms` object. A snippet of what I am trying to do: ```python def prepare_features(examples): images = [] labels = [] for example_idx, example in enumerate(examples["image"]): if transform is not None: images.append(transform( np.array(examples["image"][example_idx], dtype=np.uint8) )) else: images.append(torch.tensor(np.array(examples["image"][example_idx], dtype=np.uint8))) labels.append(torch.tensor(examples["label"][example_idx])) output = {"label":labels, "image":images} return output raw_dataset = load_dataset('mnist') train_dataset = raw_dataset.map(prepare_features, batched=True, batch_size=10000) train_dataset.set_format("torch",columns=["image","label"]) ``` After this, I check the type of the following: ```python print(type(train_dataset["train"]["label"])) print(type(train_dataset["train"]["image"][0])) ``` This leads to the following output: ```python <class 'torch.Tensor'> <class 'list'> ``` I use `torch.utils.data.DataLoader` for batches, and the type of `batch["train"]["image"]` is also `<class 'list'>`. I don't understand why only the `label` is converted to a torch tensor; why does the image not get converted? How can I fix this issue? Thanks, Gunjan EDIT: I just checked the shapes and the types; `batch[image]` is actually a list of lists of tensors. The shape is (1,28,2,28), where `batch_size` is 2. I don't understand why this is happening. Ideally it should be a tensor of shape (2,1,28,28). EDIT 2: Inside `prepare_features`, the shape of `images[0]` is `torch.Size([1,28,28])`, so the conversion is working. However, the output of the `map` is a list of lists of lists of lists.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/2005/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/2005/timeline
null
completed
false
https://api.github.com/repos/huggingface/datasets/issues/2004
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2004/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2004/comments
https://api.github.com/repos/huggingface/datasets/issues/2004/events
https://github.com/huggingface/datasets/pull/2004
824,080,760
MDExOlB1bGxSZXF1ZXN0NTg2MzcyODY1
2,004
LaRoSeDa
{ "login": "MihaelaGaman", "id": 6823177, "node_id": "MDQ6VXNlcjY4MjMxNzc=", "avatar_url": "https://avatars.githubusercontent.com/u/6823177?v=4", "gravatar_id": "", "url": "https://api.github.com/users/MihaelaGaman", "html_url": "https://github.com/MihaelaGaman", "followers_url": "https://api.github.com/users/MihaelaGaman/followers", "following_url": "https://api.github.com/users/MihaelaGaman/following{/other_user}", "gists_url": "https://api.github.com/users/MihaelaGaman/gists{/gist_id}", "starred_url": "https://api.github.com/users/MihaelaGaman/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/MihaelaGaman/subscriptions", "organizations_url": "https://api.github.com/users/MihaelaGaman/orgs", "repos_url": "https://api.github.com/users/MihaelaGaman/repos", "events_url": "https://api.github.com/users/MihaelaGaman/events{/privacy}", "received_events_url": "https://api.github.com/users/MihaelaGaman/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "@lhoestq all the changes requested are implemented. Thank you for your time and feedback :)" ]
2021-03-08T01:06:32
2021-03-17T10:43:20
2021-03-17T10:43:20
CONTRIBUTOR
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/2004", "html_url": "https://github.com/huggingface/datasets/pull/2004", "diff_url": "https://github.com/huggingface/datasets/pull/2004.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2004.patch", "merged_at": "2021-03-17T10:43:20" }
Add LaRoSeDa to huggingface datasets.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/2004/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/2004/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/2003
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2003/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2003/comments
https://api.github.com/repos/huggingface/datasets/issues/2003/events
https://github.com/huggingface/datasets/issues/2003
824,034,678
MDU6SXNzdWU4MjQwMzQ2Nzg=
2,003
Messages are being printed to the `stdout`
{ "login": "mahnerak", "id": 1367529, "node_id": "MDQ6VXNlcjEzNjc1Mjk=", "avatar_url": "https://avatars.githubusercontent.com/u/1367529?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mahnerak", "html_url": "https://github.com/mahnerak", "followers_url": "https://api.github.com/users/mahnerak/followers", "following_url": "https://api.github.com/users/mahnerak/following{/other_user}", "gists_url": "https://api.github.com/users/mahnerak/gists{/gist_id}", "starred_url": "https://api.github.com/users/mahnerak/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mahnerak/subscriptions", "organizations_url": "https://api.github.com/users/mahnerak/orgs", "repos_url": "https://api.github.com/users/mahnerak/repos", "events_url": "https://api.github.com/users/mahnerak/events{/privacy}", "received_events_url": "https://api.github.com/users/mahnerak/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "This is expected to show this message to the user via stdout.\r\nThis way the users see it directly and can cancel the downloading if they want to.\r\nCould you elaborate why it would be better to have it in stderr instead of stdout ?", "@lhoestq, sorry for the late reply\r\n\r\nI completely understand why you d...
2021-03-07T22:09:34
2023-07-25T16:35:21
2023-07-25T16:35:21
NONE
null
null
null
In this code segment, we can see that some messages are being printed to `stdout`:

https://github.com/huggingface/datasets/blob/7e60bb509b595e8edc60a87f32b2bacfc065d607/src/datasets/builder.py#L545-L554

According to the comment, this is done intentionally, but I don't really understand why we don't log it at a higher level or print it directly to `stderr`. In my opinion, this kind of message should never be printed to stdout. At the very least, some configuration flag should be provided to explicitly prevent the package from contaminating stdout.
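As a stopgap until this is changed upstream, one user-side workaround (an assumption about typical usage, not a documented `datasets` feature) is to redirect stdout around the offending call; the logger verbosity helpers do not silence plain `print` calls like the one linked above, so redirection is used instead. The dataset name below is only an example.

```python
# Workaround sketch: capture the print()-based progress messages instead of
# letting them reach stdout. Nothing here relies on datasets internals.
import contextlib
import io

from datasets import load_dataset

buffer = io.StringIO()
with contextlib.redirect_stdout(buffer):
    dataset = load_dataset("squad")

# The captured text remains available, e.g. to forward to a log file or stderr.
captured = buffer.getvalue()
```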
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/2003/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/2003/timeline
null
completed
false
https://api.github.com/repos/huggingface/datasets/issues/2002
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2002/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2002/comments
https://api.github.com/repos/huggingface/datasets/issues/2002/events
https://github.com/huggingface/datasets/pull/2002
823,955,744
MDExOlB1bGxSZXF1ZXN0NTg2MjgwNzE3
2,002
MOROCO
{ "login": "MihaelaGaman", "id": 6823177, "node_id": "MDQ6VXNlcjY4MjMxNzc=", "avatar_url": "https://avatars.githubusercontent.com/u/6823177?v=4", "gravatar_id": "", "url": "https://api.github.com/users/MihaelaGaman", "html_url": "https://github.com/MihaelaGaman", "followers_url": "https://api.github.com/users/MihaelaGaman/followers", "following_url": "https://api.github.com/users/MihaelaGaman/following{/other_user}", "gists_url": "https://api.github.com/users/MihaelaGaman/gists{/gist_id}", "starred_url": "https://api.github.com/users/MihaelaGaman/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/MihaelaGaman/subscriptions", "organizations_url": "https://api.github.com/users/MihaelaGaman/orgs", "repos_url": "https://api.github.com/users/MihaelaGaman/repos", "events_url": "https://api.github.com/users/MihaelaGaman/events{/privacy}", "received_events_url": "https://api.github.com/users/MihaelaGaman/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "@lhoestq Thank you for all the feedback. I've added the suggested changes in my last commit." ]
2021-03-07T16:22:17
2021-03-19T09:52:06
2021-03-19T09:52:06
CONTRIBUTOR
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/2002", "html_url": "https://github.com/huggingface/datasets/pull/2002", "diff_url": "https://github.com/huggingface/datasets/pull/2002.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2002.patch", "merged_at": "2021-03-19T09:52:06" }
Add MOROCO to huggingface datasets.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/2002/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/2002/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/2001
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2001/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2001/comments
https://api.github.com/repos/huggingface/datasets/issues/2001/events
https://github.com/huggingface/datasets/issues/2001
823,946,706
MDU6SXNzdWU4MjM5NDY3MDY=
2,001
Empty evidence document ("provenance") in KILT ELI5 dataset
{ "login": "donggyukimc", "id": 16605764, "node_id": "MDQ6VXNlcjE2NjA1NzY0", "avatar_url": "https://avatars.githubusercontent.com/u/16605764?v=4", "gravatar_id": "", "url": "https://api.github.com/users/donggyukimc", "html_url": "https://github.com/donggyukimc", "followers_url": "https://api.github.com/users/donggyukimc/followers", "following_url": "https://api.github.com/users/donggyukimc/following{/other_user}", "gists_url": "https://api.github.com/users/donggyukimc/gists{/gist_id}", "starred_url": "https://api.github.com/users/donggyukimc/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/donggyukimc/subscriptions", "organizations_url": "https://api.github.com/users/donggyukimc/orgs", "repos_url": "https://api.github.com/users/donggyukimc/repos", "events_url": "https://api.github.com/users/donggyukimc/events{/privacy}", "received_events_url": "https://api.github.com/users/donggyukimc/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Why did you close this issue? How did you end up finding the evidence documents? I'm running into a similar issue with other KILT tasks." ]
2021-03-07T15:41:35
2022-12-19T19:25:14
2021-03-17T05:51:01
NONE
null
null
null
In the original KILT benchmark (https://github.com/facebookresearch/KILT), every sample has its evidence document (i.e. a Wikipedia page id) for prediction.

For example, a sample in the ELI5 dataset has a format that includes provenance (= evidence document), like this:

`{"id": "1kiwfx", "input": "In Trading Places (1983, Akroyd/Murphy) how does the scheme at the end of the movie work? Why would buying a lot of OJ at a high price ruin the Duke Brothers?", "output": [{"answer": "I feel so old. People have been askinbg what happened at the end of this movie for what must be the last 15 years of my life. It never stops. Every year/month/fortnight, I see someone asking what happened, and someone explaining. Andf it will keep on happening, until I am 90yrs old, in a home, with nothing but the Internet and my bladder to keep me going. And there it will be: \"what happens at the end of Trading Places?\""}, {"provenance": [{"wikipedia_id": "242855", "title": "Futures contract", "section": "Section::::Abstract.", "start_paragraph_id": 1, "start_character": 14, "end_paragraph_id": 1, "end_character": 612, "bleu_score": 0.9232808519770748}]}], "meta": {"partial_evidence": [{"wikipedia_id": "520990", "title": "Trading Places", "section": "Section::::Plot.\n", "start_paragraph_id": 7, "end_paragraph_id": 7, "meta": {"evidence_span": ["On television, they learn that Clarence Beeks is transporting a secret USDA report on orange crop forecasts.", "On television, they learn that Clarence Beeks is transporting a secret USDA report on orange crop forecasts. Winthorpe and Valentine recall large payments made to Beeks by the Dukes and realize that the Dukes plan to obtain the report to corner the market on frozen orange juice.", "Winthorpe and Valentine recall large payments made to Beeks by the Dukes and realize that the Dukes plan to obtain the report to corner the market on frozen orange juice."]}}]}}`

However, the KILT ELI5 dataset from the huggingface datasets library only contains an empty provenance list:

`{'id': '1oy5tc', 'input': 'in football whats the point of wasting the first two plays with a rush - up the middle - not regular rush plays i get those', 'meta': {'left_context': '', 'mention': '', 'obj_surface': [], 'partial_evidence': [], 'right_context': '', 'sub_surface': [], 'subj_aliases': [], 'template_questions': []}, 'output': [{'answer': 'In most cases the O-Line is supposed to make a hole for the running back to go through. If you run too many plays to the outside/throws the defense will catch on.\n\nAlso, 2 5 yard plays gets you a new set of downs.', 'meta': {'score': 2}, 'provenance': []}, {'answer': "I you don't like those type of plays, watch CFL. We only get 3 downs so you can't afford to waste one. Lots more passing.", 'meta': {'score': 2}, 'provenance': []}]}`

Should I perform some other procedure to obtain the evidence documents?
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/2001/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/2001/timeline
null
completed
false
https://api.github.com/repos/huggingface/datasets/issues/2000
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2000/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2000/comments
https://api.github.com/repos/huggingface/datasets/issues/2000/events
https://github.com/huggingface/datasets/issues/2000
823,899,910
MDU6SXNzdWU4MjM4OTk5MTA=
2,000
Windows Permission Error (most recent version of datasets)
{ "login": "itsLuisa", "id": 73881148, "node_id": "MDQ6VXNlcjczODgxMTQ4", "avatar_url": "https://avatars.githubusercontent.com/u/73881148?v=4", "gravatar_id": "", "url": "https://api.github.com/users/itsLuisa", "html_url": "https://github.com/itsLuisa", "followers_url": "https://api.github.com/users/itsLuisa/followers", "following_url": "https://api.github.com/users/itsLuisa/following{/other_user}", "gists_url": "https://api.github.com/users/itsLuisa/gists{/gist_id}", "starred_url": "https://api.github.com/users/itsLuisa/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/itsLuisa/subscriptions", "organizations_url": "https://api.github.com/users/itsLuisa/orgs", "repos_url": "https://api.github.com/users/itsLuisa/repos", "events_url": "https://api.github.com/users/itsLuisa/events{/privacy}", "received_events_url": "https://api.github.com/users/itsLuisa/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Hi @itsLuisa !\r\n\r\nCould you give us more information about the error you're getting, please?\r\nA copy-paste of the Traceback would be nice to get a better understanding of what is wrong :) ", "Hello @SBrandeis , this is it:\r\n```\r\nTraceback (most recent call last):\r\n File \"C:\\Users\\Luisa\\AppData\\...
2021-03-07T11:55:28
2021-03-09T12:42:57
2021-03-09T12:42:57
NONE
null
null
null
Hi everyone,

Can anyone help me with why the dataset loading script below raises a Windows Permission Error? I stuck quite closely to https://github.com/huggingface/datasets/blob/master/datasets/conll2003/conll2003.py , except that I want to load the data from three local three-column TSV files (`id\ttokens\tpos_tags\n`). I am using the most recent version of datasets.

Thank you in advance!
Luisa

My script:

```python
import datasets
import csv

logger = datasets.logging.get_logger(__name__)


class SampleConfig(datasets.BuilderConfig):

    def __init__(self, **kwargs):
        super(SampleConfig, self).__init__(**kwargs)


class Sample(datasets.GeneratorBasedBuilder):
    BUILDER_CONFIGS = [
        SampleConfig(name="conll2003", version=datasets.Version("1.0.0"), description="Conll2003 dataset"),
    ]

    def _info(self):
        return datasets.DatasetInfo(
            description="Dataset with words and their POS-Tags",
            features=datasets.Features(
                {
                    "id": datasets.Value("string"),
                    "tokens": datasets.Sequence(datasets.Value("string")),
                    "pos_tags": datasets.Sequence(
                        datasets.features.ClassLabel(
                            names=[
                                "''", ",", "-LRB-", "-RRB-", ".", ":", "CC", "CD", "DT", "EX",
                                "FW", "HYPH", "IN", "JJ", "JJR", "JJS", "MD", "NN", "NNP", "NNPS",
                                "NNS", "PDT", "POS", "PRP", "PRP$", "RB", "RBR", "RBS", "RP", "TO",
                                "UH", "VB", "VBD", "VBG", "VBN", "VBP", "VBZ", "WDT", "WP", "WRB",
                                "``",
                            ]
                        )
                    ),
                }
            ),
            supervised_keys=None,
            homepage="https://catalog.ldc.upenn.edu/LDC2011T03",
            citation="Weischedel, Ralph, et al. OntoNotes Release 4.0 LDC2011T03. Web Download. Philadelphia: Linguistic Data Consortium, 2011.",
        )

    def _split_generators(self, dl_manager):
        loaded_files = dl_manager.download_and_extract(self.config.data_files)
        return [
            datasets.SplitGenerator(name=datasets.Split.TRAIN, gen_kwargs={"filepath": loaded_files["train"]}),
            datasets.SplitGenerator(name=datasets.Split.TEST, gen_kwargs={"filepath": loaded_files["test"]}),
            datasets.SplitGenerator(name=datasets.Split.VALIDATION, gen_kwargs={"filepath": loaded_files["val"]}),
        ]

    def _generate_examples(self, filepath):
        logger.info("generating examples from = %s", filepath)
        with open(filepath, encoding="cp1252") as f:
            data = csv.reader(f, delimiter="\t")
            ids = list()
            tokens = list()
            pos_tags = list()
            for id_, line in enumerate(data):
                # print(line)
                if len(line) == 1:
                    # separator line: the rows collected so far form one example
                    if tokens:
                        yield id_, {"id": ids, "tokens": tokens, "pos_tags": pos_tags}
                    ids = list()
                    tokens = list()
                    pos_tags = list()
                else:
                    ids.append(line[0])
                    tokens.append(line[1])
                    pos_tags.append(line[2])
            # last example
            yield id_, {"id": ids, "tokens": tokens, "pos_tags": pos_tags}


def main():
    dataset = datasets.load_dataset(
        "data_loading.py",
        data_files={
            "train": "train.tsv",
            "test": "test.tsv",
            "val": "val.tsv",
        },
    )
    # print(dataset)


if __name__ == "__main__":
    main()
```
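Not a direct answer to the PermissionError itself, but a hedged alternative worth noting: for plain three-column TSV files, the generic `csv` loader may be enough, with no custom loading script at all. The file paths and column names below are assumptions taken from the script above, and the exact keyword arguments accepted by the csv builder can differ between `datasets` versions.

```python
# Sketch of loading local headerless TSV files with the built-in csv loader.
from datasets import load_dataset

dataset = load_dataset(
    "csv",
    data_files={"train": "train.tsv", "test": "test.tsv", "val": "val.tsv"},
    delimiter="\t",
    column_names=["id", "tokens", "pos_tags"],
)
print(dataset["train"][0])
```

Note that this yields one row per token rather than one example per sentence, so the blank-line grouping done in the custom script would still need a post-processing step (for example with `map`).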
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/2000/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/2000/timeline
null
completed
false
https://api.github.com/repos/huggingface/datasets/issues/1999
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1999/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1999/comments
https://api.github.com/repos/huggingface/datasets/issues/1999/events
https://github.com/huggingface/datasets/pull/1999
823,753,591
MDExOlB1bGxSZXF1ZXN0NTg2MTM5ODMy
1,999
Add FashionMNIST dataset
{ "login": "gchhablani", "id": 29076344, "node_id": "MDQ6VXNlcjI5MDc2MzQ0", "avatar_url": "https://avatars.githubusercontent.com/u/29076344?v=4", "gravatar_id": "", "url": "https://api.github.com/users/gchhablani", "html_url": "https://github.com/gchhablani", "followers_url": "https://api.github.com/users/gchhablani/followers", "following_url": "https://api.github.com/users/gchhablani/following{/other_user}", "gists_url": "https://api.github.com/users/gchhablani/gists{/gist_id}", "starred_url": "https://api.github.com/users/gchhablani/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/gchhablani/subscriptions", "organizations_url": "https://api.github.com/users/gchhablani/orgs", "repos_url": "https://api.github.com/users/gchhablani/repos", "events_url": "https://api.github.com/users/gchhablani/events{/privacy}", "received_events_url": "https://api.github.com/users/gchhablani/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Hi @lhoestq,\r\n\r\nI have added the changes from the review." ]
2021-03-06T21:36:57
2021-03-09T09:52:11
2021-03-09T09:52:11
CONTRIBUTOR
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1999", "html_url": "https://github.com/huggingface/datasets/pull/1999", "diff_url": "https://github.com/huggingface/datasets/pull/1999.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1999.patch", "merged_at": "2021-03-09T09:52:11" }
This PR adds the [FashionMNIST](https://github.com/zalandoresearch/fashion-mnist) dataset.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/1999/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/1999/timeline
null
null
true