| Column | Type | Length / value range |
| --- | --- | --- |
| url | stringlengths | 58-61 |
| repository_url | stringclasses | 1 value |
| labels_url | stringlengths | 72-75 |
| comments_url | stringlengths | 67-70 |
| events_url | stringlengths | 65-68 |
| html_url | stringlengths | 48-51 |
| id | int64 | 600M-3.43B |
| node_id | stringlengths | 18-24 |
| number | int64 | 2-7.78k |
| title | stringlengths | 1-290 |
| user | dict | |
| labels | listlengths | 0-4 |
| state | stringclasses | 2 values |
| locked | bool | 1 class |
| assignee | dict | |
| assignees | listlengths | 0-4 |
| milestone | dict | |
| comments | listlengths | 0-30 |
| created_at | stringdate | 2020-04-14 18:18:51 to 2025-09-18 08:25:34 |
| updated_at | stringdate | 2020-04-29 09:23:05 to 2025-09-22 08:47:53 |
| closed_at | stringlengths | 20-20 |
| author_association | stringclasses | 4 values |
| type | null | |
| active_lock_reason | null | |
| draft | bool | 0 classes |
| pull_request | dict | |
| body | stringlengths | 0-228k |
| closed_by | dict | |
| reactions | dict | |
| timeline_url | stringlengths | 67-70 |
| performed_via_github_app | null | |
| state_reason | stringclasses | 4 values |
| sub_issues_summary | dict | |
| issue_dependencies_summary | dict | |
| is_pull_request | bool | 1 class |
https://api.github.com/repos/huggingface/datasets/issues/4502
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4502/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4502/comments
https://api.github.com/repos/huggingface/datasets/issues/4502/events
https://github.com/huggingface/datasets/issues/4502
1,272,353,700
I_kwDODunzps5L1pOk
4,502
Logic bug in arrow_writer?
{ "avatar_url": "https://avatars.githubusercontent.com/u/31893406?v=4", "events_url": "https://api.github.com/users/changjonathanc/events{/privacy}", "followers_url": "https://api.github.com/users/changjonathanc/followers", "following_url": "https://api.github.com/users/changjonathanc/following{/other_user}", "gists_url": "https://api.github.com/users/changjonathanc/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/changjonathanc", "id": 31893406, "login": "changjonathanc", "node_id": "MDQ6VXNlcjMxODkzNDA2", "organizations_url": "https://api.github.com/users/changjonathanc/orgs", "received_events_url": "https://api.github.com/users/changjonathanc/received_events", "repos_url": "https://api.github.com/users/changjonathanc/repos", "site_admin": false, "starred_url": "https://api.github.com/users/changjonathanc/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/changjonathanc/subscriptions", "type": "User", "url": "https://api.github.com/users/changjonathanc", "user_view_type": "public" }
[]
closed
false
null
[]
null
[ "Hi @cccntu you're right, as when `batch_examples={}` the current if-statement won't be triggered as the condition won't be satisfied, I'll prepare a PR to address it as well as add the regression tests so that this issue is handled properly.", "Hi @alvarobartt ,\r\nThanks for answering. Do you know when and why ...
2022-06-15T14:50:00Z
2022-06-18T15:15:51Z
2022-06-18T15:15:51Z
CONTRIBUTOR
null
null
null
null
https://github.com/huggingface/datasets/blob/88a902d6474fae8d793542d57a4f3b0d187f3c5b/src/datasets/arrow_writer.py#L475-L488 I got some error, and I found it's caused by `batch_examples` being `{}`. I wonder if the code should be as follows: ``` - if batch_examples and len(next(iter(batch_examples.values()))) == 0: + if not batch_examples or len(next(iter(batch_examples.values()))) == 0: return ``` @lhoestq
{ "avatar_url": "https://avatars.githubusercontent.com/u/31893406?v=4", "events_url": "https://api.github.com/users/changjonathanc/events{/privacy}", "followers_url": "https://api.github.com/users/changjonathanc/followers", "following_url": "https://api.github.com/users/changjonathanc/following{/other_user}", "gists_url": "https://api.github.com/users/changjonathanc/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/changjonathanc", "id": 31893406, "login": "changjonathanc", "node_id": "MDQ6VXNlcjMxODkzNDA2", "organizations_url": "https://api.github.com/users/changjonathanc/orgs", "received_events_url": "https://api.github.com/users/changjonathanc/received_events", "repos_url": "https://api.github.com/users/changjonathanc/repos", "site_admin": false, "starred_url": "https://api.github.com/users/changjonathanc/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/changjonathanc/subscriptions", "type": "User", "url": "https://api.github.com/users/changjonathanc", "user_view_type": "public" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/4502/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/4502/timeline
null
completed
{ "completed": 0, "percent_completed": 0, "total": 0 }
{ "blocked_by": 0, "blocking": 0, "total_blocked_by": 0, "total_blocking": 0 }
false
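Issue 4502 above reports that the empty-batch guard in `arrow_writer.py` is never triggered when `batch_examples` is an empty dict. Below is a minimal sketch of the proposed fix in an illustrative `write_batch`-style function; the function body is a simplified stand-in, not the actual `datasets` implementation.

```python
import pyarrow as pa

def write_batch(batch_examples: dict, schema: pa.Schema, writer):
    """Simplified stand-in for ArrowWriter.write_batch, illustrating only the guard."""
    # Original guard: `if batch_examples and len(next(iter(batch_examples.values()))) == 0: return`
    # It never fires when batch_examples == {}, so an empty dict falls through and crashes later.
    # The proposed fix also returns early for an empty dict:
    if not batch_examples or len(next(iter(batch_examples.values()))) == 0:
        return
    arrays = [pa.array(batch_examples[name]) for name in schema.names]
    writer.write_batch(pa.record_batch(arrays, schema=schema))
```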
https://api.github.com/repos/huggingface/datasets/issues/4498
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4498/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4498/comments
https://api.github.com/repos/huggingface/datasets/issues/4498/events
https://github.com/huggingface/datasets/issues/4498
1,272,100,549
I_kwDODunzps5L0rbF
4,498
WER and CER > 1
{ "avatar_url": "https://avatars.githubusercontent.com/u/43045767?v=4", "events_url": "https://api.github.com/users/sadrasabouri/events{/privacy}", "followers_url": "https://api.github.com/users/sadrasabouri/followers", "following_url": "https://api.github.com/users/sadrasabouri/following{/other_user}", "gists_url": "https://api.github.com/users/sadrasabouri/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/sadrasabouri", "id": 43045767, "login": "sadrasabouri", "node_id": "MDQ6VXNlcjQzMDQ1NzY3", "organizations_url": "https://api.github.com/users/sadrasabouri/orgs", "received_events_url": "https://api.github.com/users/sadrasabouri/received_events", "repos_url": "https://api.github.com/users/sadrasabouri/repos", "site_admin": false, "starred_url": "https://api.github.com/users/sadrasabouri/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sadrasabouri/subscriptions", "type": "User", "url": "https://api.github.com/users/sadrasabouri", "user_view_type": "public" }
[ { "color": "d73a4a", "default": true, "description": "Something isn't working", "id": 1935892857, "name": "bug", "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug" } ]
closed
false
null
[]
null
[ "WER can have values bigger than 1.0, this is expected when there are too many insertions\r\n\r\nFrom [wikipedia](https://en.wikipedia.org/wiki/Word_error_rate):\r\n> Note that since N is the number of words in the reference, the word error rate can be larger than 1.0" ]
2022-06-15T11:35:12Z
2022-06-15T16:38:05Z
2022-06-15T16:38:05Z
NONE
null
null
null
null
## Describe the bug It seems that in some cases in which the `prediction` is longer than the `reference` we may have word/character error rate higher than 1 which is a bit odd. If it's a real bug I think I can solve it with a PR changing [this](https://github.com/huggingface/datasets/blob/master/metrics/wer/wer.py#L105) line to ```python return min(incorrect / total, 1.0) ``` ## Steps to reproduce the bug ```python from datasets import load_metric wer = load_metric("wer") wer_value = wer.compute(predictions=["Hi World vka"], references=["Hello"]) print(wer_value) ``` ## Expected results ``` 1.0 ``` ## Actual results ``` 3.0 ``` ## Environment info <!-- You can run the command `datasets-cli env` and copy-and-paste its output below. --> - `datasets` version: 2.3.0 - Platform: Linux-5.4.188+-x86_64-with-Ubuntu-18.04-bionic - Python version: 3.7.13 - PyArrow version: 6.0.1 - Pandas version: 1.3.5
{ "avatar_url": "https://avatars.githubusercontent.com/u/43045767?v=4", "events_url": "https://api.github.com/users/sadrasabouri/events{/privacy}", "followers_url": "https://api.github.com/users/sadrasabouri/followers", "following_url": "https://api.github.com/users/sadrasabouri/following{/other_user}", "gists_url": "https://api.github.com/users/sadrasabouri/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/sadrasabouri", "id": 43045767, "login": "sadrasabouri", "node_id": "MDQ6VXNlcjQzMDQ1NzY3", "organizations_url": "https://api.github.com/users/sadrasabouri/orgs", "received_events_url": "https://api.github.com/users/sadrasabouri/received_events", "repos_url": "https://api.github.com/users/sadrasabouri/repos", "site_admin": false, "starred_url": "https://api.github.com/users/sadrasabouri/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sadrasabouri/subscriptions", "type": "User", "url": "https://api.github.com/users/sadrasabouri", "user_view_type": "public" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/4498/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/4498/timeline
null
completed
{ "completed": 0, "percent_completed": 0, "total": 0 }
{ "blocked_by": 0, "blocking": 0, "total_blocked_by": 0, "total_blocking": 0 }
false
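Issue 4498 above is answered by the definition of the metric: WER = (S + D + I) / N, where N is the number of words in the reference, so enough insertions push the score above 1.0. The from-scratch sketch below (not the `wer` metric's own code) reproduces the reported value of 3.0.

```python
def word_error_rate(prediction: str, reference: str) -> float:
    """Word-level edit distance divided by the reference length (illustrative only)."""
    hyp, ref = prediction.split(), reference.split()
    # Standard dynamic-programming edit distance over words.
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i
    for j in range(len(hyp) + 1):
        d[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,          # deletion
                          d[i][j - 1] + 1,          # insertion
                          d[i - 1][j - 1] + cost)   # substitution / match
    return d[len(ref)][len(hyp)] / len(ref)

# One substitution plus two insertions against a one-word reference: (1 + 2) / 1 = 3.0
print(word_error_rate("Hi World vka", "Hello"))  # 3.0
```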
https://api.github.com/repos/huggingface/datasets/issues/4494
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4494/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4494/comments
https://api.github.com/repos/huggingface/datasets/issues/4494/events
https://github.com/huggingface/datasets/issues/4494
1,271,850,599
I_kwDODunzps5LzuZn
4,494
Patching fails for modules that are not installed or don't exist
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq", "user_view_type": "public" }
[]
closed
false
null
[]
null
[]
2022-06-15T08:17:29Z
2022-06-15T08:54:09Z
2022-06-15T08:54:09Z
MEMBER
null
null
null
null
Reported in https://github.com/huggingface/huggingface_hub/runs/6894703718?check_suite_focus=true When trying to patch `scipy.io.loadmat`: ```python ModuleNotFoundError: No module named 'scipy' ``` Instead, it should not raise an error and should simply do nothing. We use patching to extend such functions to support remote URLs and work in streaming mode.
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq", "user_view_type": "public" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/4494/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/4494/timeline
null
completed
{ "completed": 0, "percent_completed": 0, "total": 0 }
{ "blocked_by": 0, "blocking": 0, "total_blocked_by": 0, "total_blocking": 0 }
false
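Issue 4494 above asks that patching skip silently when the target module is not installed. A minimal sketch of that behavior is shown below; `patch_if_available` is a hypothetical helper, not the actual `datasets` patching API.

```python
import importlib
import importlib.util

def patch_if_available(module_name: str, attribute: str, replacement) -> bool:
    """Hypothetical helper: patch module.attribute if the module is installed, else do nothing."""
    if importlib.util.find_spec(module_name.split(".")[0]) is None:
        return False  # top-level package not installed: skip silently instead of raising
    try:
        module = importlib.import_module(module_name)
    except ImportError:
        return False  # submodule missing: also skip silently
    if not hasattr(module, attribute):
        return False  # attribute doesn't exist: nothing to patch
    setattr(module, attribute, replacement)
    return True

# e.g. patch_if_available("scipy.io", "loadmat", my_streaming_loadmat)  # hypothetical usage
```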
https://api.github.com/repos/huggingface/datasets/issues/4491
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4491/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4491/comments
https://api.github.com/repos/huggingface/datasets/issues/4491/events
https://github.com/huggingface/datasets/issues/4491
1,270,803,822
I_kwDODunzps5Lvu1u
4,491
Dataset Viewer issue for Pavithree/test
{ "avatar_url": "https://avatars.githubusercontent.com/u/23344465?v=4", "events_url": "https://api.github.com/users/Pavithree/events{/privacy}", "followers_url": "https://api.github.com/users/Pavithree/followers", "following_url": "https://api.github.com/users/Pavithree/following{/other_user}", "gists_url": "https://api.github.com/users/Pavithree/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/Pavithree", "id": 23344465, "login": "Pavithree", "node_id": "MDQ6VXNlcjIzMzQ0NDY1", "organizations_url": "https://api.github.com/users/Pavithree/orgs", "received_events_url": "https://api.github.com/users/Pavithree/received_events", "repos_url": "https://api.github.com/users/Pavithree/repos", "site_admin": false, "starred_url": "https://api.github.com/users/Pavithree/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Pavithree/subscriptions", "type": "User", "url": "https://api.github.com/users/Pavithree", "user_view_type": "public" }
[ { "color": "E5583E", "default": false, "description": "Related to the dataset viewer on huggingface.co", "id": 3470211881, "name": "dataset-viewer", "node_id": "LA_kwDODunzps7O1zsp", "url": "https://api.github.com/repos/huggingface/datasets/labels/dataset-viewer" } ]
closed
false
{ "avatar_url": "https://avatars.githubusercontent.com/u/1676121?v=4", "events_url": "https://api.github.com/users/severo/events{/privacy}", "followers_url": "https://api.github.com/users/severo/followers", "following_url": "https://api.github.com/users/severo/following{/other_user}", "gists_url": "https://api.github.com/users/severo/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/severo", "id": 1676121, "login": "severo", "node_id": "MDQ6VXNlcjE2NzYxMjE=", "organizations_url": "https://api.github.com/users/severo/orgs", "received_events_url": "https://api.github.com/users/severo/received_events", "repos_url": "https://api.github.com/users/severo/repos", "site_admin": false, "starred_url": "https://api.github.com/users/severo/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/severo/subscriptions", "type": "User", "url": "https://api.github.com/users/severo", "user_view_type": "public" }
[ { "avatar_url": "https://avatars.githubusercontent.com/u/1676121?v=4", "events_url": "https://api.github.com/users/severo/events{/privacy}", "followers_url": "https://api.github.com/users/severo/followers", "following_url": "https://api.github.com/users/severo/following{/other_user}", "gists_url...
null
[ "This issue can be resolved according to this post https://stackoverflow.com/questions/70566660/parquet-with-null-columns-on-pyarrow. It looks like first data entry in the json file must not have any null values as pyarrow uses this first file to infer schema for entire dataset." ]
2022-06-14T13:23:10Z
2022-06-14T14:37:21Z
2022-06-14T14:34:33Z
NONE
null
null
null
null
### Link https://huggingface.co/datasets/Pavithree/test ### Description I have extracted a subset of the original eli5 dataset found on the Hugging Face Hub. However, while loading the dataset it throws an ArrowNotImplementedError: Unsupported cast from string to null using function cast_null. Is there anything missing on my end? Kindly help. ### Owner _No response_
{ "avatar_url": "https://avatars.githubusercontent.com/u/23344465?v=4", "events_url": "https://api.github.com/users/Pavithree/events{/privacy}", "followers_url": "https://api.github.com/users/Pavithree/followers", "following_url": "https://api.github.com/users/Pavithree/following{/other_user}", "gists_url": "https://api.github.com/users/Pavithree/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/Pavithree", "id": 23344465, "login": "Pavithree", "node_id": "MDQ6VXNlcjIzMzQ0NDY1", "organizations_url": "https://api.github.com/users/Pavithree/orgs", "received_events_url": "https://api.github.com/users/Pavithree/received_events", "repos_url": "https://api.github.com/users/Pavithree/repos", "site_admin": false, "starred_url": "https://api.github.com/users/Pavithree/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Pavithree/subscriptions", "type": "User", "url": "https://api.github.com/users/Pavithree", "user_view_type": "public" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/4491/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/4491/timeline
null
completed
{ "completed": 0, "percent_completed": 0, "total": 0 }
{ "blocked_by": 0, "blocking": 0, "total_blocked_by": 0, "total_blocking": 0 }
false
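The comment on issue 4491 above explains that pyarrow infers the schema from the first file/entry, so a column whose first values are all null gets the `null` type and later string values cannot be cast to it. A small pyarrow-level illustration (not the dataset viewer's code):

```python
import pyarrow as pa

# A column inferred from all-null values gets the `null` type ...
inferred = pa.array([None, None])
print(inferred.type)  # null

# ... and casting real strings to that inferred type is what produces
# "Unsupported cast from string to null using function cast_null".
# Declaring the column type up front avoids the bad inference:
explicit = pa.array([None, None], type=pa.string())
print(explicit.type)  # string
```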
https://api.github.com/repos/huggingface/datasets/issues/4490
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4490/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4490/comments
https://api.github.com/repos/huggingface/datasets/issues/4490/events
https://github.com/huggingface/datasets/issues/4490
1,270,719,074
I_kwDODunzps5LvaJi
4,490
Use `torch.nested_tensor` for arrays of varying length in torch formatter
{ "avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4", "events_url": "https://api.github.com/users/mariosasko/events{/privacy}", "followers_url": "https://api.github.com/users/mariosasko/followers", "following_url": "https://api.github.com/users/mariosasko/following{/other_user}", "gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/mariosasko", "id": 47462742, "login": "mariosasko", "node_id": "MDQ6VXNlcjQ3NDYyNzQy", "organizations_url": "https://api.github.com/users/mariosasko/orgs", "received_events_url": "https://api.github.com/users/mariosasko/received_events", "repos_url": "https://api.github.com/users/mariosasko/repos", "site_admin": false, "starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions", "type": "User", "url": "https://api.github.com/users/mariosasko", "user_view_type": "public" }
[ { "color": "a2eeef", "default": true, "description": "New feature or request", "id": 1935892871, "name": "enhancement", "node_id": "MDU6TGFiZWwxOTM1ODkyODcx", "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement" } ]
open
false
null
[]
null
[ "What's the current behavior?", "Currently, we return a list of Torch tensors if their shapes don't match. If they do, we consolidate them into a single Torch tensor." ]
2022-06-14T12:19:40Z
2023-07-07T13:02:58Z
null
COLLABORATOR
null
null
null
null
Use `torch.nested_tensor` for arrays of varying length in `TorchFormatter`. The PyTorch API of nested tensors is in the prototype stage, so wait for it to become more mature.
null
{ "+1": 1, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 1, "url": "https://api.github.com/repos/huggingface/datasets/issues/4490/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/4490/timeline
null
null
{ "completed": 0, "percent_completed": 0, "total": 0 }
{ "blocked_by": 0, "blocking": 0, "total_blocked_by": 0, "total_blocking": 0 }
false
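For issue 4490 above, the comments contrast the current behavior (a plain list of tensors when shapes differ) with packing them into a nested tensor. A minimal sketch, assuming a recent PyTorch where the prototype API is exposed as `torch.nested.nested_tensor` (the issue refers to it as `torch.nested_tensor`):

```python
import torch

ragged = [torch.tensor([1, 2, 3]), torch.tensor([4, 5])]  # rows of varying length

# Current formatter behavior when shapes don't match: just a list of tensors.
as_list = ragged

# Proposed behavior: pack them into a single nested tensor (prototype API).
nested = torch.nested.nested_tensor(ragged)
print(nested.is_nested)  # True
```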
https://api.github.com/repos/huggingface/datasets/issues/4483
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4483/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4483/comments
https://api.github.com/repos/huggingface/datasets/issues/4483/events
https://github.com/huggingface/datasets/issues/4483
1,269,253,840
I_kwDODunzps5Lp0bQ
4,483
Dataset.map throws pyarrow.lib.ArrowNotImplementedError when converting from list of empty lists
{ "avatar_url": "https://avatars.githubusercontent.com/u/48946947?v=4", "events_url": "https://api.github.com/users/sanderland/events{/privacy}", "followers_url": "https://api.github.com/users/sanderland/followers", "following_url": "https://api.github.com/users/sanderland/following{/other_user}", "gists_url": "https://api.github.com/users/sanderland/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/sanderland", "id": 48946947, "login": "sanderland", "node_id": "MDQ6VXNlcjQ4OTQ2OTQ3", "organizations_url": "https://api.github.com/users/sanderland/orgs", "received_events_url": "https://api.github.com/users/sanderland/received_events", "repos_url": "https://api.github.com/users/sanderland/repos", "site_admin": false, "starred_url": "https://api.github.com/users/sanderland/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sanderland/subscriptions", "type": "User", "url": "https://api.github.com/users/sanderland", "user_view_type": "public" }
[ { "color": "d73a4a", "default": true, "description": "Something isn't working", "id": 1935892857, "name": "bug", "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug" } ]
closed
false
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq", "user_view_type": "public" }
[ { "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists...
null
[ "Hi @sanderland ! Thanks for reporting :) This is a bug, I opened a PR to fix it. We'll do a new release soon\r\n\r\nIn the meantime you can fix it by specifying in advance that the \"label\" are integers:\r\n```python\r\nimport numpy as np\r\n\r\nds = Dataset.from_dict(\r\n {\r\n \"text\": [\"the lazy do...
2022-06-13T10:47:52Z
2022-06-14T13:34:14Z
2022-06-14T13:34:14Z
CONTRIBUTOR
null
null
null
null
## Describe the bug Dataset.map throws pyarrow.lib.ArrowNotImplementedError: Unsupported cast from int64 to null using function cast_null when converting from a type of 'empty lists' to 'lists with some type'. This appears to be due to the interaction of arrow internals and some assumptions made by datasets. The bug appeared when binarizing some labels, and then adding a dataset which had all these labels absent (to force the model to not label empty strings such with anything) Particularly the fact that this only happens in batched mode is strange. ## Steps to reproduce the bug ```python import numpy as np ds = Dataset.from_dict( { "text": ["the lazy dog jumps over the quick fox", "another sentence"], "label": [[], []], } ) def mapper(features): features['label'] = [ [0,0,0] for l in features['label'] ] return features ds_mapped = ds.map(mapper,batched=True) ``` ## Expected results Not crashing ## Actual results ``` ../.venv/lib/python3.8/site-packages/datasets/arrow_dataset.py:2346: in map return self._map_single( ../.venv/lib/python3.8/site-packages/datasets/arrow_dataset.py:532: in wrapper out: Union["Dataset", "DatasetDict"] = func(self, *args, **kwargs) ../.venv/lib/python3.8/site-packages/datasets/arrow_dataset.py:499: in wrapper out: Union["Dataset", "DatasetDict"] = func(self, *args, **kwargs) ../.venv/lib/python3.8/site-packages/datasets/fingerprint.py:458: in wrapper out = func(self, *args, **kwargs) ../.venv/lib/python3.8/site-packages/datasets/arrow_dataset.py:2751: in _map_single writer.write_batch(batch) ../.venv/lib/python3.8/site-packages/datasets/arrow_writer.py:503: in write_batch arrays.append(pa.array(typed_sequence)) pyarrow/array.pxi:230: in pyarrow.lib.array ??? pyarrow/array.pxi:110: in pyarrow.lib._handle_arrow_array_protocol ??? ../.venv/lib/python3.8/site-packages/datasets/arrow_writer.py:198: in __arrow_array__ out = cast_array_to_feature(out, type, allow_number_to_str=not self.trying_type) ../.venv/lib/python3.8/site-packages/datasets/table.py:1675: in wrapper return func(array, *args, **kwargs) ../.venv/lib/python3.8/site-packages/datasets/table.py:1812: in cast_array_to_feature casted_values = _c(array.values, feature.feature) ../.venv/lib/python3.8/site-packages/datasets/table.py:1675: in wrapper return func(array, *args, **kwargs) ../.venv/lib/python3.8/site-packages/datasets/table.py:1843: in cast_array_to_feature return array_cast(array, feature(), allow_number_to_str=allow_number_to_str) ../.venv/lib/python3.8/site-packages/datasets/table.py:1675: in wrapper return func(array, *args, **kwargs) ../.venv/lib/python3.8/site-packages/datasets/table.py:1752: in array_cast return array.cast(pa_type) pyarrow/array.pxi:915: in pyarrow.lib.Array.cast ??? ../.venv/lib/python3.8/site-packages/pyarrow/compute.py:376: in cast return call_function("cast", [arr], options) pyarrow/_compute.pyx:542: in pyarrow._compute.call_function ??? pyarrow/_compute.pyx:341: in pyarrow._compute.Function.call ??? pyarrow/error.pxi:144: in pyarrow.lib.pyarrow_internal_check_status ??? _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ > ??? 
E pyarrow.lib.ArrowNotImplementedError: Unsupported cast from int64 to null using function cast_null pyarrow/error.pxi:121: ArrowNotImplementedError ``` ## Workarounds * Not using batched=True * Using an np.array([],dtype=float) or similar instead of [] in the input * Naming the output column differently from the input column ## Environment info <!-- You can run the command `datasets-cli env` and copy-and-paste its output below. --> - `datasets` version: 2.2.2 - Platform: Ubuntu - Python version: 3.8 - PyArrow version: 8.0.0
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq", "user_view_type": "public" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/4483/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/4483/timeline
null
completed
{ "completed": 0, "percent_completed": 0, "total": 0 }
{ "blocked_by": 0, "blocking": 0, "total_blocked_by": 0, "total_blocking": 0 }
false
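The (truncated) maintainer comment on issue 4483 above suggests declaring in advance that "label" holds integers, so the empty lists are not typed as `null`. A minimal sketch of that workaround, with `int64` chosen here as an assumption:

```python
from datasets import Dataset, Features, Sequence, Value

# Declaring the column types up front keeps the empty lists from being inferred as `null`.
features = Features({
    "text": Value("string"),
    "label": Sequence(Value("int64")),
})
ds = Dataset.from_dict(
    {"text": ["the lazy dog jumps over the quick fox", "another sentence"],
     "label": [[], []]},
    features=features,
)

def mapper(batch):
    batch["label"] = [[0, 0, 0] for _ in batch["label"]]
    return batch

ds_mapped = ds.map(mapper, batched=True)  # no longer hits the null-cast error
```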
https://api.github.com/repos/huggingface/datasets/issues/4480
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4480/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4480/comments
https://api.github.com/repos/huggingface/datasets/issues/4480/events
https://github.com/huggingface/datasets/issues/4480
1,268,921,567
I_kwDODunzps5LojTf
4,480
Bigbench tensorflow GPU dependency
{ "avatar_url": "https://avatars.githubusercontent.com/u/15624271?v=4", "events_url": "https://api.github.com/users/cceyda/events{/privacy}", "followers_url": "https://api.github.com/users/cceyda/followers", "following_url": "https://api.github.com/users/cceyda/following{/other_user}", "gists_url": "https://api.github.com/users/cceyda/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/cceyda", "id": 15624271, "login": "cceyda", "node_id": "MDQ6VXNlcjE1NjI0Mjcx", "organizations_url": "https://api.github.com/users/cceyda/orgs", "received_events_url": "https://api.github.com/users/cceyda/received_events", "repos_url": "https://api.github.com/users/cceyda/repos", "site_admin": false, "starred_url": "https://api.github.com/users/cceyda/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/cceyda/subscriptions", "type": "User", "url": "https://api.github.com/users/cceyda", "user_view_type": "public" }
[ { "color": "d73a4a", "default": true, "description": "Something isn't working", "id": 1935892857, "name": "bug", "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug" } ]
closed
false
null
[]
null
[ "Thanks for reporting ! :) cc @andersjohanandreassen can you take a look at this ?\r\n\r\nAlso @cceyda feel free to open an issue at [BIG-Bench](https://github.com/google/BIG-bench) as well regarding the `AttributeError`", "I'm on vacation for the next week, so won't be able to do much debugging at the moment. So...
2022-06-13T05:24:06Z
2022-06-14T19:45:24Z
2022-06-14T19:45:23Z
CONTRIBUTOR
null
null
null
null
## Describe the bug Loading bigbech ```py from datasets import load_dataset dataset = load_dataset("bigbench","swedish_to_german_proverbs") ``` tries to use gpu and fails with OOM with the following error ``` Downloading and preparing dataset bigbench/swedish_to_german_proverbs (download: Unknown size, generated: 68.92 KiB, post-processed: Unknown size, total: 68.92 KiB) to /home/ceyda/.cache/huggingface/datasets/bigbench/swedish_to_german_proverbs/1.0.0/7d2f6e537fa937dfaac8b1c1df782f2055071d3fd8e4f4ae93d28012a354ced0... Generating default split: 0%| | 0/72 [00:00<?, ? examples/s]2022-06-13 14:11:04.154469: I tensorflow/core/platform/cpu_feature_guard.cc:193] This TensorFlow binary is optimized with oneAPI Deep Neural Network Library (oneDNN) to use the following CPU instructions in performance-critical operations: AVX2 AVX512F FMA To enable them in other operations, rebuild TensorFlow with the appropriate compiler flags. 2022-06-13 14:11:05.133600: F tensorflow/core/platform/statusor.cc:33] Attempting to fetch value instead of handling error INTERNAL: failed initializing StreamExecutor for CUDA device ordinal 3: INTERNAL: failed call to cuDevicePrimaryCtxRetain: CUDA_ERROR_OUT_OF_MEMORY: out of memory; total memory reported: 25396838400 Aborted (core dumped) ``` I think this is because bigbench dependency (below) installs tensorflow (GPU version) and dataloading tries to use GPU as default. `pip install bigbench@https://storage.googleapis.com/public_research_data/bigbench/bigbench-0.0.1.tar.gz` while just doing 'pip install bigbench' results in following error ``` File "/home/ceyda/.local/lib/python3.7/site-packages/datasets/load.py", line 109, in import_main_class module = importlib.import_module(module_path) File "/usr/lib/python3.7/importlib/__init__.py", line 127, in import_module return _bootstrap._gcd_import(name[level:], package, level) File "<frozen importlib._bootstrap>", line 1006, in _gcd_import File "<frozen importlib._bootstrap>", line 983, in _find_and_load File "<frozen importlib._bootstrap>", line 967, in _find_and_load_unlocked File "<frozen importlib._bootstrap>", line 677, in _load_unlocked File "<frozen importlib._bootstrap_external>", line 728, in exec_module File "<frozen importlib._bootstrap>", line 219, in _call_with_frames_removed File "/home/ceyda/.cache/huggingface/modules/datasets_modules/datasets/bigbench/7d2f6e537fa937dfaac8b1c1df782f2055071d3fd8e4f4ae93d28012a354ced0/bigbench.py", line 118, in <module> class Bigbench(datasets.GeneratorBasedBuilder): File "/home/ceyda/.cache/huggingface/modules/datasets_modules/datasets/bigbench/7d2f6e537fa937dfaac8b1c1df782f2055071d3fd8e4f4ae93d28012a354ced0/bigbench.py", line 127, in Bigbench BigBenchConfig(name=name, version=datasets.Version("1.0.0")) for name in bb_utils.get_all_json_task_names() AttributeError: module 'bigbench.api.util' has no attribute 'get_all_json_task_names' ``` ## Steps to avoid the bug Not ideal but can solve with (since I don't really use tensorflow elsewhere) `pip uninstall tensorflow` `pip install tensorflow-cpu` ## Environment info - datasets @ master - Python version: 3.7
{ "avatar_url": "https://avatars.githubusercontent.com/u/15624271?v=4", "events_url": "https://api.github.com/users/cceyda/events{/privacy}", "followers_url": "https://api.github.com/users/cceyda/followers", "following_url": "https://api.github.com/users/cceyda/following{/other_user}", "gists_url": "https://api.github.com/users/cceyda/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/cceyda", "id": 15624271, "login": "cceyda", "node_id": "MDQ6VXNlcjE1NjI0Mjcx", "organizations_url": "https://api.github.com/users/cceyda/orgs", "received_events_url": "https://api.github.com/users/cceyda/received_events", "repos_url": "https://api.github.com/users/cceyda/repos", "site_admin": false, "starred_url": "https://api.github.com/users/cceyda/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/cceyda/subscriptions", "type": "User", "url": "https://api.github.com/users/cceyda", "user_view_type": "public" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/4480/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/4480/timeline
null
completed
{ "completed": 0, "percent_completed": 0, "total": 0 }
{ "blocked_by": 0, "blocking": 0, "total_blocked_by": 0, "total_blocking": 0 }
false
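Issue 4480 above works around the GPU grab with `pip install tensorflow-cpu`. An alternative that is not mentioned in the issue, but commonly used when TensorFlow is only needed incidentally, is to hide the GPUs from it before anything imports it:

```python
import os

# Hide all GPUs from CUDA (and therefore TensorFlow) before it is imported,
# so dataset preparation stays on CPU. Alternative to swapping in tensorflow-cpu.
os.environ["CUDA_VISIBLE_DEVICES"] = ""

from datasets import load_dataset

dataset = load_dataset("bigbench", "swedish_to_german_proverbs")
```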
https://api.github.com/repos/huggingface/datasets/issues/4478
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4478/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4478/comments
https://api.github.com/repos/huggingface/datasets/issues/4478/events
https://github.com/huggingface/datasets/issues/4478
1,268,358,213
I_kwDODunzps5LmZxF
4,478
Dataset slow during model training
{ "avatar_url": "https://avatars.githubusercontent.com/u/9555494?v=4", "events_url": "https://api.github.com/users/lehrig/events{/privacy}", "followers_url": "https://api.github.com/users/lehrig/followers", "following_url": "https://api.github.com/users/lehrig/following{/other_user}", "gists_url": "https://api.github.com/users/lehrig/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lehrig", "id": 9555494, "login": "lehrig", "node_id": "MDQ6VXNlcjk1NTU0OTQ=", "organizations_url": "https://api.github.com/users/lehrig/orgs", "received_events_url": "https://api.github.com/users/lehrig/received_events", "repos_url": "https://api.github.com/users/lehrig/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lehrig/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lehrig/subscriptions", "type": "User", "url": "https://api.github.com/users/lehrig", "user_view_type": "public" }
[ { "color": "d73a4a", "default": true, "description": "Something isn't working", "id": 1935892857, "name": "bug", "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug" } ]
open
false
null
[]
null
[ "Hi ! cc @Rocketknight1 maybe you know better ?\r\n\r\nI'm not too familiar with `tf.data.experimental.save`. Note that `datasets` uses memory mapping, so depending on your hardware and the disk you are using you can expect performance differences with a dataset loaded in RAM", "Hi @lehrig, I suspect what's happe...
2022-06-11T19:40:19Z
2022-06-14T12:04:31Z
null
NONE
null
null
null
null
## Describe the bug While migrating towards 🤗 Datasets, I encountered an odd performance degradation: training suddenly slows down dramatically. I train with an image dataset using Keras and execute a `to_tf_dataset` just before training. First, I have optimized my dataset following https://discuss.huggingface.co/t/solved-image-dataset-seems-slow-for-larger-image-size/10960/6, which actually improved the situation from what I had before but did not completely solve it. Second, I saved and loaded my dataset using `tf.data.experimental.save` and `tf.data.experimental.load` before training (for which I would have expected no performance change). However, I ended up with the performance I had before tinkering with 🤗 Datasets. Any idea what's the reason for this and how to speed-up training with 🤗 Datasets? ## Steps to reproduce the bug ```python # Sample code to reproduce the bug from datasets import load_dataset import os dataset_dir = "./dataset" prep_dataset_dir = "./prepdataset" model_dir = "./model" # Load Data dataset = load_dataset("Lehrig/Monkey-Species-Collection", "downsized") def read_image_file(example): with open(example["image"].filename, "rb") as f: example["image"] = {"bytes": f.read()} return example dataset = dataset.map(read_image_file) dataset.save_to_disk(dataset_dir) # Preprocess from datasets import ( Array3D, DatasetDict, Features, load_from_disk, Sequence, Value ) import numpy as np from transformers import ImageFeatureExtractionMixin dataset = load_from_disk(dataset_dir) num_classes = dataset["train"].features["label"].num_classes one_hot_matrix = np.eye(num_classes) feature_extractor = ImageFeatureExtractionMixin() def to_pixels(image): image = feature_extractor.resize(image, size=size) image = feature_extractor.to_numpy_array(image, channel_first=False) image = image / 255.0 return image def process(examples): examples["pixel_values"] = [ to_pixels(image) for image in examples["image"] ] examples["label"] = [ one_hot_matrix[label] for label in examples["label"] ] return examples features = Features({ "pixel_values": Array3D(dtype="float32", shape=(size, size, 3)), "label": Sequence(feature=Value(dtype="int32"), length=num_classes) }) prep_dataset = dataset.map( process, remove_columns=["image"], batched=True, batch_size=batch_size, num_proc=2, features=features, ) prep_dataset = prep_dataset.with_format("numpy") # Split train_dev_dataset = prep_dataset['test'].train_test_split( test_size=test_size, shuffle=True, seed=seed ) train_dev_test_dataset = DatasetDict({ 'train': train_dev_dataset['train'], 'dev': train_dev_dataset['test'], 'test': prep_dataset['test'], }) train_dev_test_dataset.save_to_disk(prep_dataset_dir) # Train Model import datetime import tensorflow as tf from tensorflow.keras import Sequential from tensorflow.keras.applications import InceptionV3 from tensorflow.keras.layers import Dense, Dropout, GlobalAveragePooling2D, BatchNormalization from tensorflow.keras.callbacks import ReduceLROnPlateau, ModelCheckpoint, EarlyStopping from transformers import DefaultDataCollator dataset = load_from_disk(prep_data_dir) data_collator = DefaultDataCollator(return_tensors="tf") train_dataset = dataset["train"].to_tf_dataset( columns=['pixel_values'], label_cols=['label'], shuffle=True, batch_size=batch_size, collate_fn=data_collator ) validation_dataset = dataset["dev"].to_tf_dataset( columns=['pixel_values'], label_cols=['label'], shuffle=False, batch_size=batch_size, collate_fn=data_collator ) print(f'{datetime.datetime.now()} - Saving Data') 
tf.data.experimental.save(train_dataset, model_dir+"/train") tf.data.experimental.save(validation_dataset, model_dir+"/val") print(f'{datetime.datetime.now()} - Loading Data') train_dataset = tf.data.experimental.load(model_dir+"/train") validation_dataset = tf.data.experimental.load(model_dir+"/val") shape = np.shape(dataset["train"][0]["pixel_values"]) backbone = InceptionV3( include_top=False, weights='imagenet', input_shape=shape ) for layer in backbone.layers: layer.trainable = False model = Sequential() model.add(backbone) model.add(GlobalAveragePooling2D()) model.add(Dense(128, activation='relu')) model.add(BatchNormalization()) model.add(Dropout(0.3)) model.add(Dense(64, activation='relu')) model.add(BatchNormalization()) model.add(Dropout(0.3)) model.add(Dense(10, activation='softmax')) model.compile( optimizer='adam', loss='categorical_crossentropy', metrics=['accuracy'] ) print(model.summary()) earlyStopping = EarlyStopping( monitor='val_loss', patience=10, verbose=0, mode='min' ) mcp_save = ModelCheckpoint( f'{model_dir}/best_model.hdf5', save_best_only=True, monitor='val_loss', mode='min' ) reduce_lr_loss = ReduceLROnPlateau( monitor='val_loss', factor=0.1, patience=7, verbose=1, min_delta=0.0001, mode='min' ) hist = model.fit( train_dataset, epochs=epochs, validation_data=validation_dataset, callbacks=[earlyStopping, mcp_save, reduce_lr_loss] ) ``` ## Expected results Same performance when training without my "save/load hack" or a good explanation/recommendation about the issue. ## Actual results Performance slower without my "save/load hack". **Epoch Breakdown (without my "save/load hack"):** - Epoch 1/10 41s 2s/step - loss: 1.6302 - accuracy: 0.5048 - val_loss: 1.4713 - val_accuracy: 0.3273 - lr: 0.0010 - Epoch 2/10 32s 2s/step - loss: 0.5357 - accuracy: 0.8510 - val_loss: 1.0447 - val_accuracy: 0.5818 - lr: 0.0010 - Epoch 3/10 36s 3s/step - loss: 0.3547 - accuracy: 0.9231 - val_loss: 0.6245 - val_accuracy: 0.7091 - lr: 0.0010 - Epoch 4/10 36s 3s/step - loss: 0.2721 - accuracy: 0.9231 - val_loss: 0.3395 - val_accuracy: 0.9091 - lr: 0.0010 - Epoch 5/10 32s 2s/step - loss: 0.1676 - accuracy: 0.9856 - val_loss: 0.2187 - val_accuracy: 0.9636 - lr: 0.0010 - Epoch 6/10 42s 3s/step - loss: 0.2066 - accuracy: 0.9615 - val_loss: 0.1635 - val_accuracy: 0.9636 - lr: 0.0010 - Epoch 7/10 32s 2s/step - loss: 0.1814 - accuracy: 0.9423 - val_loss: 0.1418 - val_accuracy: 0.9636 - lr: 0.0010 - Epoch 8/10 32s 2s/step - loss: 0.1301 - accuracy: 0.9856 - val_loss: 0.1388 - val_accuracy: 0.9818 - lr: 0.0010 - Epoch 9/10 loss: 0.1102 - accuracy: 0.9856 - val_loss: 0.1185 - val_accuracy: 0.9818 - lr: 0.0010 - Epoch 10/10 32s 2s/step - loss: 0.1013 - accuracy: 0.9808 - val_loss: 0.0978 - val_accuracy: 0.9818 - lr: 0.0010 **Epoch Breakdown (with my "save/load hack"):** - Epoch 1/10 13s 625ms/step - loss: 3.0478 - accuracy: 0.1146 - val_loss: 2.3061 - val_accuracy: 0.0727 - lr: 0.0010 - Epoch 2/10 0s 80ms/step - loss: 2.3105 - accuracy: 0.2656 - val_loss: 2.3085 - val_accuracy: 0.0909 - lr: 0.0010 - Epoch 3/10 0s 77ms/step - loss: 1.8608 - accuracy: 0.3542 - val_loss: 2.3130 - val_accuracy: 0.0909 - lr: 0.0010 - Epoch 4/10 1s 98ms/step - loss: 1.8677 - accuracy: 0.3750 - val_loss: 2.3157 - val_accuracy: 0.0909 - lr: 0.0010 - Epoch 5/10 1s 204ms/step - loss: 1.5561 - accuracy: 0.4583 - val_loss: 2.3049 - val_accuracy: 0.0909 - lr: 0.0010 - Epoch 6/10 1s 210ms/step - loss: 1.4657 - accuracy: 0.4896 - val_loss: 2.2944 - val_accuracy: 0.0909 - lr: 0.0010 - Epoch 7/10 1s 205ms/step - loss: 1.4018 - 
accuracy: 0.5312 - val_loss: 2.2917 - val_accuracy: 0.0909 - lr: 0.0010 - Epoch 8/10 1s 207ms/step - loss: 1.2370 - accuracy: 0.5729 - val_loss: 2.2814 - val_accuracy: 0.0909 - lr: 0.0010 - Epoch 9/10 1s 214ms/step - loss: 1.1190 - accuracy: 0.6250 - val_loss: 2.2733 - val_accuracy: 0.0909 - lr: 0.0010 - Epoch 10/10 1s 207ms/step - loss: 1.1484 - accuracy: 0.6302 - val_loss: 2.2624 - val_accuracy: 0.0909 - lr: 0.0010 ## Environment info - `datasets` version: 2.2.2 - Platform: Linux-4.18.0-305.45.1.el8_4.ppc64le-ppc64le-with-glibc2.17 - Python version: 3.8.13 - PyArrow version: 7.0.0 - Pandas version: 1.4.2 - TensorFlow: 2.8.0 - GPU (used during training): Tesla V100-SXM2-32GB
null
{ "+1": 1, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 1, "url": "https://api.github.com/repos/huggingface/datasets/issues/4478/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/4478/timeline
null
null
{ "completed": 0, "percent_completed": 0, "total": 0 }
{ "blocked_by": 0, "blocking": 0, "total_blocked_by": 0, "total_blocking": 0 }
false
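The "save/load hack" in issue 4478 above effectively materializes the `tf.data` pipeline once. A lighter-weight option, offered here only as an assumption since it is not discussed in the thread, is to cache and prefetch the pipeline returned by `to_tf_dataset`:

```python
import tensorflow as tf

def speed_up(tf_dataset: tf.data.Dataset) -> tf.data.Dataset:
    """Cache the pipeline after the first pass and overlap data loading with training."""
    return tf_dataset.cache().prefetch(tf.data.AUTOTUNE)

# e.g. train_dataset = speed_up(train_dataset) right after to_tf_dataset(...)
```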
https://api.github.com/repos/huggingface/datasets/issues/4477
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4477/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4477/comments
https://api.github.com/repos/huggingface/datasets/issues/4477/events
https://github.com/huggingface/datasets/issues/4477
1,268,308,986
I_kwDODunzps5LmNv6
4,477
Dataset Viewer issue for fgrezes/WIESP2022-NER
{ "avatar_url": "https://avatars.githubusercontent.com/u/42551754?v=4", "events_url": "https://api.github.com/users/AshTayade/events{/privacy}", "followers_url": "https://api.github.com/users/AshTayade/followers", "following_url": "https://api.github.com/users/AshTayade/following{/other_user}", "gists_url": "https://api.github.com/users/AshTayade/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/AshTayade", "id": 42551754, "login": "AshTayade", "node_id": "MDQ6VXNlcjQyNTUxNzU0", "organizations_url": "https://api.github.com/users/AshTayade/orgs", "received_events_url": "https://api.github.com/users/AshTayade/received_events", "repos_url": "https://api.github.com/users/AshTayade/repos", "site_admin": false, "starred_url": "https://api.github.com/users/AshTayade/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/AshTayade/subscriptions", "type": "User", "url": "https://api.github.com/users/AshTayade", "user_view_type": "public" }
[]
closed
false
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq", "user_view_type": "public" }
[ { "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists...
null
[ "https://huggingface.co/datasets/fgrezes/WIESP2022-NER\r\n\r\nThe error:\r\n\r\n```\r\nMessage: Couldn't find a dataset script at /src/services/worker/fgrezes/WIESP2022-NER/WIESP2022-NER.py or any data file in the same directory. Couldn't find 'fgrezes/WIESP2022-NER' on the Hugging Face Hub either: FileNotFou...
2022-06-11T15:49:17Z
2022-07-18T13:07:33Z
2022-07-18T13:07:33Z
NONE
null
null
null
null
### Link _No response_ ### Description _No response_ ### Owner _No response_
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq", "user_view_type": "public" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/4477/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/4477/timeline
null
completed
{ "completed": 0, "percent_completed": 0, "total": 0 }
{ "blocked_by": 0, "blocking": 0, "total_blocked_by": 0, "total_blocking": 0 }
false
https://api.github.com/repos/huggingface/datasets/issues/4476
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4476/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4476/comments
https://api.github.com/repos/huggingface/datasets/issues/4476/events
https://github.com/huggingface/datasets/issues/4476
1,267,987,499
I_kwDODunzps5Lk_Qr
4,476
`to_pandas` doesn't take into account format.
{ "avatar_url": "https://avatars.githubusercontent.com/u/8976546?v=4", "events_url": "https://api.github.com/users/Dref360/events{/privacy}", "followers_url": "https://api.github.com/users/Dref360/followers", "following_url": "https://api.github.com/users/Dref360/following{/other_user}", "gists_url": "https://api.github.com/users/Dref360/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/Dref360", "id": 8976546, "login": "Dref360", "node_id": "MDQ6VXNlcjg5NzY1NDY=", "organizations_url": "https://api.github.com/users/Dref360/orgs", "received_events_url": "https://api.github.com/users/Dref360/received_events", "repos_url": "https://api.github.com/users/Dref360/repos", "site_admin": false, "starred_url": "https://api.github.com/users/Dref360/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Dref360/subscriptions", "type": "User", "url": "https://api.github.com/users/Dref360", "user_view_type": "public" }
[ { "color": "a2eeef", "default": true, "description": "New feature or request", "id": 1935892871, "name": "enhancement", "node_id": "MDU6TGFiZWwxOTM1ODkyODcx", "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement" } ]
closed
false
null
[]
null
[ "Thanks for opening a discussion :)\r\n\r\nNote that you can use `.remove_columns(...)` to keep only the ones you're interested in before calling `.to_pandas()`", "Yes I can do that thank you!\r\n\r\nDo you think that conceptually my example should work? If not, I'm happy to close this issue. \r\n\r\nIf yes, I ca...
2022-06-10T20:25:31Z
2022-06-15T17:41:41Z
2022-06-15T17:41:41Z
CONTRIBUTOR
null
null
null
null
**Is your feature request related to a problem? Please describe.** I have a large dataset that I need to convert part of to pandas to do some further analysis. Calling `to_pandas` directly on it is expensive. So I thought I could simply select the columns that I want and then call `to_pandas`. **Describe the solution you'd like** ```python from datasets import Dataset ds = Dataset.from_dict({'a': [1,2,3], 'b': [5,6,7], 'c': [8,9,10]}) pandas_df = ds.with_format(columns=['a', 'b']).to_pandas() # I would expect `pandas_df` to only include a,b as column. ``` **Describe alternatives you've considered** I could remove all columns that I don't want? But I don't know all of them in advance. **Additional context** I can probably make a PR with some pointers.
{ "avatar_url": "https://avatars.githubusercontent.com/u/8976546?v=4", "events_url": "https://api.github.com/users/Dref360/events{/privacy}", "followers_url": "https://api.github.com/users/Dref360/followers", "following_url": "https://api.github.com/users/Dref360/following{/other_user}", "gists_url": "https://api.github.com/users/Dref360/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/Dref360", "id": 8976546, "login": "Dref360", "node_id": "MDQ6VXNlcjg5NzY1NDY=", "organizations_url": "https://api.github.com/users/Dref360/orgs", "received_events_url": "https://api.github.com/users/Dref360/received_events", "repos_url": "https://api.github.com/users/Dref360/repos", "site_admin": false, "starred_url": "https://api.github.com/users/Dref360/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Dref360/subscriptions", "type": "User", "url": "https://api.github.com/users/Dref360", "user_view_type": "public" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/4476/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/4476/timeline
null
completed
{ "completed": 0, "percent_completed": 0, "total": 0 }
{ "blocked_by": 0, "blocking": 0, "total_blocked_by": 0, "total_blocking": 0 }
false
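The maintainer's reply to issue 4476 above suggests dropping unwanted columns with `.remove_columns(...)` before calling `.to_pandas()`. A minimal sketch of that workaround:

```python
from datasets import Dataset

ds = Dataset.from_dict({"a": [1, 2, 3], "b": [5, 6, 7], "c": [8, 9, 10]})

# Suggested workaround: drop everything you don't need before converting.
keep = ["a", "b"]
pandas_df = ds.remove_columns([c for c in ds.column_names if c not in keep]).to_pandas()
print(list(pandas_df.columns))  # ['a', 'b']
```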
https://api.github.com/repos/huggingface/datasets/issues/4471
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4471/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4471/comments
https://api.github.com/repos/huggingface/datasets/issues/4471/events
https://github.com/huggingface/datasets/issues/4471
1,267,475,268
I_kwDODunzps5LjCNE
4,471
CI error with repo lhoestq/_dummy
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova", "user_view_type": "public" }
[ { "color": "d73a4a", "default": true, "description": "Something isn't working", "id": 1935892857, "name": "bug", "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug" } ]
closed
false
null
[]
null
[ "fixed by https://github.com/huggingface/datasets/pull/4472" ]
2022-06-10T12:26:06Z
2022-06-10T13:24:53Z
2022-06-10T13:24:53Z
MEMBER
null
null
null
null
## Describe the bug CI is failing because of repo "lhoestq/_dummy". See: https://app.circleci.com/pipelines/github/huggingface/datasets/12461/workflows/1b040b45-9578-4ab9-8c44-c643c4eb8691/jobs/74269 ``` requests.exceptions.HTTPError: 401 Client Error: Unauthorized for url: https://huggingface.co/api/datasets/lhoestq/_dummy?full=true ``` The repo seems to no longer exist: https://huggingface.co/api/datasets/lhoestq/_dummy ``` error: "Repository not found" ``` CC: @lhoestq
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq", "user_view_type": "public" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/4471/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/4471/timeline
null
completed
{ "completed": 0, "percent_completed": 0, "total": 0 }
{ "blocked_by": 0, "blocking": 0, "total_blocked_by": 0, "total_blocking": 0 }
false
https://api.github.com/repos/huggingface/datasets/issues/4467
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4467/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4467/comments
https://api.github.com/repos/huggingface/datasets/issues/4467/events
https://github.com/huggingface/datasets/issues/4467
1,266,218,358
I_kwDODunzps5LePV2
4,467
Transcript string 'null' converted to [None] by load_dataset()
{ "avatar_url": "https://avatars.githubusercontent.com/u/1360633?v=4", "events_url": "https://api.github.com/users/mbarnig/events{/privacy}", "followers_url": "https://api.github.com/users/mbarnig/followers", "following_url": "https://api.github.com/users/mbarnig/following{/other_user}", "gists_url": "https://api.github.com/users/mbarnig/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/mbarnig", "id": 1360633, "login": "mbarnig", "node_id": "MDQ6VXNlcjEzNjA2MzM=", "organizations_url": "https://api.github.com/users/mbarnig/orgs", "received_events_url": "https://api.github.com/users/mbarnig/received_events", "repos_url": "https://api.github.com/users/mbarnig/repos", "site_admin": false, "starred_url": "https://api.github.com/users/mbarnig/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mbarnig/subscriptions", "type": "User", "url": "https://api.github.com/users/mbarnig", "user_view_type": "public" }
[ { "color": "d73a4a", "default": true, "description": "Something isn't working", "id": 1935892857, "name": "bug", "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug" } ]
closed
false
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova", "user_view_type": "public" }
[ { "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/o...
null
[ "Hi @mbarnig, thanks for reporting.\r\n\r\nPlease note that is an expected behavior by `pandas` (we use the `pandas` library to parse CSV files): https://pandas.pydata.org/docs/reference/api/pandas.read_csv.html\r\n```\r\nBy default the following values are interpreted as NaN: \r\n‘’, ‘#N/A’, ‘#N/A N/A’, ‘#NA’, ‘-1...
2022-06-09T14:26:00Z
2023-07-04T02:18:39Z
2022-06-09T16:29:02Z
NONE
null
null
null
null
## Issue I am training a Luxembourgish speech-recognition model in Colab with a custom dataset, including a dictionary of Luxembourgish words, for example the spoken numbers 0 to 9. When preparing the dataset with the script `ds_train1 = mydataset.map(prepare_dataset)` the following error was issued: ``` ValueError Traceback (most recent call last) <ipython-input-69-1e8f2b37f5bc> in <module>() ----> 1 ds_train = mydataset_train.map(prepare_dataset) 11 frames /usr/local/lib/python3.7/dist-packages/transformers/tokenization_utils_base.py in __call__(self, text, text_pair, add_special_tokens, padding, truncation, max_length, stride, is_split_into_words, pad_to_multiple_of, return_tensors, return_token_type_ids, return_attention_mask, return_overflowing_tokens, return_special_tokens_mask, return_offsets_mapping, return_length, verbose, **kwargs) 2450 if not _is_valid_text_input(text): 2451 raise ValueError( -> 2452 "text input must of type str (single example), List[str] (batch or single pretokenized example) " 2453 "or List[List[str]] (batch of pretokenized examples)." 2454 ) ValueError: text input must of type str (single example), List[str] (batch or single pretokenized example) or List[List[str]] (batch of pretokenized examples). ``` Debugging this problem was not easy, all transcriptions in the dataset are correct strings. Finally I discovered that the transcription string 'null' is interpreted as [None] by the `load_dataset()` script. By deleting this row in the dataset the training worked fine. ## Expected result: transcription 'null' interpreted as 'str' instead of 'None'. ## Reproduction Here is the code to reproduce the error with a one-row dataset. ``` with open("null-test.csv") as f: reader = csv.reader(f) for row in reader: print(row) ``` ['wav_filename', 'wav_filesize', 'transcript'] ['wavs/female/NULL1.wav', '17530', 'null'] ``` dataset = load_dataset('csv', data_files={'train': 'null-test.csv'}) ``` Using custom data configuration default-81ac0c0e27af3514 Downloading and preparing dataset csv/default to /root/.cache/huggingface/datasets/csv/default-81ac0c0e27af3514/0.0.0/433e0ccc46f9880962cc2b12065189766fbb2bee57a221866138fb9203c83519... Downloading data files: 100% 1/1 [00:00<00:00, 29.55it/s] Extracting data files: 100% 1/1 [00:00<00:00, 23.66it/s] Dataset csv downloaded and prepared to /root/.cache/huggingface/datasets/csv/default-81ac0c0e27af3514/0.0.0/433e0ccc46f9880962cc2b12065189766fbb2bee57a221866138fb9203c83519. Subsequent calls will reuse this data. 100% 1/1 [00:00<00:00, 25.84it/s] ``` print(dataset['train']['transcript']) ``` [None] ## Environment info ``` !pip install datasets==2.2.2 !pip install transformers==4.19.2 ```
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova", "user_view_type": "public" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/4467/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/4467/timeline
null
completed
{ "completed": 0, "percent_completed": 0, "total": 0 }
{ "blocked_by": 0, "blocking": 0, "total_blocked_by": 0, "total_blocking": 0 }
false
https://api.github.com/repos/huggingface/datasets/issues/4462
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4462/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4462/comments
https://api.github.com/repos/huggingface/datasets/issues/4462/events
https://github.com/huggingface/datasets/issues/4462
1,265,079,347
I_kwDODunzps5LZ5Qz
4,462
BigBench: NonMatchingSplitsSizesError when passing a dataset configuration parameter
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq", "user_view_type": "public" }
[ { "color": "d73a4a", "default": true, "description": "Something isn't working", "id": 1935892857, "name": "bug", "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug" } ]
open
false
null
[]
null
[ "Why not adding `max_examples` as part of the config name?", "Yup it can also work, and maybe it's simpler this way. Opening a PR to fix bigbench instead of https://github.com/huggingface/datasets/pull/4463", "Hi @lhoestq,\r\n\r\nThank you for taking a look at this issue, and proposing a solution. \r\nUnfortuna...
2022-06-08T17:31:24Z
2022-07-05T07:39:55Z
null
MEMBER
null
null
null
null
As noticed in https://github.com/huggingface/datasets/pull/4125 when a dataset config class has a parameter that reduces the number of examples (e.g. named `max_examples`), then loading the dataset and passing `max_examples` raises `NonMatchingSplitsSizesError`. This is because it checks the expected number of examples of the config with the same name without taking into account the `max_examples` parameter. This can be fixed by checking the expected number of examples using the **config id** instead of the name. Indeed, the config id corresponds to the config name plus an optional suffix that depends on the config parameters.
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq", "user_view_type": "public" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/4462/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/4462/timeline
null
reopened
{ "completed": 0, "percent_completed": 0, "total": 0 }
{ "blocked_by": 0, "blocking": 0, "total_blocked_by": 0, "total_blocking": 0 }
false
https://api.github.com/repos/huggingface/datasets/issues/4461
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4461/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4461/comments
https://api.github.com/repos/huggingface/datasets/issues/4461/events
https://github.com/huggingface/datasets/issues/4461
1,264,800,451
I_kwDODunzps5LY1LD
4,461
AttributeError: module 'datasets' has no attribute 'load_dataset'
{ "avatar_url": "https://avatars.githubusercontent.com/u/59248970?v=4", "events_url": "https://api.github.com/users/AlexNLP/events{/privacy}", "followers_url": "https://api.github.com/users/AlexNLP/followers", "following_url": "https://api.github.com/users/AlexNLP/following{/other_user}", "gists_url": "https://api.github.com/users/AlexNLP/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/AlexNLP", "id": 59248970, "login": "AlexNLP", "node_id": "MDQ6VXNlcjU5MjQ4OTcw", "organizations_url": "https://api.github.com/users/AlexNLP/orgs", "received_events_url": "https://api.github.com/users/AlexNLP/received_events", "repos_url": "https://api.github.com/users/AlexNLP/repos", "site_admin": false, "starred_url": "https://api.github.com/users/AlexNLP/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/AlexNLP/subscriptions", "type": "User", "url": "https://api.github.com/users/AlexNLP", "user_view_type": "public" }
[ { "color": "d73a4a", "default": true, "description": "Something isn't working", "id": 1935892857, "name": "bug", "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug" } ]
closed
false
null
[]
null
[ "I'm having the same issue,Can you tell me how to solve it?", "I have the same issue, can you tell me how to solve it? Thanks", "I had a folder named 'datasets' so this is why it can't find the import, it's looking in the wrong place", "@briandw your comment saved my day 👍 " ]
2022-06-08T13:59:20Z
2024-03-25T12:58:29Z
2022-06-08T14:41:00Z
NONE
null
null
null
null
## Describe the bug I have pip installed datasets, but this package doesn't have these attributes: load_dataset, load_metric. ## Environment info - `datasets` version: 1.9.0 - Platform: Linux-5.13.0-44-generic-x86_64-with-debian-bullseye-sid - Python version: 3.6.13 - PyArrow version: 6.0.1
{ "avatar_url": "https://avatars.githubusercontent.com/u/59248970?v=4", "events_url": "https://api.github.com/users/AlexNLP/events{/privacy}", "followers_url": "https://api.github.com/users/AlexNLP/followers", "following_url": "https://api.github.com/users/AlexNLP/following{/other_user}", "gists_url": "https://api.github.com/users/AlexNLP/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/AlexNLP", "id": 59248970, "login": "AlexNLP", "node_id": "MDQ6VXNlcjU5MjQ4OTcw", "organizations_url": "https://api.github.com/users/AlexNLP/orgs", "received_events_url": "https://api.github.com/users/AlexNLP/received_events", "repos_url": "https://api.github.com/users/AlexNLP/repos", "site_admin": false, "starred_url": "https://api.github.com/users/AlexNLP/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/AlexNLP/subscriptions", "type": "User", "url": "https://api.github.com/users/AlexNLP", "user_view_type": "public" }
{ "+1": 1, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 1, "url": "https://api.github.com/repos/huggingface/datasets/issues/4461/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/4461/timeline
null
completed
{ "completed": 0, "percent_completed": 0, "total": 0 }
{ "blocked_by": 0, "blocking": 0, "total_blocked_by": 0, "total_blocking": 0 }
false
https://api.github.com/repos/huggingface/datasets/issues/4456
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4456/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4456/comments
https://api.github.com/repos/huggingface/datasets/issues/4456/events
https://github.com/huggingface/datasets/issues/4456
1,263,241,449
I_kwDODunzps5LS4jp
4,456
Workflow for Tabular data
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq", "user_view_type": "public" }
[ { "color": "a2eeef", "default": true, "description": "New feature or request", "id": 1935892871, "name": "enhancement", "node_id": "MDU6TGFiZWwxOTM1ODkyODcx", "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement" }, { "color": "c5def5", "default": fals...
open
false
null
[]
null
[ "I use below to load a dataset:\r\n```\r\ndataset = datasets.load_dataset(\"scikit-learn/auto-mpg\")\r\ndf = pd.DataFrame(dataset[\"train\"])\r\n```\r\nTBH as said, tabular folk split their own dataset, they sometimes have two splits, sometimes three. Maybe somehow avoiding it for tabular datasets might be good for...
2022-06-07T12:48:22Z
2023-03-06T08:53:55Z
null
MEMBER
null
null
null
null
Tabular data are treated very differently than data for NLP, audio, vision, etc. and therefore the workflow for tabular data in `datasets` is not ideal. For example for tabular data, it is common to use pandas/spark/dask to process the data, and then load the data into X and y (X is an array of features and y an array of labels), then train_test_split and finally feed the data to a machine learning model. In `datasets` the workflow is different: we use load_dataset, then map, then train_test_split (if we only have a train split) and we end up with columnar dataset splits, not formatted as X and y. Right now, it is already possible to convert a dataset from and to pandas, but there are still many things that could improve the workflow for tabular data: - be able to load the data into X and y - be able to load a dataset from the output of spark or dask (as far as I know it's usually csv or parquet files on S3/GCS/HDFS etc.) - support "unsplit" datasets explicitly, instead of putting everything in "train" by default cc @adrinjalali @merveenoyan feel free to complete/correct this :) Feel free to also share ideas of APIs that would be super intuitive in your opinion!
null
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 1, "heart": 0, "hooray": 1, "laugh": 0, "rocket": 0, "total_count": 2, "url": "https://api.github.com/repos/huggingface/datasets/issues/4456/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/4456/timeline
null
null
{ "completed": 0, "percent_completed": 0, "total": 0 }
{ "blocked_by": 0, "blocking": 0, "total_blocked_by": 0, "total_blocking": 0 }
false
https://api.github.com/repos/huggingface/datasets/issues/4454
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4454/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4454/comments
https://api.github.com/repos/huggingface/datasets/issues/4454/events
https://github.com/huggingface/datasets/issues/4454
1,262,674,973
I_kwDODunzps5LQuQd
4,454
Dataset Viewer issue for Yaxin/SemEval2015
{ "avatar_url": "https://avatars.githubusercontent.com/u/18160852?v=4", "events_url": "https://api.github.com/users/WithYouTo/events{/privacy}", "followers_url": "https://api.github.com/users/WithYouTo/followers", "following_url": "https://api.github.com/users/WithYouTo/following{/other_user}", "gists_url": "https://api.github.com/users/WithYouTo/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/WithYouTo", "id": 18160852, "login": "WithYouTo", "node_id": "MDQ6VXNlcjE4MTYwODUy", "organizations_url": "https://api.github.com/users/WithYouTo/orgs", "received_events_url": "https://api.github.com/users/WithYouTo/received_events", "repos_url": "https://api.github.com/users/WithYouTo/repos", "site_admin": false, "starred_url": "https://api.github.com/users/WithYouTo/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/WithYouTo/subscriptions", "type": "User", "url": "https://api.github.com/users/WithYouTo", "user_view_type": "public" }
[ { "color": "cfd3d7", "default": true, "description": "This issue or pull request already exists", "id": 1935892865, "name": "duplicate", "node_id": "MDU6TGFiZWwxOTM1ODkyODY1", "url": "https://api.github.com/repos/huggingface/datasets/labels/duplicate" }, { "color": "E5583E", ...
closed
false
{ "avatar_url": "https://avatars.githubusercontent.com/u/1676121?v=4", "events_url": "https://api.github.com/users/severo/events{/privacy}", "followers_url": "https://api.github.com/users/severo/followers", "following_url": "https://api.github.com/users/severo/following{/other_user}", "gists_url": "https://api.github.com/users/severo/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/severo", "id": 1676121, "login": "severo", "node_id": "MDQ6VXNlcjE2NzYxMjE=", "organizations_url": "https://api.github.com/users/severo/orgs", "received_events_url": "https://api.github.com/users/severo/received_events", "repos_url": "https://api.github.com/users/severo/repos", "site_admin": false, "starred_url": "https://api.github.com/users/severo/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/severo/subscriptions", "type": "User", "url": "https://api.github.com/users/severo", "user_view_type": "public" }
[ { "avatar_url": "https://avatars.githubusercontent.com/u/1676121?v=4", "events_url": "https://api.github.com/users/severo/events{/privacy}", "followers_url": "https://api.github.com/users/severo/followers", "following_url": "https://api.github.com/users/severo/following{/other_user}", "gists_url...
null
[ "Closing since it's a duplicate of https://github.com/huggingface/datasets/issues/4453" ]
2022-06-07T03:31:46Z
2022-06-07T11:53:11Z
2022-06-07T11:53:11Z
NONE
null
null
null
null
### Link _No response_ ### Description the link could not be visited ### Owner _No response_
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq", "user_view_type": "public" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/4454/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/4454/timeline
null
completed
{ "completed": 0, "percent_completed": 0, "total": 0 }
{ "blocked_by": 0, "blocking": 0, "total_blocked_by": 0, "total_blocking": 0 }
false
https://api.github.com/repos/huggingface/datasets/issues/4453
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4453/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4453/comments
https://api.github.com/repos/huggingface/datasets/issues/4453/events
https://github.com/huggingface/datasets/issues/4453
1,262,674,105
I_kwDODunzps5LQuC5
4,453
Dataset Viewer issue for Yaxin/SemEval2015
{ "avatar_url": "https://avatars.githubusercontent.com/u/18160852?v=4", "events_url": "https://api.github.com/users/WithYouTo/events{/privacy}", "followers_url": "https://api.github.com/users/WithYouTo/followers", "following_url": "https://api.github.com/users/WithYouTo/following{/other_user}", "gists_url": "https://api.github.com/users/WithYouTo/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/WithYouTo", "id": 18160852, "login": "WithYouTo", "node_id": "MDQ6VXNlcjE4MTYwODUy", "organizations_url": "https://api.github.com/users/WithYouTo/orgs", "received_events_url": "https://api.github.com/users/WithYouTo/received_events", "repos_url": "https://api.github.com/users/WithYouTo/repos", "site_admin": false, "starred_url": "https://api.github.com/users/WithYouTo/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/WithYouTo/subscriptions", "type": "User", "url": "https://api.github.com/users/WithYouTo", "user_view_type": "public" }
[]
closed
false
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova", "user_view_type": "public" }
[ { "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/o...
null
[ "I understand that the issue is that a remote file (URL) is being loaded as a local file. Right @albertvillanova @lhoestq?\r\n\r\n```\r\nMessage: [Errno 2] No such file or directory: 'https://raw.githubusercontent.com/YaxinCui/ABSADataset/main/SemEval2015Task12Corrected/train/restaurants_train.xml'\r\n```", ...
2022-06-07T03:30:08Z
2022-06-09T08:34:16Z
2022-06-09T08:34:16Z
NONE
null
null
null
null
### Link _No response_ ### Description _No response_ ### Owner _No response_
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova", "user_view_type": "public" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/4453/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/4453/timeline
null
completed
{ "completed": 0, "percent_completed": 0, "total": 0 }
{ "blocked_by": 0, "blocking": 0, "total_blocked_by": 0, "total_blocking": 0 }
false
https://api.github.com/repos/huggingface/datasets/issues/4452
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4452/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4452/comments
https://api.github.com/repos/huggingface/datasets/issues/4452/events
https://github.com/huggingface/datasets/issues/4452
1,262,529,654
I_kwDODunzps5LQKx2
4,452
Trying to load FEVER dataset results in NonMatchingChecksumError
{ "avatar_url": "https://avatars.githubusercontent.com/u/5347982?v=4", "events_url": "https://api.github.com/users/santhnm2/events{/privacy}", "followers_url": "https://api.github.com/users/santhnm2/followers", "following_url": "https://api.github.com/users/santhnm2/following{/other_user}", "gists_url": "https://api.github.com/users/santhnm2/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/santhnm2", "id": 5347982, "login": "santhnm2", "node_id": "MDQ6VXNlcjUzNDc5ODI=", "organizations_url": "https://api.github.com/users/santhnm2/orgs", "received_events_url": "https://api.github.com/users/santhnm2/received_events", "repos_url": "https://api.github.com/users/santhnm2/repos", "site_admin": false, "starred_url": "https://api.github.com/users/santhnm2/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/santhnm2/subscriptions", "type": "User", "url": "https://api.github.com/users/santhnm2", "user_view_type": "public" }
[ { "color": "d73a4a", "default": true, "description": "Something isn't working", "id": 1935892857, "name": "bug", "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug" } ]
closed
false
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova", "user_view_type": "public" }
[ { "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/o...
null
[ "Thanks for reporting @santhnm2. We are fixing it.\r\n\r\nData owners updated their URLs recently. We have to align with them, otherwise you do not download anything (that is why ignore_verifications does not work).", "Hello! Is there any update on this? I am having the same issue 6 months later." ]
2022-06-06T23:13:15Z
2022-12-15T13:36:40Z
2022-06-08T07:16:16Z
NONE
null
null
null
null
## Describe the bug Trying to load the `fever` dataset fails with `datasets.utils.info_utils.NonMatchingChecksumError`. I tried with `download_mode="force_redownload"` but that did not fix the error. I also tried with `ignore_verification=True` but then that raised a `json.decoder.JSONDecodeError`. ## Steps to reproduce the bug ```python from datasets import load_dataset dataset = load_dataset('fever', 'v1.0') # Fails with NonMatchingChecksumError dataset = load_dataset('fever', 'v1.0', download_mode="force_redownload") # Fails with NonMatchingChecksumError dataset = load_dataset('fever', 'v1.0', ignore_verification=True)` # Fails with JSONDecodeError ``` ## Expected results I expect this call to return with no error raised. ## Actual results With `ignore_verification=False`: ``` *** datasets.utils.info_utils.NonMatchingChecksumError: Checksums didn't match for dataset source files: ['https://s3-eu-west-1.amazonaws.com/fever.public/train.jsonl', 'https://s3-eu-west-1.amazonaws.com/fever.public/shared_task_dev.jsonl', 'https://s3-eu-west-1.amazonaws.com/fever.public/shared_task_dev_public.jsonl', 'https://s3-eu-west-1.amazonaws.com/fever.public/shared_task_test.jsonl', 'https://s3-eu-west-1.amazonaws.com/fever.public/paper_dev.jsonl', 'https://s3-eu-west-1.amazonaws.com/fever.public/paper_test.jsonl'] ``` With `ignore_verification=True`: ``` *** json.decoder.JSONDecodeError: Expecting value: line 1 column 1 (char 0) ``` ## Environment info <!-- You can run the command `datasets-cli env` and copy-and-paste its output below. --> - `datasets` version: 2.2.3.dev0 - Platform: Linux-4.15.0-50-generic-x86_64-with-glibc2.10 - Python version: 3.8.13 - PyArrow version: 8.0.0 - Pandas version: 1.4.2
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova", "user_view_type": "public" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/4452/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/4452/timeline
null
completed
{ "completed": 0, "percent_completed": 0, "total": 0 }
{ "blocked_by": 0, "blocking": 0, "total_blocked_by": 0, "total_blocking": 0 }
false
https://api.github.com/repos/huggingface/datasets/issues/4449
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4449/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4449/comments
https://api.github.com/repos/huggingface/datasets/issues/4449/events
https://github.com/huggingface/datasets/issues/4449
1,261,262,326
I_kwDODunzps5LLVX2
4,449
Rj
{ "avatar_url": "https://avatars.githubusercontent.com/u/87345839?v=4", "events_url": "https://api.github.com/users/Aeckard45/events{/privacy}", "followers_url": "https://api.github.com/users/Aeckard45/followers", "following_url": "https://api.github.com/users/Aeckard45/following{/other_user}", "gists_url": "https://api.github.com/users/Aeckard45/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/Aeckard45", "id": 87345839, "login": "Aeckard45", "node_id": "MDQ6VXNlcjg3MzQ1ODM5", "organizations_url": "https://api.github.com/users/Aeckard45/orgs", "received_events_url": "https://api.github.com/users/Aeckard45/received_events", "repos_url": "https://api.github.com/users/Aeckard45/repos", "site_admin": false, "starred_url": "https://api.github.com/users/Aeckard45/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Aeckard45/subscriptions", "type": "User", "url": "https://api.github.com/users/Aeckard45", "user_view_type": "public" }
[]
closed
false
null
[]
null
[]
2022-06-06T02:24:32Z
2022-06-06T15:44:50Z
2022-06-06T15:44:50Z
NONE
null
null
null
null
import android.content.DialogInterface; import android.database.Cursor; import android.os.Bundle; import android.view.View; import android.widget.ArrayAdapter; import android.widget.Button; import android.widget.EditText; import android.widget.Toast; import androidx.appcompat.app.AlertDialog; import androidx.appcompat.app.AppCompatActivity; public class MainActivity extends AppCompatActivity { private EditText editTextID; private EditText editTextName; private EditText editTextNum; private String name; private int number; private String ID; private dbHelper db; @Override protected void onCreate(Bundle savedInstanceState) { super.onCreate(savedInstanceState); setContentView(R.layout.activity_main); db = new dbHelper(this); editTextID = findViewById(R.id.editText1); editTextName = findViewById(R.id.editText2); editTextNum = findViewById(R.id.editText3); Button buttonSave = findViewById(R.id.button); Button buttonRead = findViewById(R.id.button2); Button buttonUpdate = findViewById(R.id.button3); Button buttonDelete = findViewById(R.id.button4); Button buttonSearch = findViewById(R.id.button5); Button buttonDeleteAll = findViewById(R.id.button6); buttonSave.setOnClickListener(new View.OnClickListener() { @Override public void onClick(View v) { name = editTextName.getText().toString(); String num = editTextNum.getText().toString(); if (name.isEmpty() || num.isEmpty()) { Toast.makeText(MainActivity.this, "Cannot Submit Empty Fields", Toast.LENGTH_SHORT).show(); } else { number = Integer.parseInt(num); try { // Insert Data db.insertData(name, number); // Clear the fields editTextID.getText().clear(); editTextName.getText().clear(); editTextNum.getText().clear(); } catch (Exception e) { e.printStackTrace(); } } } }); buttonRead.setOnClickListener(new View.OnClickListener() { @Override public void onClick(View v) { final ArrayAdapter<String> adapter = new ArrayAdapter<>(MainActivity.this, android.R.layout.simple_list_item_1); String name; String num; String id; try { Cursor cursor = db.readData(); if (cursor != null && cursor.getCount() > 0) { while (cursor.moveToNext()) { id = cursor.getString(0); // get data in column index 0 name = cursor.getString(1); // get data in column index 1 num = cursor.getString(2); // get data in column index 2 // Add SQLite data to listView adapter.add("ID :- " + id + "\n" + "Name :- " + name + "\n" + "Number :- " + num + "\n\n"); } } else { adapter.add("No Data"); } cursor.close(); } catch (Exception e) { e.printStackTrace(); } // show the saved data in alertDialog AlertDialog.Builder builder = new AlertDialog.Builder(MainActivity.this); builder.setTitle("SQLite saved data"); builder.setIcon(R.mipmap.app_icon_foreground); builder.setAdapter(adapter, new DialogInterface.OnClickListener() { @Override public void onClick(DialogInterface dialog, int which) { } }); builder.setPositiveButton("OK", new DialogInterface.OnClickListener() { @Override public void onClick(DialogInterface dialog, int which) { dialog.cancel(); } }); AlertDialog dialog = builder.create(); dialog.show(); } }); buttonUpdate.setOnClickListener(new View.OnClickListener() { @Override public void onClick(View v) { name = editTextName.getText().toString(); String num = editTextNum.getText().toString(); ID = editTextID.getText().toString(); if (name.isEmpty() || num.isEmpty() || ID.isEmpty()) { Toast.makeText(MainActivity.this, "Cannot Submit Empty Fields", Toast.LENGTH_SHORT).show(); } else { number = Integer.parseInt(num); try { // Update Data db.updateData(ID, name, number); // Clear the fields editTextID.getText().clear(); editTextName.getText().clear(); editTextNum.getText().clear(); } catch (Exception e) { e.printStackTrace(); } } } }); buttonDelete.setOnClickListener(new View.OnClickListener() { @Override public void onClick(View v) { ID = editTextID.getText().toString(); if (ID.isEmpty()) { Toast.makeText(MainActivity.this, "Please enter the ID", Toast.LENGTH_SHORT).show(); } else { try { // Delete Data db.deleteData(ID); // Clear the fields editTextID.getText().clear(); editTextName.getText().clear(); editTextNum.getText().clear(); } catch (Exception e) { e.printStackTrace(); } } } }); buttonDeleteAll.setOnClickListener(new View.OnClickListener() { @Override public void onClick(View v) { // Delete all data // You can simply delete all the data by calling this method --> db.deleteAllData(); // You can try this also AlertDialog.Builder builder = new AlertDialog.Builder(MainActivity.this); builder.setIcon(R.mipmap.app_icon_foreground); builder.setTitle("Delete All Data"); builder.setCancelable(false); builder.setMessage("Do you really need to delete your all data ?"); builder.setPositiveButton("Yes", new DialogInterface.OnClickListener() { @Override public void onClick(DialogInterface dialog, int which) { // User confirmed , now you can delete the data db.deleteAllData(); // Clear the fields editTextID.getText().clear(); editTextName.getText().clear(); editTextNum.getText().clear(); } }); builder.setNegativeButton("No", new DialogInterface.OnClickListener() { @Override public void onClick(DialogInterface dialog, int which) { // user not confirmed dialog.cancel(); } }); AlertDialog dialog = builder.create(); dialog.show(); } }); buttonSearch.setOnClickListener(new View.OnClickListener() { @Override public void onClick(View v) { ID = editTextID.getText().toString(); if (ID.isEmpty()) { Toast.makeText(MainActivity.this, "Please enter the ID", Toast.LENGTH_SHORT).show(); } else { try { // Search data Cursor cursor = db.searchData(ID); if (cursor.moveToFirst()) { editTextName.setText(cursor.getString(1)); editTextNum.setText(cursor.getString(2)); Toast.makeText(MainActivity.this, "Data successfully searched", Toast.LENGTH_SHORT).show(); } else { Toast.makeText(MainActivity.this, "ID not found", Toast.LENGTH_SHORT).show(); editTextNum.setText("ID Not found"); editTextName.setText("ID not found"); } cursor.close(); } catch (Exception e) { e.printStackTrace(); } } } }); } }
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq", "user_view_type": "public" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/4449/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/4449/timeline
null
completed
{ "completed": 0, "percent_completed": 0, "total": 0 }
{ "blocked_by": 0, "blocking": 0, "total_blocked_by": 0, "total_blocking": 0 }
false
https://api.github.com/repos/huggingface/datasets/issues/4448
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4448/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4448/comments
https://api.github.com/repos/huggingface/datasets/issues/4448/events
https://github.com/huggingface/datasets/issues/4448
1,260,966,129
I_kwDODunzps5LKNDx
4,448
New Preprocessing Feature - Deduplication [Request]
{ "avatar_url": "https://avatars.githubusercontent.com/u/57996478?v=4", "events_url": "https://api.github.com/users/yuvalkirstain/events{/privacy}", "followers_url": "https://api.github.com/users/yuvalkirstain/followers", "following_url": "https://api.github.com/users/yuvalkirstain/following{/other_user}", "gists_url": "https://api.github.com/users/yuvalkirstain/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/yuvalkirstain", "id": 57996478, "login": "yuvalkirstain", "node_id": "MDQ6VXNlcjU3OTk2NDc4", "organizations_url": "https://api.github.com/users/yuvalkirstain/orgs", "received_events_url": "https://api.github.com/users/yuvalkirstain/received_events", "repos_url": "https://api.github.com/users/yuvalkirstain/repos", "site_admin": false, "starred_url": "https://api.github.com/users/yuvalkirstain/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/yuvalkirstain/subscriptions", "type": "User", "url": "https://api.github.com/users/yuvalkirstain", "user_view_type": "public" }
[ { "color": "cfd3d7", "default": true, "description": "This issue or pull request already exists", "id": 1935892865, "name": "duplicate", "node_id": "MDU6TGFiZWwxOTM1ODkyODY1", "url": "https://api.github.com/repos/huggingface/datasets/labels/duplicate" }, { "color": "a2eeef", ...
open
false
null
[]
null
[ "Hi! The [datasets_sql](https://github.com/mariosasko/datasets_sql) package lets you easily find distinct rows in a dataset (an example with `SELECT DISTINCT` is in the readme). Deduplication is (still) not part of the official API because it's hard to implement for datasets bigger than RAM while only using the nat...
2022-06-05T05:32:56Z
2023-12-12T07:52:40Z
null
NONE
null
null
null
null
**Is your feature request related to a problem? Please describe.** Many large datasets are full of duplicates, and it has been shown that deduplicating datasets can lead to better performance while training, and more truthful evaluation at test-time. A feature that allows one to easily deduplicate a dataset can be cool! **Describe the solution you'd like** We can define a function and keep only the first/last data-point that yields the value according to this function. **Describe alternatives you've considered** The clear alternative is to repeat the same boilerplate every time someone wants to deduplicate a dataset.
null
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/4448/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/4448/timeline
null
null
{ "completed": 0, "percent_completed": 0, "total": 0 }
{ "blocked_by": 0, "blocking": 0, "total_blocked_by": 0, "total_blocking": 0 }
false
https://api.github.com/repos/huggingface/datasets/issues/4443
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4443/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4443/comments
https://api.github.com/repos/huggingface/datasets/issues/4443/events
https://github.com/huggingface/datasets/issues/4443
1,259,606,334
I_kwDODunzps5LFBE-
4,443
Dataset Viewer issue for openclimatefix/nimrod-uk-1km
{ "avatar_url": "https://avatars.githubusercontent.com/u/32382826?v=4", "events_url": "https://api.github.com/users/ZYMXIXI/events{/privacy}", "followers_url": "https://api.github.com/users/ZYMXIXI/followers", "following_url": "https://api.github.com/users/ZYMXIXI/following{/other_user}", "gists_url": "https://api.github.com/users/ZYMXIXI/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/ZYMXIXI", "id": 32382826, "login": "ZYMXIXI", "node_id": "MDQ6VXNlcjMyMzgyODI2", "organizations_url": "https://api.github.com/users/ZYMXIXI/orgs", "received_events_url": "https://api.github.com/users/ZYMXIXI/received_events", "repos_url": "https://api.github.com/users/ZYMXIXI/repos", "site_admin": false, "starred_url": "https://api.github.com/users/ZYMXIXI/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ZYMXIXI/subscriptions", "type": "User", "url": "https://api.github.com/users/ZYMXIXI", "user_view_type": "public" }
[]
open
false
null
[]
null
[ "If I understand correctly, this is due to the key `split` missing in the line https://huggingface.co/datasets/openclimatefix/nimrod-uk-1km/blob/main/nimrod-uk-1km.py#L41 of the script.\r\nMaybe @albertvillanova could confirm.", "I'm having a look.", "Indeed there are several issues in this dataset loading scri...
2022-06-03T08:17:16Z
2023-09-25T12:15:08Z
null
NONE
null
null
null
null
### Link _No response_ ### Description _No response_ ### Owner _No response_
null
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/4443/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/4443/timeline
null
null
{ "completed": 0, "percent_completed": 0, "total": 0 }
{ "blocked_by": 0, "blocking": 0, "total_blocked_by": 0, "total_blocking": 0 }
false
https://api.github.com/repos/huggingface/datasets/issues/4442
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4442/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4442/comments
https://api.github.com/repos/huggingface/datasets/issues/4442/events
https://github.com/huggingface/datasets/issues/4442
1,258,589,276
I_kwDODunzps5LBIxc
4,442
Dataset Viewer issue for amazon_polarity
{ "avatar_url": "https://avatars.githubusercontent.com/u/26859204?v=4", "events_url": "https://api.github.com/users/lewtun/events{/privacy}", "followers_url": "https://api.github.com/users/lewtun/followers", "following_url": "https://api.github.com/users/lewtun/following{/other_user}", "gists_url": "https://api.github.com/users/lewtun/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lewtun", "id": 26859204, "login": "lewtun", "node_id": "MDQ6VXNlcjI2ODU5MjA0", "organizations_url": "https://api.github.com/users/lewtun/orgs", "received_events_url": "https://api.github.com/users/lewtun/received_events", "repos_url": "https://api.github.com/users/lewtun/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lewtun/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lewtun/subscriptions", "type": "User", "url": "https://api.github.com/users/lewtun", "user_view_type": "public" }
[ { "color": "E5583E", "default": false, "description": "Related to the dataset viewer on huggingface.co", "id": 3470211881, "name": "dataset-viewer", "node_id": "LA_kwDODunzps7O1zsp", "url": "https://api.github.com/repos/huggingface/datasets/labels/dataset-viewer" } ]
closed
false
{ "avatar_url": "https://avatars.githubusercontent.com/u/1676121?v=4", "events_url": "https://api.github.com/users/severo/events{/privacy}", "followers_url": "https://api.github.com/users/severo/followers", "following_url": "https://api.github.com/users/severo/following{/other_user}", "gists_url": "https://api.github.com/users/severo/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/severo", "id": 1676121, "login": "severo", "node_id": "MDQ6VXNlcjE2NzYxMjE=", "organizations_url": "https://api.github.com/users/severo/orgs", "received_events_url": "https://api.github.com/users/severo/received_events", "repos_url": "https://api.github.com/users/severo/repos", "site_admin": false, "starred_url": "https://api.github.com/users/severo/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/severo/subscriptions", "type": "User", "url": "https://api.github.com/users/severo", "user_view_type": "public" }
[ { "avatar_url": "https://avatars.githubusercontent.com/u/1676121?v=4", "events_url": "https://api.github.com/users/severo/events{/privacy}", "followers_url": "https://api.github.com/users/severo/followers", "following_url": "https://api.github.com/users/severo/following{/other_user}", "gists_url...
null
[ "Thanks, looking at it", "Not sure what happened 😬, but it's fixed" ]
2022-06-02T19:18:38Z
2022-06-07T18:50:37Z
2022-06-07T18:50:37Z
MEMBER
null
null
null
null
### Link https://huggingface.co/datasets/amazon_polarity/viewer/amazon_polarity/test ### Description For some reason the train split is OK but the test split is not for this dataset: ``` Server error Status code: 400 Exception: FileNotFoundError Message: [Errno 2] No such file or directory: '/cache/modules/datasets_modules/datasets/amazon_polarity/__init__.py' ``` ### Owner No
{ "avatar_url": "https://avatars.githubusercontent.com/u/1676121?v=4", "events_url": "https://api.github.com/users/severo/events{/privacy}", "followers_url": "https://api.github.com/users/severo/followers", "following_url": "https://api.github.com/users/severo/following{/other_user}", "gists_url": "https://api.github.com/users/severo/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/severo", "id": 1676121, "login": "severo", "node_id": "MDQ6VXNlcjE2NzYxMjE=", "organizations_url": "https://api.github.com/users/severo/orgs", "received_events_url": "https://api.github.com/users/severo/received_events", "repos_url": "https://api.github.com/users/severo/repos", "site_admin": false, "starred_url": "https://api.github.com/users/severo/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/severo/subscriptions", "type": "User", "url": "https://api.github.com/users/severo", "user_view_type": "public" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/4442/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/4442/timeline
null
completed
{ "completed": 0, "percent_completed": 0, "total": 0 }
{ "blocked_by": 0, "blocking": 0, "total_blocked_by": 0, "total_blocking": 0 }
false
https://api.github.com/repos/huggingface/datasets/issues/4441
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4441/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4441/comments
https://api.github.com/repos/huggingface/datasets/issues/4441/events
https://github.com/huggingface/datasets/issues/4441
1,258,568,656
I_kwDODunzps5LBDvQ
4,441
Dataset Viewer issue for aeslc
{ "avatar_url": "https://avatars.githubusercontent.com/u/26859204?v=4", "events_url": "https://api.github.com/users/lewtun/events{/privacy}", "followers_url": "https://api.github.com/users/lewtun/followers", "following_url": "https://api.github.com/users/lewtun/following{/other_user}", "gists_url": "https://api.github.com/users/lewtun/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lewtun", "id": 26859204, "login": "lewtun", "node_id": "MDQ6VXNlcjI2ODU5MjA0", "organizations_url": "https://api.github.com/users/lewtun/orgs", "received_events_url": "https://api.github.com/users/lewtun/received_events", "repos_url": "https://api.github.com/users/lewtun/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lewtun/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lewtun/subscriptions", "type": "User", "url": "https://api.github.com/users/lewtun", "user_view_type": "public" }
[ { "color": "E5583E", "default": false, "description": "Related to the dataset viewer on huggingface.co", "id": 3470211881, "name": "dataset-viewer", "node_id": "LA_kwDODunzps7O1zsp", "url": "https://api.github.com/repos/huggingface/datasets/labels/dataset-viewer" } ]
closed
false
{ "avatar_url": "https://avatars.githubusercontent.com/u/1676121?v=4", "events_url": "https://api.github.com/users/severo/events{/privacy}", "followers_url": "https://api.github.com/users/severo/followers", "following_url": "https://api.github.com/users/severo/following{/other_user}", "gists_url": "https://api.github.com/users/severo/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/severo", "id": 1676121, "login": "severo", "node_id": "MDQ6VXNlcjE2NzYxMjE=", "organizations_url": "https://api.github.com/users/severo/orgs", "received_events_url": "https://api.github.com/users/severo/received_events", "repos_url": "https://api.github.com/users/severo/repos", "site_admin": false, "starred_url": "https://api.github.com/users/severo/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/severo/subscriptions", "type": "User", "url": "https://api.github.com/users/severo", "user_view_type": "public" }
[ { "avatar_url": "https://avatars.githubusercontent.com/u/1676121?v=4", "events_url": "https://api.github.com/users/severo/events{/privacy}", "followers_url": "https://api.github.com/users/severo/followers", "following_url": "https://api.github.com/users/severo/following{/other_user}", "gists_url...
null
[ "Not sure what happened 😬, but it's fixed" ]
2022-06-02T18:57:12Z
2022-06-07T18:50:55Z
2022-06-07T18:50:55Z
MEMBER
null
null
null
null
### Link https://huggingface.co/datasets/aeslc ### Description The dataset viewer can't find `dataset_infos.json` in its cache: ``` Server error Status code: 400 Exception: FileNotFoundError Message: [Errno 2] No such file or directory: '/cache/modules/datasets_modules/datasets/aeslc/eb8e30234cf984a58ebe9f205674597ac1db2ec91e7321cd7f36864f7e3671b8/dataset_infos.json' ``` ### Owner No
{ "avatar_url": "https://avatars.githubusercontent.com/u/1676121?v=4", "events_url": "https://api.github.com/users/severo/events{/privacy}", "followers_url": "https://api.github.com/users/severo/followers", "following_url": "https://api.github.com/users/severo/following{/other_user}", "gists_url": "https://api.github.com/users/severo/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/severo", "id": 1676121, "login": "severo", "node_id": "MDQ6VXNlcjE2NzYxMjE=", "organizations_url": "https://api.github.com/users/severo/orgs", "received_events_url": "https://api.github.com/users/severo/received_events", "repos_url": "https://api.github.com/users/severo/repos", "site_admin": false, "starred_url": "https://api.github.com/users/severo/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/severo/subscriptions", "type": "User", "url": "https://api.github.com/users/severo", "user_view_type": "public" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/4441/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/4441/timeline
null
completed
{ "completed": 0, "percent_completed": 0, "total": 0 }
{ "blocked_by": 0, "blocking": 0, "total_blocked_by": 0, "total_blocking": 0 }
false
https://api.github.com/repos/huggingface/datasets/issues/4439
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4439/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4439/comments
https://api.github.com/repos/huggingface/datasets/issues/4439/events
https://github.com/huggingface/datasets/issues/4439
1,258,434,111
I_kwDODunzps5LAi4_
4,439
TIMIT won't load after manual download: Errors about files that don't exist
{ "avatar_url": "https://avatars.githubusercontent.com/u/13925685?v=4", "events_url": "https://api.github.com/users/drscotthawley/events{/privacy}", "followers_url": "https://api.github.com/users/drscotthawley/followers", "following_url": "https://api.github.com/users/drscotthawley/following{/other_user}", "gists_url": "https://api.github.com/users/drscotthawley/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/drscotthawley", "id": 13925685, "login": "drscotthawley", "node_id": "MDQ6VXNlcjEzOTI1Njg1", "organizations_url": "https://api.github.com/users/drscotthawley/orgs", "received_events_url": "https://api.github.com/users/drscotthawley/received_events", "repos_url": "https://api.github.com/users/drscotthawley/repos", "site_admin": false, "starred_url": "https://api.github.com/users/drscotthawley/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/drscotthawley/subscriptions", "type": "User", "url": "https://api.github.com/users/drscotthawley", "user_view_type": "public" }
[ { "color": "d73a4a", "default": true, "description": "Something isn't working", "id": 1935892857, "name": "bug", "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug" } ]
closed
false
null
[]
null
[ "To have some context, please see:\r\n- #4145\r\n\r\nPlease, also note that we have recently made some fixes to the script, which are in our GitHub master branch but not yet released:\r\n- #4422\r\n- #4425 \r\n- #4436", "Thanks Albert! I'll try pulling `datasets` from the git repo instead of PyPI, and/or just wai...
2022-06-02T16:35:56Z
2022-06-03T08:44:17Z
2022-06-03T08:44:16Z
NONE
null
null
null
null
## Describe the bug I get the message from HuggingFace that it must be downloaded manually. From the URL provided in the message, I got to UPenn page for manual download. (UPenn apparently want $250? for the dataset??) ...So, ok, I obtained a copy from a friend and also a smaller version from Kaggle. But in both cases the HF dataloader fails; it is looking for files that don't exist anywhere in the dataset: it is looking for files with lower-case letters like "**test*" (all the filenames in both my copies are uppercase) and certain file extensions that exclude the .DOC which is provided in TIMIT: ## Steps to reproduce the bug ```python data = load_dataset('timit_asr', 'clean')['train'] ``` ## Expected results The dataset should load with no errors. ## Actual results This error message: ``` File "/home/ubuntu/envs/data2vec/lib/python3.9/site-packages/datasets/data_files.py", line 201, in resolve_patterns_locally_or_by_urls raise FileNotFoundError(error_msg) FileNotFoundError: Unable to resolve any data file that matches '['**test*', '**eval*']' at /home/ubuntu/datasets/timit with any supported extension ['csv', 'tsv', 'json', 'jsonl', 'parquet', 'txt', 'blp', 'bmp', 'dib', 'bufr', 'cur', 'pcx', 'dcx', 'dds', 'ps', 'eps', 'fit', 'fits', 'fli', 'flc', 'ftc', 'ftu', 'gbr', 'gif', 'grib', 'h5', 'hdf', 'png', 'apng', 'jp2', 'j2k', 'jpc', 'jpf', 'jpx', 'j2c', 'icns', 'ico', 'im', 'iim', 'tif', 'tiff', 'jfif', 'jpe', 'jpg', 'jpeg', 'mpg', 'mpeg', 'msp', 'pcd', 'pxr', 'pbm', 'pgm', 'ppm', 'pnm', 'psd', 'bw', 'rgb', 'rgba', 'sgi', 'ras', 'tga', 'icb', 'vda', 'vst', 'webp', 'wmf', 'emf', 'xbm', 'xpm', 'zip'] ``` But this is a strange sort of error: why is it looking for lower-case file names when all the TIMIT dataset filenames are uppercase? Why does it exclude .DOC files when the only parts of the TIMIT data set with "TEST" in them have ".DOC" extensions? ...I wonder, how was anyone able to get this to work in the first place? The files in the dataset look like the following: ``` ³ PHONCODE.DOC ³ PROMPTS.TXT ³ SPKRINFO.TXT ³ SPKRSENT.TXT ³ TESTSET.DOC ``` ...so why are these being excluded by the dataset loader? ## Environment info - `datasets` version: 2.2.2 - Platform: Linux-5.4.0-1060-aws-x86_64-with-glibc2.27 - Python version: 3.9.9 - PyArrow version: 8.0.0 - Pandas version: 1.4.2
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova", "user_view_type": "public" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/4439/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/4439/timeline
null
completed
{ "completed": 0, "percent_completed": 0, "total": 0 }
{ "blocked_by": 0, "blocking": 0, "total_blocked_by": 0, "total_blocking": 0 }
false
https://api.github.com/repos/huggingface/datasets/issues/4435
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4435/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4435/comments
https://api.github.com/repos/huggingface/datasets/issues/4435/events
https://github.com/huggingface/datasets/issues/4435
1,257,496,552
I_kwDODunzps5K89_o
4,435
Load a local cached dataset that has been modified
{ "avatar_url": "https://avatars.githubusercontent.com/u/2789441?v=4", "events_url": "https://api.github.com/users/mihail911/events{/privacy}", "followers_url": "https://api.github.com/users/mihail911/followers", "following_url": "https://api.github.com/users/mihail911/following{/other_user}", "gists_url": "https://api.github.com/users/mihail911/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/mihail911", "id": 2789441, "login": "mihail911", "node_id": "MDQ6VXNlcjI3ODk0NDE=", "organizations_url": "https://api.github.com/users/mihail911/orgs", "received_events_url": "https://api.github.com/users/mihail911/received_events", "repos_url": "https://api.github.com/users/mihail911/repos", "site_admin": false, "starred_url": "https://api.github.com/users/mihail911/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mihail911/subscriptions", "type": "User", "url": "https://api.github.com/users/mihail911", "user_view_type": "public" }
[ { "color": "d73a4a", "default": true, "description": "Something isn't working", "id": 1935892857, "name": "bug", "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug" } ]
closed
false
null
[]
null
[ "Hi! `datasets` caches every modification/loading, so you can either rerun the pipeline up to the `map` call or use `Dataset.from_file(modified_dataset)` to load the dataset directly from the cache file.", "Awesome, hvala Mario! This works. " ]
2022-06-02T01:51:49Z
2022-06-02T23:59:26Z
2022-06-02T23:59:18Z
NONE
null
null
null
null
## Describe the bug I have loaded a dataset as follows: ``` d = load_dataset("emotion", split="validation") ``` Afterwards I make some modifications to the dataset via a `map` call: ``` d.map(some_update_func, cache_file_name=modified_dataset) ``` This generates a cached version of the dataset on my local system in the same directory as the original download of the data (/path/to/cache). Running an `ls` returns: ``` modified_dataset dataset_info.json emotion-test.arrow emotion-train.arrow emotion-validation.arrow ``` as expected. However, when I try to load up the modified cached dataset via a call to ``` modified = load_dataset("emotion", split="validation", data_files="/path/to/cache/modified_dataset") ``` it simply redownloads a new version of the dataset and dumps to a new cache rather than loading up the original modified dataset: ``` Using custom data configuration validation-cdbf51685638421b Downloading and preparing dataset emotion/validation to ... ``` How am I supposed to load the original modified local cache copy of the dataset? ## Environment info - `datasets` version: 2.2.2 - Platform: Linux-5.4.0-113-generic-x86_64-with-glibc2.17 - Python version: 3.8.13 - PyArrow version: 8.0.0 - Pandas version: 1.4.2
{ "avatar_url": "https://avatars.githubusercontent.com/u/2789441?v=4", "events_url": "https://api.github.com/users/mihail911/events{/privacy}", "followers_url": "https://api.github.com/users/mihail911/followers", "following_url": "https://api.github.com/users/mihail911/following{/other_user}", "gists_url": "https://api.github.com/users/mihail911/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/mihail911", "id": 2789441, "login": "mihail911", "node_id": "MDQ6VXNlcjI3ODk0NDE=", "organizations_url": "https://api.github.com/users/mihail911/orgs", "received_events_url": "https://api.github.com/users/mihail911/received_events", "repos_url": "https://api.github.com/users/mihail911/repos", "site_admin": false, "starred_url": "https://api.github.com/users/mihail911/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mihail911/subscriptions", "type": "User", "url": "https://api.github.com/users/mihail911", "user_view_type": "public" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/4435/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/4435/timeline
null
completed
{ "completed": 0, "percent_completed": 0, "total": 0 }
{ "blocked_by": 0, "blocking": 0, "total_blocked_by": 0, "total_blocking": 0 }
false
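For the record above (issue 4435), the accepted answer points at `Dataset.from_file`; the following is a minimal sketch of that route, assuming the illustrative cache path `/path/to/cache/modified_dataset` from the report and a placeholder update function:

```python
from datasets import Dataset, load_dataset

# Build and cache a modified copy of the split, as in the report
# (the lambda is a placeholder for the report's some_update_func).
d = load_dataset("emotion", split="validation")
d = d.map(lambda ex: ex, cache_file_name="/path/to/cache/modified_dataset")

# Reload the modified copy straight from its Arrow cache file instead of
# calling load_dataset again, which would re-download and re-prepare the data.
modified = Dataset.from_file("/path/to/cache/modified_dataset")
print(modified)
```

`from_file` memory-maps the existing Arrow file, so nothing is copied or re-processed on reload.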
https://api.github.com/repos/huggingface/datasets/issues/4430
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4430/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4430/comments
https://api.github.com/repos/huggingface/datasets/issues/4430/events
https://github.com/huggingface/datasets/issues/4430
1,254,412,591
I_kwDODunzps5KxNEv
4,430
Add ability to load newer, cleaner version of Multi-News
{ "avatar_url": "https://avatars.githubusercontent.com/u/8917831?v=4", "events_url": "https://api.github.com/users/JohnGiorgi/events{/privacy}", "followers_url": "https://api.github.com/users/JohnGiorgi/followers", "following_url": "https://api.github.com/users/JohnGiorgi/following{/other_user}", "gists_url": "https://api.github.com/users/JohnGiorgi/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/JohnGiorgi", "id": 8917831, "login": "JohnGiorgi", "node_id": "MDQ6VXNlcjg5MTc4MzE=", "organizations_url": "https://api.github.com/users/JohnGiorgi/orgs", "received_events_url": "https://api.github.com/users/JohnGiorgi/received_events", "repos_url": "https://api.github.com/users/JohnGiorgi/repos", "site_admin": false, "starred_url": "https://api.github.com/users/JohnGiorgi/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/JohnGiorgi/subscriptions", "type": "User", "url": "https://api.github.com/users/JohnGiorgi", "user_view_type": "public" }
[ { "color": "a2eeef", "default": true, "description": "New feature or request", "id": 1935892871, "name": "enhancement", "node_id": "MDU6TGFiZWwxOTM1ODkyODcx", "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement" } ]
closed
false
null
[]
null
[ "Hi! Our versioning is based on Git revisions (the `revision` param in `load_dataset`), so you can just replace the old URL with the new one and open a PR :). I can also give you some pointers if needed.", "@mariosasko Awesome thanks! I will do that. Looks like this new version of the data is not available as a z...
2022-05-31T21:00:44Z
2022-06-07T17:14:44Z
2022-06-07T17:14:44Z
CONTRIBUTOR
null
null
null
null
**Is your feature request related to a problem? Please describe.** The [Multi-News dataloader points to the original version of the Multi-News dataset](https://github.com/huggingface/datasets/blob/12540dd75015678ec6019f258d811ee107439a73/datasets/multi_news/multi_news.py#L47), but this has [known errors in it](https://github.com/Alex-Fabbri/Multi-News/issues/11). There exists a [newer version which fixes some of these issues](https://drive.google.com/open?id=1jwBzXBVv8sfnFrlzPnSUBHEEAbpIUnFq). Unfortunately I don't think you can just replace this old URL with the new one, otherwise this could lead to issues with reproducibility. **Describe the solution you'd like** Add a new version to the Multi-News dataloader that points to the updated dataset which has fixes for some known issues. **Describe alternatives you've considered** Replace the current URL to the original version to the dataset with the URL to the version with fixes. **Additional context** Would be happy to make a PR for this, could someone maybe point me to another dataloader that has multiple versions so I can see how this is handled in `datasets`?
{ "avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4", "events_url": "https://api.github.com/users/mariosasko/events{/privacy}", "followers_url": "https://api.github.com/users/mariosasko/followers", "following_url": "https://api.github.com/users/mariosasko/following{/other_user}", "gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/mariosasko", "id": 47462742, "login": "mariosasko", "node_id": "MDQ6VXNlcjQ3NDYyNzQy", "organizations_url": "https://api.github.com/users/mariosasko/orgs", "received_events_url": "https://api.github.com/users/mariosasko/received_events", "repos_url": "https://api.github.com/users/mariosasko/repos", "site_admin": false, "starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions", "type": "User", "url": "https://api.github.com/users/mariosasko", "user_view_type": "public" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/4430/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/4430/timeline
null
completed
{ "completed": 0, "percent_completed": 0, "total": 0 }
{ "blocked_by": 0, "blocking": 0, "total_blocked_by": 0, "total_blocking": 0 }
false
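As suggested in the thread of issue 4430 above, script versioning goes through the `revision` argument of `load_dataset`; a brief sketch, where the revision strings are only illustrative placeholders, not actual fixes:

```python
from datasets import load_dataset

# Pin the Multi-News loading script to a specific Git revision of the
# datasets repository (a branch, tag, or commit SHA), e.g. to keep results
# reproducible across a URL/data fix.
fixed = load_dataset("multi_news", revision="master")
pinned = load_dataset("multi_news", revision="2.2.2")
```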
https://api.github.com/repos/huggingface/datasets/issues/4428
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4428/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4428/comments
https://api.github.com/repos/huggingface/datasets/issues/4428/events
https://github.com/huggingface/datasets/issues/4428
1,254,092,818
I_kwDODunzps5Kv_AS
4,428
Errors when building dummy data if you use nested _URLS
{ "avatar_url": "https://avatars.githubusercontent.com/u/2529049?v=4", "events_url": "https://api.github.com/users/silverriver/events{/privacy}", "followers_url": "https://api.github.com/users/silverriver/followers", "following_url": "https://api.github.com/users/silverriver/following{/other_user}", "gists_url": "https://api.github.com/users/silverriver/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/silverriver", "id": 2529049, "login": "silverriver", "node_id": "MDQ6VXNlcjI1MjkwNDk=", "organizations_url": "https://api.github.com/users/silverriver/orgs", "received_events_url": "https://api.github.com/users/silverriver/received_events", "repos_url": "https://api.github.com/users/silverriver/repos", "site_admin": false, "starred_url": "https://api.github.com/users/silverriver/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/silverriver/subscriptions", "type": "User", "url": "https://api.github.com/users/silverriver", "user_view_type": "public" }
[ { "color": "d73a4a", "default": true, "description": "Something isn't working", "id": 1935892857, "name": "bug", "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug" } ]
closed
false
null
[]
null
[]
2022-05-31T16:10:57Z
2022-06-07T09:24:09Z
2022-06-07T09:24:09Z
CONTRIBUTOR
null
null
null
null
## Describe the bug When making dummy data with the `datasets-cli dummy_data` tool, an error will be raised if you use a nested _URLS in your dataset script. Traceback (most recent call last): File "/home/name/LCCC/datasets/src/datasets/commands/datasets_cli.py", line 43, in <module> main() File "/home/name/LCCC/datasets/src/datasets/commands/datasets_cli.py", line 39, in main service.run() File "/home/name/LCCC/datasets/src/datasets/commands/dummy_data.py", line 311, in run self._autogenerate_dummy_data( File "/home/name/LCCC/datasets/src/datasets/commands/dummy_data.py", line 337, in _autogenerate_dummy_data dataset_builder._split_generators(dl_manager) File "/home/name/.cache/huggingface/modules/datasets_modules/datasets/personal_dialog/559332bced5eeafa7f7efc2a7c10ce02cee2a8116bbab4611c35a50ba2715b77/personal_dialog.py", line 108, in _split_generators data_dir = dl_manager.download_and_extract(urls) File "/home/name/LCCC/datasets/src/datasets/commands/dummy_data.py", line 56, in download_and_extract dummy_output = self.mock_download_manager.download(url_or_urls) File "/home/name/LCCC/datasets/src/datasets/download/mock_download_manager.py", line 130, in download return self.download_and_extract(data_url) File "/home/name/LCCC/datasets/src/datasets/download/mock_download_manager.py", line 122, in download_and_extract return self.create_dummy_data_dict(dummy_file, data_url) File "/home/name/LCCC/datasets/src/datasets/download/mock_download_manager.py", line 165, in create_dummy_data_dict if isinstance(first_value, str) and len(set(dummy_data_dict.values())) < len(dummy_data_dict.values()): TypeError: unhashable type: 'list' ## Steps to reproduce the bug You can use my dataset script implemented here: https://github.com/silverriver/datasets/blob/2ecd36760c40b8e29b1137cd19b5bad0e19c76fd/datasets/personal_dialog/personal_dialog.py ```python datasets_cli dummy_data datasets/personal_dialog --auto_generate ``` You can change https://github.com/silverriver/datasets/blob/2ecd36760c40b8e29b1137cd19b5bad0e19c76fd/datasets/personal_dialog/personal_dialog.py#L54 to ``` "train": "https://huggingface.co/datasets/silver/personal_dialog/resolve/main/dev_random.jsonl.gz" ``` before runing the above script to avoid downloading a large training data. ## Expected results The dummy data should be generated ## Actual results An error is raised. It seems that in https://github.com/huggingface/datasets/blob/12540dd75015678ec6019f258d811ee107439a73/src/datasets/download/mock_download_manager.py#L165 We only check if the first item of dummy_data_dict.values() is str. However, dummy_data_dict.values() may have the type of [str, list, list]. A simple fix would be changing https://github.com/huggingface/datasets/blob/12540dd75015678ec6019f258d811ee107439a73/src/datasets/download/mock_download_manager.py#L165 to ```python if all([isinstance(value, str) for value in dummy_data_dict.values()]) and len(set(dummy_data_dict.values())) < len(dummy_data_dict.values()): ``` But I don't know if this kinds of change may bring any side effect since I am not sure about the detail logic here. ## Environment info <!-- You can run the command `datasets-cli env` and copy-and-paste its output below. --> - `datasets` version: - Platform: Linux - Python version: Python 3.9.10 - PyArrow version: 7.0.0
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq", "user_view_type": "public" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/4428/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/4428/timeline
null
completed
{ "completed": 0, "percent_completed": 0, "total": 0 }
{ "blocked_by": 0, "blocking": 0, "total_blocked_by": 0, "total_blocking": 0 }
false
https://api.github.com/repos/huggingface/datasets/issues/4426
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4426/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4426/comments
https://api.github.com/repos/huggingface/datasets/issues/4426/events
https://github.com/huggingface/datasets/issues/4426
1,253,887,311
I_kwDODunzps5KvM1P
4,426
Add loading variable number of columns for different splits
{ "avatar_url": "https://avatars.githubusercontent.com/u/22641583?v=4", "events_url": "https://api.github.com/users/DrMatters/events{/privacy}", "followers_url": "https://api.github.com/users/DrMatters/followers", "following_url": "https://api.github.com/users/DrMatters/following{/other_user}", "gists_url": "https://api.github.com/users/DrMatters/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/DrMatters", "id": 22641583, "login": "DrMatters", "node_id": "MDQ6VXNlcjIyNjQxNTgz", "organizations_url": "https://api.github.com/users/DrMatters/orgs", "received_events_url": "https://api.github.com/users/DrMatters/received_events", "repos_url": "https://api.github.com/users/DrMatters/repos", "site_admin": false, "starred_url": "https://api.github.com/users/DrMatters/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/DrMatters/subscriptions", "type": "User", "url": "https://api.github.com/users/DrMatters", "user_view_type": "public" }
[ { "color": "a2eeef", "default": true, "description": "New feature or request", "id": 1935892871, "name": "enhancement", "node_id": "MDU6TGFiZWwxOTM1ODkyODcx", "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement" } ]
closed
false
null
[]
null
[ "Hi! Indeed the column is missing, but you shouldn't get an error? Have you made some modifications (locally) to the loading script? I've opened a PR to add the missing columns to the script. " ]
2022-05-31T13:40:16Z
2022-06-03T16:25:25Z
2022-06-03T16:25:25Z
NONE
null
null
null
null
**Is your feature request related to a problem? Please describe.** The original dataset `blended_skill_talk` consists of different sets of columns for the different splits: (test/valid) splits have an additional data column `label_candidates` that the (train) split doesn't have. When loading such data, an exception occurs at table.py:cast_table_to_schema, because of mismatched columns.
{ "avatar_url": "https://avatars.githubusercontent.com/u/22641583?v=4", "events_url": "https://api.github.com/users/DrMatters/events{/privacy}", "followers_url": "https://api.github.com/users/DrMatters/followers", "following_url": "https://api.github.com/users/DrMatters/following{/other_user}", "gists_url": "https://api.github.com/users/DrMatters/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/DrMatters", "id": 22641583, "login": "DrMatters", "node_id": "MDQ6VXNlcjIyNjQxNTgz", "organizations_url": "https://api.github.com/users/DrMatters/orgs", "received_events_url": "https://api.github.com/users/DrMatters/received_events", "repos_url": "https://api.github.com/users/DrMatters/repos", "site_admin": false, "starred_url": "https://api.github.com/users/DrMatters/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/DrMatters/subscriptions", "type": "User", "url": "https://api.github.com/users/DrMatters", "user_view_type": "public" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/4426/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/4426/timeline
null
completed
{ "completed": 0, "percent_completed": 0, "total": 0 }
{ "blocked_by": 0, "blocking": 0, "total_blocked_by": 0, "total_blocking": 0 }
false
https://api.github.com/repos/huggingface/datasets/issues/4422
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4422/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4422/comments
https://api.github.com/repos/huggingface/datasets/issues/4422/events
https://github.com/huggingface/datasets/issues/4422
1,253,146,511
I_kwDODunzps5KsX-P
4,422
Cannot load timit_asr data set
{ "avatar_url": "https://avatars.githubusercontent.com/u/992795?v=4", "events_url": "https://api.github.com/users/bhaddow/events{/privacy}", "followers_url": "https://api.github.com/users/bhaddow/followers", "following_url": "https://api.github.com/users/bhaddow/following{/other_user}", "gists_url": "https://api.github.com/users/bhaddow/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/bhaddow", "id": 992795, "login": "bhaddow", "node_id": "MDQ6VXNlcjk5Mjc5NQ==", "organizations_url": "https://api.github.com/users/bhaddow/orgs", "received_events_url": "https://api.github.com/users/bhaddow/received_events", "repos_url": "https://api.github.com/users/bhaddow/repos", "site_admin": false, "starred_url": "https://api.github.com/users/bhaddow/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/bhaddow/subscriptions", "type": "User", "url": "https://api.github.com/users/bhaddow", "user_view_type": "public" }
[ { "color": "d73a4a", "default": true, "description": "Something isn't working", "id": 1935892857, "name": "bug", "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug" } ]
closed
false
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova", "user_view_type": "public" }
[ { "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/o...
null
[ "Thanks for reporting, @bhaddow.\r\n\r\nI'm fixing it.", "Thanks for the quick fix!", "@bhaddow we have also made a fix so that you don't have to convert to uppercase the file extensions of the LDC data.\r\n\r\nWould you mind checking if it works OK now for you and reporting if there are any issues? Thanks. ", ...
2022-05-30T22:00:22Z
2022-06-02T06:34:05Z
2022-05-31T13:42:31Z
NONE
null
null
null
null
## Describe the bug I am trying to load the timit_asr data set. I have tried with a copy from the LDC, and a copy from deepai. In both cases they fail with a "duplicate key" error. With the LDC version I have to convert the file extensions all to upper-case before I can load it at all. ## Steps to reproduce the bug ```python timit = datasets.load_dataset("timit_asr", data_dir = "/path/to/dataset") # Sample code to reproduce the bug ``` ## Expected results The data set should load without error. It worked for me before the LDC url change. ## Actual results ``` datasets.keyhash.DuplicatedKeysError: FAILURE TO GENERATE DATASET ! Found duplicate Key: SA1 Keys should be unique and deterministic in nature ``` ## Environment info <!-- You can run the command `datasets-cli env` and copy-and-paste its output below. --> - `datasets` version: - `datasets` version: 2.2.2 - Platform: Linux-5.4.0-90-generic-x86_64-with-glibc2.17 - Python version: 3.8.12 - PyArrow version: 8.0.0 - Pandas version: 1.4.2
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova", "user_view_type": "public" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/4422/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/4422/timeline
null
completed
{ "completed": 0, "percent_completed": 0, "total": 0 }
{ "blocked_by": 0, "blocking": 0, "total_blocked_by": 0, "total_blocking": 0 }
false
https://api.github.com/repos/huggingface/datasets/issues/4420
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4420/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4420/comments
https://api.github.com/repos/huggingface/datasets/issues/4420/events
https://github.com/huggingface/datasets/issues/4420
1,252,739,239
I_kwDODunzps5Kq0in
4,420
Metric evaluation problems in multi-node, shared file system
{ "avatar_url": "https://avatars.githubusercontent.com/u/40303490?v=4", "events_url": "https://api.github.com/users/gullabi/events{/privacy}", "followers_url": "https://api.github.com/users/gullabi/followers", "following_url": "https://api.github.com/users/gullabi/following{/other_user}", "gists_url": "https://api.github.com/users/gullabi/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/gullabi", "id": 40303490, "login": "gullabi", "node_id": "MDQ6VXNlcjQwMzAzNDkw", "organizations_url": "https://api.github.com/users/gullabi/orgs", "received_events_url": "https://api.github.com/users/gullabi/received_events", "repos_url": "https://api.github.com/users/gullabi/repos", "site_admin": false, "starred_url": "https://api.github.com/users/gullabi/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/gullabi/subscriptions", "type": "User", "url": "https://api.github.com/users/gullabi", "user_view_type": "public" }
[ { "color": "d73a4a", "default": true, "description": "Something isn't working", "id": 1935892857, "name": "bug", "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug" } ]
closed
false
null
[]
null
[ "If you call `metric.compute` in a distributed setup like yours, then `metric.compute` is called in each process. `metric.compute` first calls `metric.add_batch`, and it looks like your error appears at that stage.\r\n\r\nTo make sure that all the processes have started writing their predictions/references at the s...
2022-05-30T13:24:05Z
2023-07-11T09:33:18Z
2023-07-11T09:33:17Z
NONE
null
null
null
null
## Describe the bug Metric evaluation fails in multi-node within a shared file system, because the master process cannot find the lock files from other nodes. (This issue was originally mentioned in the transformers repo https://github.com/huggingface/transformers/issues/17412) ## Steps to reproduce the bug 1. clone [this huggingface model](https://huggingface.co/PereLluis13/wav2vec2-xls-r-300m-ca-lm) and replace the `run_speech_recognition_ctc.py` script with the version in the gist [here](https://gist.github.com/gullabi/3f66094caa8db1c1e615dd35bd67ec71#file-run_speech_recognition_ctc-py). 2. Setup the `venv` according to the requirements of the model file plus `datasets==2.0.0`, `transformers==4.18.0` and `torch==1.9.0` 3. Launch the runner in a distributed environment which has a shared file system for two nodes, preferably with SLURM. Example [here](https://gist.github.com/gullabi/3f66094caa8db1c1e615dd35bd67ec71) Specifically for the datasets, for the distributed setup the `load_metric` is called as: ``` process_id=int(os.environ["RANK"]) num_process=int(os.environ["WORLD_SIZE"]) eval_metrics = {metric: load_metric(metric, process_id=process_id, num_process=num_process, experiment_id="slurm") for metric in data_args.eval_metrics} ``` ## Expected results The training should not fail, due to the failure of the `Metric.compute()` step. ## Actual results For the test I am executing the world size is 4, with 2 GPUs in 2 nodes. However the process is not finding the necessary lock files ``` File "/gpfs/projects/bsc88/speech/asr/wav2vec2-xls-r-300m-ca-lm/run_speech_recognition_ctc.py", line 841, in <module> main() File "/gpfs/projects/bsc88/speech/asr/wav2vec2-xls-r-300m-ca-lm/run_speech_recognition_ctc.py", line 792, in main train_result = trainer.train(resume_from_checkpoint=checkpoint) File "/gpfs/projects/bsc88/projects/speech-tech-resources/venv_amd_speech/lib/python3.7/site-packages/transformers/trainer.py", line 1497, in train self._maybe_log_save_evaluate(tr_loss, model, trial, epoch, ignore_keys_for_eval) File "/gpfs/projects/bsc88/projects/speech-tech-resources/venv_amd_speech/lib/python3.7/site-packages/transformers/trainer.py", line 1624, in _maybe_log_save_evaluate metrics = self.evaluate(ignore_keys=ignore_keys_for_eval) File "/gpfs/projects/bsc88/projects/speech-tech-resources/venv_amd_speech/lib/python3.7/site-packages/transformers/trainer.py", line 2291, in evaluate metric_key_prefix=metric_key_prefix, File "/gpfs/projects/bsc88/projects/speech-tech-resources/venv_amd_speech/lib/python3.7/site-packages/transformers/trainer.py", line 2535, in evaluation_loop metrics = self.compute_metrics(EvalPrediction(predictions=all_preds, label_ids=all_labels)) File "/gpfs/projects/bsc88/speech/asr/wav2vec2-xls-r-300m-ca-lm/run_speech_recognition_ctc.py", line 742, in compute_metrics metrics = {k: v.compute(predictions=pred_str, references=label_str) for k, v in eval_metrics.items()} File "/gpfs/projects/bsc88/speech/asr/wav2vec2-xls-r-300m-ca-lm/run_speech_recognition_ctc.py", line 742, in <dictcomp> metrics = {k: v.compute(predictions=pred_str, references=label_str) for k, v in eval_metrics.items()} File "/gpfs/projects/bsc88/projects/speech-tech-resources/venv_amd_speech/lib/python3.7/site-packages/datasets/metric.py", line 419, in compute self.add_batch(**inputs) File "/gpfs/projects/bsc88/projects/speech-tech-resources/venv_amd_speech/lib/python3.7/site-packages/datasets/metric.py", line 465, in add_batch self._init_writer() File 
"/gpfs/projects/bsc88/projects/speech-tech-resources/venv_amd_speech/lib/python3.7/site-packages/datasets/metric.py", line 552, in _init_writer self._check_rendez_vous() # wait for master to be ready and to let everyone go File "/gpfs/projects/bsc88/projects/speech-tech-resources/venv_amd_speech/lib/python3.7/site-packages/datasets/metric.py", line 342, in _check_rendez_vous ) from None ValueError: Expected to find locked file /home/bsc88/bsc88474/.cache/huggingface/metrics/wer/default/slurm-4-0.arrow.lock from process 3 but it doesn't exist. ``` When I look at the cache directory, I can see all the lock files in principle: ``` /home/bsc88/bsc88474/.cache/huggingface/metrics/wer/default/slurm-4-0.arrow /home/bsc88/bsc88474/.cache/huggingface/metrics/wer/default/slurm-4-0.arrow.lock /home/bsc88/bsc88474/.cache/huggingface/metrics/wer/default/slurm-4-1.arrow /home/bsc88/bsc88474/.cache/huggingface/metrics/wer/default/slurm-4-1.arrow.lock /home/bsc88/bsc88474/.cache/huggingface/metrics/wer/default/slurm-4-2.arrow /home/bsc88/bsc88474/.cache/huggingface/metrics/wer/default/slurm-4-2.arrow.lock /home/bsc88/bsc88474/.cache/huggingface/metrics/wer/default/slurm-4-3.arrow /home/bsc88/bsc88474/.cache/huggingface/metrics/wer/default/slurm-4-3.arrow.lock /home/bsc88/bsc88474/.cache/huggingface/metrics/wer/default/slurm-4-rdv.lock ``` I see that there was another related issue here https://github.com/huggingface/datasets/issues/1942, but it seems to have resolved via https://github.com/huggingface/datasets/pull/1966. Let me know if there is problem with how I am calling the `load_metric` or whether I need to make changes to the `.compute()` steps. ## Environment info - `datasets` version: 2.0.0 - Platform: Linux-4.18.0-147.8.1.el8_1.x86_64-x86_64-with-centos-8.1.1911-Core - Python version: 3.7.4 - PyArrow version: 7.0.0 - Pandas version: 1.3.0
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova", "user_view_type": "public" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/4420/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/4420/timeline
null
completed
{ "completed": 0, "percent_completed": 0, "total": 0 }
{ "blocked_by": 0, "blocking": 0, "total_blocked_by": 0, "total_blocking": 0 }
false
https://api.github.com/repos/huggingface/datasets/issues/4419
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4419/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4419/comments
https://api.github.com/repos/huggingface/datasets/issues/4419/events
https://github.com/huggingface/datasets/issues/4419
1,252,652,896
I_kwDODunzps5Kqfdg
4,419
Update `unittest` assertions over tuples from `assertEqual` to `assertTupleEqual`
{ "avatar_url": "https://avatars.githubusercontent.com/u/36760800?v=4", "events_url": "https://api.github.com/users/alvarobartt/events{/privacy}", "followers_url": "https://api.github.com/users/alvarobartt/followers", "following_url": "https://api.github.com/users/alvarobartt/following{/other_user}", "gists_url": "https://api.github.com/users/alvarobartt/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/alvarobartt", "id": 36760800, "login": "alvarobartt", "node_id": "MDQ6VXNlcjM2NzYwODAw", "organizations_url": "https://api.github.com/users/alvarobartt/orgs", "received_events_url": "https://api.github.com/users/alvarobartt/received_events", "repos_url": "https://api.github.com/users/alvarobartt/repos", "site_admin": false, "starred_url": "https://api.github.com/users/alvarobartt/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/alvarobartt/subscriptions", "type": "User", "url": "https://api.github.com/users/alvarobartt", "user_view_type": "public" }
[ { "color": "a2eeef", "default": true, "description": "New feature or request", "id": 1935892871, "name": "enhancement", "node_id": "MDU6TGFiZWwxOTM1ODkyODcx", "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement" } ]
closed
false
null
[]
null
[ "Hi! If the only goal is to improve readability, it's better to use `assertTupleEqual` than `assertSequenceEqual` for Python tuples. Also, note that this function is called internally by `assertEqual`, but I guess we can accept a PR to be more verbose.", "Hi @mariosasko, right! I'll update the issue title/desc wi...
2022-05-30T12:13:18Z
2022-09-30T16:01:37Z
2022-09-30T16:01:37Z
MEMBER
null
null
null
null
**Is your feature request related to a problem? Please describe.** So this is more a readability improvement rather than a proposal, wouldn't it be better to use `assertTupleEqual` over the tuples rather than `assertEqual`? As `unittest` added that function in `v3.1`, as detailed at https://docs.python.org/3/library/unittest.html#unittest.TestCase.assertTupleEqual, so maybe it's worth updating. Find an example of an `assertEqual` over a tuple in 🤗 `datasets` unit tests over an `ArrowDataset` at https://github.com/huggingface/datasets/blob/0bb47271910c8a0b628dba157988372307fca1d2/tests/test_arrow_dataset.py#L570 **Describe the solution you'd like** Start slowly replacing all the `assertEqual` statements with `assertTupleEqual` if the assertion is done over a Python tuple, as we're doing with the Python lists using `assertListEqual` rather than `assertEqual`. **Additional context** If so, please let me know and I'll try to go over the tests and create a PR if applicable, otherwise, if you consider this should stay as `assertEqual` rather than `assertSequenceEqual` feel free to close this issue! Thanks 🤗
{ "avatar_url": "https://avatars.githubusercontent.com/u/36760800?v=4", "events_url": "https://api.github.com/users/alvarobartt/events{/privacy}", "followers_url": "https://api.github.com/users/alvarobartt/followers", "following_url": "https://api.github.com/users/alvarobartt/following{/other_user}", "gists_url": "https://api.github.com/users/alvarobartt/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/alvarobartt", "id": 36760800, "login": "alvarobartt", "node_id": "MDQ6VXNlcjM2NzYwODAw", "organizations_url": "https://api.github.com/users/alvarobartt/orgs", "received_events_url": "https://api.github.com/users/alvarobartt/received_events", "repos_url": "https://api.github.com/users/alvarobartt/repos", "site_admin": false, "starred_url": "https://api.github.com/users/alvarobartt/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/alvarobartt/subscriptions", "type": "User", "url": "https://api.github.com/users/alvarobartt", "user_view_type": "public" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/4419/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/4419/timeline
null
completed
{ "completed": 0, "percent_completed": 0, "total": 0 }
{ "blocked_by": 0, "blocking": 0, "total_blocked_by": 0, "total_blocking": 0 }
false
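A tiny illustration of the proposal in issue 4419 above, with a made-up tuple standing in for something like a dataset's `.shape`; as noted in the thread, `assertEqual` already dispatches to `assertTupleEqual` for tuples, so the change is about explicitness:

```python
import unittest

class ShapeAssertions(unittest.TestCase):
    def test_shape(self):
        shape = (30, 3)  # placeholder for an ArrowDataset's .shape
        self.assertEqual(shape, (30, 3))       # current style in the test suite
        self.assertTupleEqual(shape, (30, 3))  # proposed, more explicit style

if __name__ == "__main__":
    unittest.main()
```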
https://api.github.com/repos/huggingface/datasets/issues/4417
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4417/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4417/comments
https://api.github.com/repos/huggingface/datasets/issues/4417/events
https://github.com/huggingface/datasets/issues/4417
1,251,933,091
I_kwDODunzps5Knvuj
4,417
how to convert a dict generator into a huggingface dataset.
{ "avatar_url": "https://avatars.githubusercontent.com/u/32235549?v=4", "events_url": "https://api.github.com/users/StephennFernandes/events{/privacy}", "followers_url": "https://api.github.com/users/StephennFernandes/followers", "following_url": "https://api.github.com/users/StephennFernandes/following{/other_user}", "gists_url": "https://api.github.com/users/StephennFernandes/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/StephennFernandes", "id": 32235549, "login": "StephennFernandes", "node_id": "MDQ6VXNlcjMyMjM1NTQ5", "organizations_url": "https://api.github.com/users/StephennFernandes/orgs", "received_events_url": "https://api.github.com/users/StephennFernandes/received_events", "repos_url": "https://api.github.com/users/StephennFernandes/repos", "site_admin": false, "starred_url": "https://api.github.com/users/StephennFernandes/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/StephennFernandes/subscriptions", "type": "User", "url": "https://api.github.com/users/StephennFernandes", "user_view_type": "public" }
[ { "color": "d876e3", "default": true, "description": "Further information is requested", "id": 1935892912, "name": "question", "node_id": "MDU6TGFiZWwxOTM1ODkyOTEy", "url": "https://api.github.com/repos/huggingface/datasets/labels/question" } ]
closed
false
{ "avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4", "events_url": "https://api.github.com/users/mariosasko/events{/privacy}", "followers_url": "https://api.github.com/users/mariosasko/followers", "following_url": "https://api.github.com/users/mariosasko/following{/other_user}", "gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/mariosasko", "id": 47462742, "login": "mariosasko", "node_id": "MDQ6VXNlcjQ3NDYyNzQy", "organizations_url": "https://api.github.com/users/mariosasko/orgs", "received_events_url": "https://api.github.com/users/mariosasko/received_events", "repos_url": "https://api.github.com/users/mariosasko/repos", "site_admin": false, "starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions", "type": "User", "url": "https://api.github.com/users/mariosasko", "user_view_type": "public" }
[ { "avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4", "events_url": "https://api.github.com/users/mariosasko/events{/privacy}", "followers_url": "https://api.github.com/users/mariosasko/followers", "following_url": "https://api.github.com/users/mariosasko/following{/other_user}", ...
null
[ "@albertvillanova @lhoestq , could you please help me on this issue. ", "Hi ! As mentioned on the [forum](https://discuss.huggingface.co/t/how-to-wrap-a-generator-with-hf-dataset/18464), the simplest for now would be to define a [dataset script](https://huggingface.co/docs/datasets/dataset_script) which can conta...
2022-05-29T16:28:27Z
2022-09-16T14:44:19Z
2022-09-16T14:44:19Z
NONE
null
null
null
null
### Link _No response_ ### Description Hey there, I have used seqio to get a well distributed mixture of samples from multiple dataset. However the resultant output from seqio is a python generator dict, which I cannot produce back into huggingface dataset. The generator contains all the samples needed for training the model but I cannot convert it into a huggingface dataset. The code looks like this: ``` for ex in seqio_data: print(ex[“text”]) ``` I need to convert the seqio_data (generator) into huggingface dataset. the complete seqio code goes here: ``` import functools import seqio import tensorflow as tf import t5.data from datasets import load_dataset from t5.data import postprocessors from t5.data import preprocessors from t5.evaluation import metrics from seqio import FunctionDataSource, utils TaskRegistry = seqio.TaskRegistry def gen_dataset(split, shuffle=False, seed=None, column="text", dataset_params=None): dataset = load_dataset(**dataset_params) if shuffle: if seed: dataset = dataset.shuffle(seed=seed) else: dataset = dataset.shuffle() while True: for item in dataset[str(split)]: yield item[column] def dataset_fn(split, shuffle_files, seed=None, dataset_params=None): return tf.data.Dataset.from_generator( functools.partial(gen_dataset, split, shuffle_files, seed, dataset_params=dataset_params), output_signature=tf.TensorSpec(shape=(), dtype=tf.string, name=dataset_name) ) @utils.map_over_dataset def target_to_key(x, key_map, target_key): """Assign the value from the dataset to target_key in key_map""" return {**key_map, target_key: x} dataset_name = 'oscar-corpus/OSCAR-2109' subset= 'mr' dataset_params = {"path": dataset_name, "language":subset, "use_auth_token":True} dataset_shapes = None TaskRegistry.add( "oscar_marathi_corpus", source=seqio.FunctionDataSource( dataset_fn=functools.partial(dataset_fn, dataset_params=dataset_params), splits=("train", "validation"), caching_permitted=False, num_input_examples=dataset_shapes, ), preprocessors=[ functools.partial( target_to_key, key_map={ "targets": None, }, target_key="targets")], output_features={"targets": seqio.Feature(vocabulary=seqio.PassThroughVocabulary, add_eos=False, dtype=tf.string, rank=0)}, metric_fns=[] ) dataset = seqio.get_mixture_or_task("oscar_marathi_corpus").get_dataset( sequence_length=None, split="train", shuffle=True, num_epochs=1, shard_info=seqio.ShardInfo(index=0, num_shards=10), use_cached=False, seed=42 ) for _, ex in zip(range(5), dataset): print(ex['targets'].numpy().decode()) ``` ### Owner _No response_
{ "avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4", "events_url": "https://api.github.com/users/mariosasko/events{/privacy}", "followers_url": "https://api.github.com/users/mariosasko/followers", "following_url": "https://api.github.com/users/mariosasko/following{/other_user}", "gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/mariosasko", "id": 47462742, "login": "mariosasko", "node_id": "MDQ6VXNlcjQ3NDYyNzQy", "organizations_url": "https://api.github.com/users/mariosasko/orgs", "received_events_url": "https://api.github.com/users/mariosasko/received_events", "repos_url": "https://api.github.com/users/mariosasko/repos", "site_admin": false, "starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions", "type": "User", "url": "https://api.github.com/users/mariosasko", "user_view_type": "public" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/4417/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/4417/timeline
null
completed
{ "completed": 0, "percent_completed": 0, "total": 0 }
{ "blocked_by": 0, "blocking": 0, "total_blocked_by": 0, "total_blocking": 0 }
false
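Issue 4417 above predates it, but later `datasets` releases (2.4 and newer) expose `Dataset.from_generator`, which is the direct way to wrap such a generator; a minimal sketch with a toy generator standing in for the seqio one from the report:

```python
from datasets import Dataset

def gen():
    # toy stand-in for the seqio generator in the report, yielding dicts
    for i in range(3):
        yield {"targets": f"example {i}"}

# Requires datasets >= 2.4; with the 2.0.0 used in the report, the suggested
# route was a small dataset loading script instead.
ds = Dataset.from_generator(gen)
print(ds[0])
```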
https://api.github.com/repos/huggingface/datasets/issues/4413
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4413/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4413/comments
https://api.github.com/repos/huggingface/datasets/issues/4413/events
https://github.com/huggingface/datasets/issues/4413
1,250,259,822
I_kwDODunzps5KhXNu
4,413
Dataset Viewer issue for ett
{ "avatar_url": "https://avatars.githubusercontent.com/u/24966039?v=4", "events_url": "https://api.github.com/users/dgcnz/events{/privacy}", "followers_url": "https://api.github.com/users/dgcnz/followers", "following_url": "https://api.github.com/users/dgcnz/following{/other_user}", "gists_url": "https://api.github.com/users/dgcnz/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/dgcnz", "id": 24966039, "login": "dgcnz", "node_id": "MDQ6VXNlcjI0OTY2MDM5", "organizations_url": "https://api.github.com/users/dgcnz/orgs", "received_events_url": "https://api.github.com/users/dgcnz/received_events", "repos_url": "https://api.github.com/users/dgcnz/repos", "site_admin": false, "starred_url": "https://api.github.com/users/dgcnz/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/dgcnz/subscriptions", "type": "User", "url": "https://api.github.com/users/dgcnz", "user_view_type": "public" }
[ { "color": "E5583E", "default": false, "description": "Related to the dataset viewer on huggingface.co", "id": 3470211881, "name": "dataset-viewer", "node_id": "LA_kwDODunzps7O1zsp", "url": "https://api.github.com/repos/huggingface/datasets/labels/dataset-viewer" } ]
closed
false
{ "avatar_url": "https://avatars.githubusercontent.com/u/1676121?v=4", "events_url": "https://api.github.com/users/severo/events{/privacy}", "followers_url": "https://api.github.com/users/severo/followers", "following_url": "https://api.github.com/users/severo/following{/other_user}", "gists_url": "https://api.github.com/users/severo/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/severo", "id": 1676121, "login": "severo", "node_id": "MDQ6VXNlcjE2NzYxMjE=", "organizations_url": "https://api.github.com/users/severo/orgs", "received_events_url": "https://api.github.com/users/severo/received_events", "repos_url": "https://api.github.com/users/severo/repos", "site_admin": false, "starred_url": "https://api.github.com/users/severo/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/severo/subscriptions", "type": "User", "url": "https://api.github.com/users/severo", "user_view_type": "public" }
[ { "avatar_url": "https://avatars.githubusercontent.com/u/1676121?v=4", "events_url": "https://api.github.com/users/severo/events{/privacy}", "followers_url": "https://api.github.com/users/severo/followers", "following_url": "https://api.github.com/users/severo/following{/other_user}", "gists_url...
null
[ "Thanks for reporting @dgcnz.\r\n\r\nI have checked that the dataset works fine in streaming mode.\r\n\r\nAdditionally, other datasets containing timestamps are properly rendered by the viewer: https://huggingface.co/datasets/blbooks\r\n\r\nI have tried to force the refresh of the preview, but the endpoint is not r...
2022-05-27T02:12:35Z
2022-06-15T07:30:46Z
2022-06-15T07:30:46Z
NONE
null
null
null
null
### Link https://huggingface.co/datasets/ett ### Description Timestamp is not JSON serializable. ``` Status code: 500 Exception: Status500Error Message: Type is not JSON serializable: Timestamp ``` ### Owner No
{ "avatar_url": "https://avatars.githubusercontent.com/u/1676121?v=4", "events_url": "https://api.github.com/users/severo/events{/privacy}", "followers_url": "https://api.github.com/users/severo/followers", "following_url": "https://api.github.com/users/severo/following{/other_user}", "gists_url": "https://api.github.com/users/severo/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/severo", "id": 1676121, "login": "severo", "node_id": "MDQ6VXNlcjE2NzYxMjE=", "organizations_url": "https://api.github.com/users/severo/orgs", "received_events_url": "https://api.github.com/users/severo/received_events", "repos_url": "https://api.github.com/users/severo/repos", "site_admin": false, "starred_url": "https://api.github.com/users/severo/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/severo/subscriptions", "type": "User", "url": "https://api.github.com/users/severo", "user_view_type": "public" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/4413/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/4413/timeline
null
completed
{ "completed": 0, "percent_completed": 0, "total": 0 }
{ "blocked_by": 0, "blocking": 0, "total_blocked_by": 0, "total_blocking": 0 }
false
https://api.github.com/repos/huggingface/datasets/issues/4407
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4407/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4407/comments
https://api.github.com/repos/huggingface/datasets/issues/4407/events
https://github.com/huggingface/datasets/issues/4407
1,248,671,778
I_kwDODunzps5KbTgi
4,407
Dataset Viewer issue for conll2012_ontonotesv5
{ "avatar_url": "https://avatars.githubusercontent.com/u/39762734?v=4", "events_url": "https://api.github.com/users/jiangwangyi/events{/privacy}", "followers_url": "https://api.github.com/users/jiangwangyi/followers", "following_url": "https://api.github.com/users/jiangwangyi/following{/other_user}", "gists_url": "https://api.github.com/users/jiangwangyi/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/jiangwangyi", "id": 39762734, "login": "jiangwangyi", "node_id": "MDQ6VXNlcjM5NzYyNzM0", "organizations_url": "https://api.github.com/users/jiangwangyi/orgs", "received_events_url": "https://api.github.com/users/jiangwangyi/received_events", "repos_url": "https://api.github.com/users/jiangwangyi/repos", "site_admin": false, "starred_url": "https://api.github.com/users/jiangwangyi/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/jiangwangyi/subscriptions", "type": "User", "url": "https://api.github.com/users/jiangwangyi", "user_view_type": "public" }
[ { "color": "E5583E", "default": false, "description": "Related to the dataset viewer on huggingface.co", "id": 3470211881, "name": "dataset-viewer", "node_id": "LA_kwDODunzps7O1zsp", "url": "https://api.github.com/repos/huggingface/datasets/labels/dataset-viewer" } ]
closed
false
{ "avatar_url": "https://avatars.githubusercontent.com/u/1676121?v=4", "events_url": "https://api.github.com/users/severo/events{/privacy}", "followers_url": "https://api.github.com/users/severo/followers", "following_url": "https://api.github.com/users/severo/following{/other_user}", "gists_url": "https://api.github.com/users/severo/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/severo", "id": 1676121, "login": "severo", "node_id": "MDQ6VXNlcjE2NzYxMjE=", "organizations_url": "https://api.github.com/users/severo/orgs", "received_events_url": "https://api.github.com/users/severo/received_events", "repos_url": "https://api.github.com/users/severo/repos", "site_admin": false, "starred_url": "https://api.github.com/users/severo/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/severo/subscriptions", "type": "User", "url": "https://api.github.com/users/severo", "user_view_type": "public" }
[ { "avatar_url": "https://avatars.githubusercontent.com/u/1676121?v=4", "events_url": "https://api.github.com/users/severo/events{/privacy}", "followers_url": "https://api.github.com/users/severo/followers", "following_url": "https://api.github.com/users/severo/following{/other_user}", "gists_url...
null
[ "Thanks for reporting, @jiangwy99.\r\n\r\nI guess this could be addressed only once we fix our issue with irresponsive backend endpoint.\r\n\r\nCC: @severo ", "I've just sent the forcing of the refresh of the preview to the new endpoint.", "Fixed, thanks for the patience. The issue was the amount of RAM allowed...
2022-05-25T20:18:33Z
2022-06-07T18:39:16Z
2022-06-07T18:39:16Z
NONE
null
null
null
null
### Link https://huggingface.co/datasets/conll2012_ontonotesv5 ### Description Dataset viewer outage. ### Owner No
{ "avatar_url": "https://avatars.githubusercontent.com/u/1676121?v=4", "events_url": "https://api.github.com/users/severo/events{/privacy}", "followers_url": "https://api.github.com/users/severo/followers", "following_url": "https://api.github.com/users/severo/following{/other_user}", "gists_url": "https://api.github.com/users/severo/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/severo", "id": 1676121, "login": "severo", "node_id": "MDQ6VXNlcjE2NzYxMjE=", "organizations_url": "https://api.github.com/users/severo/orgs", "received_events_url": "https://api.github.com/users/severo/received_events", "repos_url": "https://api.github.com/users/severo/repos", "site_admin": false, "starred_url": "https://api.github.com/users/severo/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/severo/subscriptions", "type": "User", "url": "https://api.github.com/users/severo", "user_view_type": "public" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/4407/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/4407/timeline
null
completed
{ "completed": 0, "percent_completed": 0, "total": 0 }
{ "blocked_by": 0, "blocking": 0, "total_blocked_by": 0, "total_blocking": 0 }
false
https://api.github.com/repos/huggingface/datasets/issues/4405
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4405/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4405/comments
https://api.github.com/repos/huggingface/datasets/issues/4405/events
https://github.com/huggingface/datasets/issues/4405
1,248,574,087
I_kwDODunzps5Ka7qH
4,405
[TypeError: Couldn't cast array of type] Cannot process dataset in v2.2.2
{ "avatar_url": "https://avatars.githubusercontent.com/u/39762734?v=4", "events_url": "https://api.github.com/users/jiangwangyi/events{/privacy}", "followers_url": "https://api.github.com/users/jiangwangyi/followers", "following_url": "https://api.github.com/users/jiangwangyi/following{/other_user}", "gists_url": "https://api.github.com/users/jiangwangyi/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/jiangwangyi", "id": 39762734, "login": "jiangwangyi", "node_id": "MDQ6VXNlcjM5NzYyNzM0", "organizations_url": "https://api.github.com/users/jiangwangyi/orgs", "received_events_url": "https://api.github.com/users/jiangwangyi/received_events", "repos_url": "https://api.github.com/users/jiangwangyi/repos", "site_admin": false, "starred_url": "https://api.github.com/users/jiangwangyi/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/jiangwangyi/subscriptions", "type": "User", "url": "https://api.github.com/users/jiangwangyi", "user_view_type": "public" }
[ { "color": "d73a4a", "default": true, "description": "Something isn't working", "id": 1935892857, "name": "bug", "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug" } ]
closed
false
null
[]
null
[ "And if the problem is that the way I am to construct the {Entity Type: list of spans} makes entity types without any spans hard to handle, is there a better way to meet the demand? Although I have verified that to make entity types without any spans to behave like `entity_chunk[label] = [[\"\"]]` can perform norma...
2022-05-25T18:56:43Z
2022-06-07T14:27:20Z
2022-06-07T14:27:20Z
NONE
null
null
null
null
## Describe the bug I am trying to process the [conll2012_ontonotesv5](https://huggingface.co/datasets/conll2012_ontonotesv5) dataset in `datasets` v2.2.2 and am running into a type error when casting the features. ## Steps to reproduce the bug ```python import os from typing import ( List, Dict, ) from collections import ( defaultdict, ) from dataclasses import ( dataclass, ) from datasets import ( load_dataset, ) @dataclass class ConllConverter: path: str name: str cache_dir: str def __post_init__( self, ): self.dataset = load_dataset( path=self.path, name=self.name, cache_dir=self.cache_dir, ) def convert( self, ): class_label = self.dataset["train"].features["sentences"][0]["named_entities"].feature # label_set = list(set([ # label.split("-")[1] if label != "O" else label for label in class_label.names # ])) def prepare_chunk(token, entity): assert len(token) == len(entity) # Sequence length length = len(token) # Variable used entity_chunk = defaultdict(list) idx = flag = 0 # While loop while idx < length: if entity[idx] == "O": flag += 1 idx += 1 else: iob_tp, lab_tp = entity[idx].split("-") assert iob_tp == "B" idx += 1 while idx < length and entity[idx].startswith("I-"): idx += 1 entity_chunk[lab_tp].append(token[flag: idx]) flag = idx entity_chunk = dict(entity_chunk) # for label in label_set: # if label != "O" and label not in entity_chunk.keys(): # entity_chunk[label] = None return entity_chunk def prepare_features( batch: Dict[str, List], ) -> Dict[str, List]: sentence = [ sent for doc_sent in batch["sentences"] for sent in doc_sent ] feature = { "sentence": list(), } for sent in sentence: token = sent["words"] entity = class_label.int2str(sent["named_entities"]) entity_chunk = prepare_chunk(token, entity) sent_feat = { "token": token, "entity": entity, "entity_chunk": entity_chunk, } feature["sentence"].append(sent_feat) return feature column_names = self.dataset.column_names["train"] dataset = self.dataset.map( function=prepare_features, with_indices=False, batched=True, batch_size=3, remove_columns=column_names, num_proc=1, ) dataset.save_to_disk( dataset_dict_path=os.path.join("data", self.path, self.name) ) if __name__ == "__main__": converter = ConllConverter( path="conll2012_ontonotesv5", name="english_v4", cache_dir="cache", ) converter.convert() ``` ## Expected results I want to use the dataset to perform NER task and to change the label list into a {Entity Type: list of spans} format. 
## Actual results <details> <summary>Traceback</summary> ```python Traceback (most recent call last): | 0/81 [00:00<?, ?ba/s] File "/home2/jiangwangyi/miniconda3/lib/python3.9/site-packages/multiprocess/pool.py", line 125, in worker result = (True, func(*args, **kwds)) File "/home2/jiangwangyi/miniconda3/lib/python3.9/site-packages/datasets/arrow_dataset.py", line 532, in wrapper out: Union["Dataset", "DatasetDict"] = func(self, *args, **kwargs) File "/home2/jiangwangyi/miniconda3/lib/python3.9/site-packages/datasets/arrow_dataset.py", line 499, in wrapper out: Union["Dataset", "DatasetDict"] = func(self, *args, **kwargs) File "/home2/jiangwangyi/miniconda3/lib/python3.9/site-packages/datasets/fingerprint.py", line 458, in wrapper out = func(self, *args, **kwargs) File "/home2/jiangwangyi/miniconda3/lib/python3.9/site-packages/datasets/arrow_dataset.py", line 2751, in _map_single writer.write_batch(batch) File "/home2/jiangwangyi/miniconda3/lib/python3.9/site-packages/datasets/arrow_writer.py", line 503, in write_batch arrays.append(pa.array(typed_sequence)) File "pyarrow/array.pxi", line 230, in pyarrow.lib.array File "pyarrow/array.pxi", line 110, in pyarrow.lib._handle_arrow_array_protocol File "/home2/jiangwangyi/miniconda3/lib/python3.9/site-packages/datasets/arrow_writer.py", line 198, in __arrow_array__ out = cast_array_to_feature(out, type, allow_number_to_str=not self.trying_type) File "/home2/jiangwangyi/miniconda3/lib/python3.9/site-packages/datasets/table.py", line 1675, in wrapper return func(array, *args, **kwargs) File "/home2/jiangwangyi/miniconda3/lib/python3.9/site-packages/datasets/table.py", line 1793, in cast_array_to_feature arrays = [_c(array.field(name), subfeature) for name, subfeature in feature.items()] File "/home2/jiangwangyi/miniconda3/lib/python3.9/site-packages/datasets/table.py", line 1793, in <listcomp> arrays = [_c(array.field(name), subfeature) for name, subfeature in feature.items()] File "/home2/jiangwangyi/miniconda3/lib/python3.9/site-packages/datasets/table.py", line 1675, in wrapper return func(array, *args, **kwargs) File "/home2/jiangwangyi/miniconda3/lib/python3.9/site-packages/datasets/table.py", line 1844, in cast_array_to_feature raise TypeError(f"Couldn't cast array of type\n{array.type}\nto\n{feature}") TypeError: Couldn't cast array of type struct<CARDINAL: list<item: list<item: string>>, DATE: list<item: list<item: string>>, EVENT: list<item: list<item: string>>, FAC: list<item: list<item: string>>, GPE: list<item: list<item: string>>, LANGUAGE: list<item: list<item: string>>, LAW: list<item: list<item: string>>, LOC: list<item: list<item: string>>, MONEY: list<item: list<item: string>>, NORP: list<item: list<item: string>>, ORDINAL: list<item: list<item: string>>, ORG: list<item: list<item: string>>, PERCENT: list<item: list<item: string>>, PERSON: list<item: list<item: string>>, QUANTITY: list<item: list<item: string>>, TIME: list<item: list<item: string>>, WORK_OF_ART: list<item: list<item: string>>> to {'CARDINAL': Sequence(feature=Sequence(feature=Value(dtype='string', id=None), length=-1, id=None), length=-1, id=None), 'DATE': Sequence(feature=Sequence(feature=Value(dtype='string', id=None), length=-1, id=None), length=-1, id=None), 'EVENT': Sequence(feature=Sequence(feature=Value(dtype='string', id=None), length=-1, id=None), length=-1, id=None), 'FAC': Sequence(feature=Sequence(feature=Value(dtype='string', id=None), length=-1, id=None), length=-1, id=None), 'GPE': Sequence(feature=Sequence(feature=Value(dtype='string', id=None), 
length=-1, id=None), length=-1, id=None), 'LAW': Sequence(feature=Sequence(feature=Value(dtype='string', id=None), length=-1, id=None), length=-1, id=None), 'LOC': Sequence(feature=Sequence(feature=Value(dtype='string', id=None), length=-1, id=None), length=-1, id=None), 'MONEY': Sequence(feature=Sequence(feature=Value(dtype='string', id=None), length=-1, id=None), length=-1, id=None), 'NORP': Sequence(feature=Sequence(feature=Value(dtype='string', id=None), length=-1, id=None), length=-1, id=None), 'ORDINAL': Sequence(feature=Sequence(feature=Value(dtype='string', id=None), length=-1, id=None), length=-1, id=None), 'ORG': Sequence(feature=Sequence(feature=Value(dtype='string', id=None), length=-1, id=None), length=-1, id=None), 'PERCENT': Sequence(feature=Sequence(feature=Value(dtype='string', id=None), length=-1, id=None), length=-1, id=None), 'PERSON': Sequence(feature=Sequence(feature=Value(dtype='string', id=None), length=-1, id=None), length=-1, id=None), 'PRODUCT': Sequence(feature=Sequence(feature=Value(dtype='string', id=None), length=-1, id=None), length=-1, id=None), 'QUANTITY': Sequence(feature=Sequence(feature=Value(dtype='string', id=None), length=-1, id=None), length=-1, id=None), 'TIME': Sequence(feature=Sequence(feature=Value(dtype='string', id=None), length=-1, id=None), length=-1, id=None), 'WORK_OF_ART': Sequence(feature=Sequence(feature=Value(dtype='string', id=None), length=-1, id=None), length=-1, id=None)} """ The above exception was the direct cause of the following exception: Traceback (most recent call last): File "/home2/jiangwangyi/workspace/work/Entity/dataconverter.py", line 110, in <module> converter.convert() File "/home2/jiangwangyi/workspace/work/Entity/dataconverter.py", line 91, in convert dataset = self.dataset.map( File "/home2/jiangwangyi/miniconda3/lib/python3.9/site-packages/datasets/dataset_dict.py", line 770, in map { File "/home2/jiangwangyi/miniconda3/lib/python3.9/site-packages/datasets/dataset_dict.py", line 771, in <dictcomp> k: dataset.map( File "/home2/jiangwangyi/miniconda3/lib/python3.9/site-packages/datasets/arrow_dataset.py", line 2459, in map transformed_shards[index] = async_result.get() File "/home2/jiangwangyi/miniconda3/lib/python3.9/site-packages/multiprocess/pool.py", line 771, in get raise self._value TypeError: Couldn't cast array of type struct<CARDINAL: list<item: list<item: string>>, DATE: list<item: list<item: string>>, EVENT: list<item: list<item: string>>, FAC: list<item: list<item: string>>, GPE: list<item: list<item: string>>, LANGUAGE: list<item: list<item: string>>, LAW: list<item: list<item: string>>, LOC: list<item: list<item: string>>, MONEY: list<item: list<item: string>>, NORP: list<item: list<item: string>>, ORDINAL: list<item: list<item: string>>, ORG: list<item: list<item: string>>, PERCENT: list<item: list<item: string>>, PERSON: list<item: list<item: string>>, QUANTITY: list<item: list<item: string>>, TIME: list<item: list<item: string>>, WORK_OF_ART: list<item: list<item: string>>> to {'CARDINAL': Sequence(feature=Sequence(feature=Value(dtype='string', id=None), length=-1, id=None), length=-1, id=None), 'DATE': Sequence(feature=Sequence(feature=Value(dtype='string', id=None), length=-1, id=None), length=-1, id=None), 'EVENT': Sequence(feature=Sequence(feature=Value(dtype='string', id=None), length=-1, id=None), length=-1, id=None), 'FAC': Sequence(feature=Sequence(feature=Value(dtype='string', id=None), length=-1, id=None), length=-1, id=None), 'GPE': Sequence(feature=Sequence(feature=Value(dtype='string', 
id=None), length=-1, id=None), length=-1, id=None), 'LAW': Sequence(feature=Sequence(feature=Value(dtype='string', id=None), length=-1, id=None), length=-1, id=None), 'LOC': Sequence(feature=Sequence(feature=Value(dtype='string', id=None), length=-1, id=None), length=-1, id=None), 'MONEY': Sequence(feature=Sequence(feature=Value(dtype='string', id=None), length=-1, id=None), length=-1, id=None), 'NORP': Sequence(feature=Sequence(feature=Value(dtype='string', id=None), length=-1, id=None), length=-1, id=None), 'ORDINAL': Sequence(feature=Sequence(feature=Value(dtype='string', id=None), length=-1, id=None), length=-1, id=None), 'ORG': Sequence(feature=Sequence(feature=Value(dtype='string', id=None), length=-1, id=None), length=-1, id=None), 'PERCENT': Sequence(feature=Sequence(feature=Value(dtype='string', id=None), length=-1, id=None), length=-1, id=None), 'PERSON': Sequence(feature=Sequence(feature=Value(dtype='string', id=None), length=-1, id=None), length=-1, id=None), 'PRODUCT': Sequence(feature=Sequence(feature=Value(dtype='string', id=None), length=-1, id=None), length=-1, id=None), 'QUANTITY': Sequence(feature=Sequence(feature=Value(dtype='string', id=None), length=-1, id=None), length=-1, id=None), 'TIME': Sequence(feature=Sequence(feature=Value(dtype='string', id=None), length=-1, id=None), length=-1, id=None), 'WORK_OF_ART': Sequence(feature=Sequence(feature=Value(dtype='string', id=None), length=-1, id=None), length=-1, id=None)} ``` </details> ## Environment info <!-- You can run the command `datasets-cli env` and copy-and-paste its output below. --> - `datasets` version: 2.2.2 - Platform: Ubuntu 18.04 - Python version: 3.9.7 - PyArrow version: 7.0.0
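A minimal sketch of the direction discussed in the comments (it assumes the `class_label` variable from the reproduction script above and is not the exact fix): pre-populate every entity type so each example produces the same struct keys, letting Arrow infer one consistent feature type for `entity_chunk`.

```python
# Sketch, not the exact fix: give every entity type a (possibly empty) entry
# so the struct keys are identical across all examples.
label_set = sorted({name.split("-")[1] for name in class_label.names if name != "O"})

def prepare_chunk(token, entity):
    entity_chunk = {label: [] for label in label_set}
    # ... fill entity_chunk exactly as in the original prepare_chunk ...
    return entity_chunk
```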
{ "avatar_url": "https://avatars.githubusercontent.com/u/39762734?v=4", "events_url": "https://api.github.com/users/jiangwangyi/events{/privacy}", "followers_url": "https://api.github.com/users/jiangwangyi/followers", "following_url": "https://api.github.com/users/jiangwangyi/following{/other_user}", "gists_url": "https://api.github.com/users/jiangwangyi/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/jiangwangyi", "id": 39762734, "login": "jiangwangyi", "node_id": "MDQ6VXNlcjM5NzYyNzM0", "organizations_url": "https://api.github.com/users/jiangwangyi/orgs", "received_events_url": "https://api.github.com/users/jiangwangyi/received_events", "repos_url": "https://api.github.com/users/jiangwangyi/repos", "site_admin": false, "starred_url": "https://api.github.com/users/jiangwangyi/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/jiangwangyi/subscriptions", "type": "User", "url": "https://api.github.com/users/jiangwangyi", "user_view_type": "public" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/4405/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/4405/timeline
null
completed
{ "completed": 0, "percent_completed": 0, "total": 0 }
{ "blocked_by": 0, "blocking": 0, "total_blocked_by": 0, "total_blocking": 0 }
false
https://api.github.com/repos/huggingface/datasets/issues/4404
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4404/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4404/comments
https://api.github.com/repos/huggingface/datasets/issues/4404/events
https://github.com/huggingface/datasets/issues/4404
1,248,572,899
I_kwDODunzps5Ka7Xj
4,404
Dataset should have a `.name` field
{ "avatar_url": "https://avatars.githubusercontent.com/u/36440?v=4", "events_url": "https://api.github.com/users/f4hy/events{/privacy}", "followers_url": "https://api.github.com/users/f4hy/followers", "following_url": "https://api.github.com/users/f4hy/following{/other_user}", "gists_url": "https://api.github.com/users/f4hy/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/f4hy", "id": 36440, "login": "f4hy", "node_id": "MDQ6VXNlcjM2NDQw", "organizations_url": "https://api.github.com/users/f4hy/orgs", "received_events_url": "https://api.github.com/users/f4hy/received_events", "repos_url": "https://api.github.com/users/f4hy/repos", "site_admin": false, "starred_url": "https://api.github.com/users/f4hy/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/f4hy/subscriptions", "type": "User", "url": "https://api.github.com/users/f4hy", "user_view_type": "public" }
[ { "color": "a2eeef", "default": true, "description": "New feature or request", "id": 1935892871, "name": "enhancement", "node_id": "MDU6TGFiZWwxOTM1ODkyODcx", "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement" } ]
closed
false
null
[]
null
[ "Hi! You can already use `dset.builder_name` and `dset.config_name` for that purpose. And when it comes to versioning, it's better to use `dset._fingerprint` than the `version` attribute as the former represents a deterministic hash that encodes all the mutable ops executed on a dataset, and the latter stays the sa...
2022-05-25T18:56:08Z
2022-09-13T15:09:30Z
2022-06-16T10:47:53Z
NONE
null
null
null
null
**Is your feature request related to a problem? Please describe.** When building pipelines that can evaluate on more than one dataset, it would be nice to be able to log results such as `Evaluating on {dataset.name}` or `results for {dataset.name} are: {results}`. Without some way of concisely identifying a dataset from the dataset object, tools that might run on more than one dataset must be passed the dataset object _and_ the name/id of the dataset being used. **Describe the solution you'd like** The DatasetInfo class should have a `name` field holding the name of the dataset. Then, if a given dataset evolves over time, the `version` can be updated while the different versions of the same dataset keep a unique `name`. The name could then be accessed via `dataset.name`. **Describe alternatives you've considered** For my own purposes I am considering making `NamedDataset[Dataset]`, where the subclass just has a `.name` field. **Additional context** My guess is that most use cases do not work with more than one dataset in a given pipeline, so a name is not really needed. This surprises me, though, as one of the advantages of a standard dataset interface is being able to build pipelines that can be passed a dataset and separate the responsibilities of dataset loading from the train or eval pipeline.
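A minimal sketch of the existing attributes pointed out in the comments above (`builder_name`, `config_name`, and the internal `_fingerprint`); the dataset used here ("imdb") is only an illustrative example.

```python
from datasets import load_dataset

dset = load_dataset("imdb", split="train")
print(dset.builder_name)   # e.g. "imdb"
print(dset.config_name)    # e.g. "plain_text"
print(dset._fingerprint)   # deterministic hash of the ops applied to this dataset object
```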
{ "avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4", "events_url": "https://api.github.com/users/mariosasko/events{/privacy}", "followers_url": "https://api.github.com/users/mariosasko/followers", "following_url": "https://api.github.com/users/mariosasko/following{/other_user}", "gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/mariosasko", "id": 47462742, "login": "mariosasko", "node_id": "MDQ6VXNlcjQ3NDYyNzQy", "organizations_url": "https://api.github.com/users/mariosasko/orgs", "received_events_url": "https://api.github.com/users/mariosasko/received_events", "repos_url": "https://api.github.com/users/mariosasko/repos", "site_admin": false, "starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions", "type": "User", "url": "https://api.github.com/users/mariosasko", "user_view_type": "public" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/4404/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/4404/timeline
null
completed
{ "completed": 0, "percent_completed": 0, "total": 0 }
{ "blocked_by": 0, "blocking": 0, "total_blocked_by": 0, "total_blocking": 0 }
false
https://api.github.com/repos/huggingface/datasets/issues/4401
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4401/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4401/comments
https://api.github.com/repos/huggingface/datasets/issues/4401/events
https://github.com/huggingface/datasets/issues/4401
1,247,695,921
I_kwDODunzps5KXlQx
4,401
"NonMatchingChecksumError" when importing 'spider' dataset
{ "avatar_url": "https://avatars.githubusercontent.com/u/81417777?v=4", "events_url": "https://api.github.com/users/OmarAlaaeldein/events{/privacy}", "followers_url": "https://api.github.com/users/OmarAlaaeldein/followers", "following_url": "https://api.github.com/users/OmarAlaaeldein/following{/other_user}", "gists_url": "https://api.github.com/users/OmarAlaaeldein/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/OmarAlaaeldein", "id": 81417777, "login": "OmarAlaaeldein", "node_id": "MDQ6VXNlcjgxNDE3Nzc3", "organizations_url": "https://api.github.com/users/OmarAlaaeldein/orgs", "received_events_url": "https://api.github.com/users/OmarAlaaeldein/received_events", "repos_url": "https://api.github.com/users/OmarAlaaeldein/repos", "site_admin": false, "starred_url": "https://api.github.com/users/OmarAlaaeldein/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/OmarAlaaeldein/subscriptions", "type": "User", "url": "https://api.github.com/users/OmarAlaaeldein", "user_view_type": "public" }
[ { "color": "8B51EF", "default": false, "description": "", "id": 4069435429, "name": "hosted-on-google-drive", "node_id": "LA_kwDODunzps7yjqgl", "url": "https://api.github.com/repos/huggingface/datasets/labels/hosted-on-google-drive" } ]
closed
false
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova", "user_view_type": "public" }
[ { "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/o...
null
[ "Thanks for reporting, @OmarAlaaeldein.\r\n\r\nDatasets hosted at Google Drive give problems quite often due to a change in their service:\r\n- #3786 \r\n\r\nRelated to:\r\n- #3906\r\n\r\nI'm having a look.", "We have made a Pull Request to replace the Google Drive URL. This fix will be accessible in our next `da...
2022-05-25T07:45:07Z
2022-05-26T06:40:12Z
2022-05-26T06:40:12Z
NONE
null
null
null
null
## Describe the bug When importing 'spider' dataset [https://huggingface.co/datasets/spider] an error occurs ## Steps to reproduce the bug ```python from datasets import load_dataset dataset = load_dataset('spider') ``` ## Expected results Dataset object ## Actual results NonMatchingChecksumError: Checksums didn't match for dataset source files: ['https://drive.google.com/uc?export=download&id=1_AckYkinAnhqmRQtGsQgUKAnTHxxX5J0'] ## Environment info - `datasets` version: 2.2.2 - Platform: Windows-10-10.0.19041-SP0 - Python version: 3.7.11 - PyArrow version: 6.0.1 - Pandas version: 1.3.5
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova", "user_view_type": "public" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/4401/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/4401/timeline
null
completed
{ "completed": 0, "percent_completed": 0, "total": 0 }
{ "blocked_by": 0, "blocking": 0, "total_blocked_by": 0, "total_blocking": 0 }
false
https://api.github.com/repos/huggingface/datasets/issues/4400
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4400/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4400/comments
https://api.github.com/repos/huggingface/datasets/issues/4400/events
https://github.com/huggingface/datasets/issues/4400
1,247,404,237
I_kwDODunzps5KWeDN
4,400
load dataset wikitext-2-raw-v1 failed. Could not reach wikitext-2-raw-v1.py.
{ "avatar_url": "https://avatars.githubusercontent.com/u/20658907?v=4", "events_url": "https://api.github.com/users/cailun01/events{/privacy}", "followers_url": "https://api.github.com/users/cailun01/followers", "following_url": "https://api.github.com/users/cailun01/following{/other_user}", "gists_url": "https://api.github.com/users/cailun01/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/cailun01", "id": 20658907, "login": "cailun01", "node_id": "MDQ6VXNlcjIwNjU4OTA3", "organizations_url": "https://api.github.com/users/cailun01/orgs", "received_events_url": "https://api.github.com/users/cailun01/received_events", "repos_url": "https://api.github.com/users/cailun01/repos", "site_admin": false, "starred_url": "https://api.github.com/users/cailun01/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/cailun01/subscriptions", "type": "User", "url": "https://api.github.com/users/cailun01", "user_view_type": "public" }
[ { "color": "d73a4a", "default": true, "description": "Something isn't working", "id": 1935892857, "name": "bug", "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug" } ]
closed
false
null
[]
null
[ "I tried in this way.\r\n\r\n```python\r\nfrom datasets import load_dataset\r\ndataset = load_dataset(path=\"wikitext\", name=\"wikitext-103-v1\", split=\"train\")\r\n```" ]
2022-05-25T03:10:44Z
2022-10-24T06:10:27Z
2022-05-25T03:26:36Z
NONE
null
null
null
null
## Describe the bug Could not reach wikitext-2-raw-v1.py ## Steps to reproduce the bug ```python from datasets import load_dataset load_dataset("wikitext-2-raw-v1") ``` ## Expected results Download `wikitext-2-raw-v1` dataset successfully. ## Actual results ``` File "load_datasets.py", line 13, in <module> load_dataset("wikitext-2-raw-v1") File "/root/miniconda3/lib/python3.6/site-packages/datasets/load.py", line 1715, in load_dataset **config_kwargs, File "/root/miniconda3/lib/python3.6/site-packages/datasets/load.py", line 1536, in load_dataset_builder data_files=data_files, File "/root/miniconda3/lib/python3.6/site-packages/datasets/load.py", line 1282, in dataset_module_factory raise e1 from None File "/root/miniconda3/lib/python3.6/site-packages/datasets/load.py", line 1224, in dataset_module_factory dynamic_modules_path=dynamic_modules_path, File "/root/miniconda3/lib/python3.6/site-packages/datasets/load.py", line 559, in get_module local_path = self.download_loading_script(revision) File "/root/miniconda3/lib/python3.6/site-packages/datasets/load.py", line 539, in download_loading_script return cached_path(file_path, download_config=download_config) File "/root/miniconda3/lib/python3.6/site-packages/datasets/utils/file_utils.py", line 246, in cached_path download_desc=download_config.download_desc, File "/root/miniconda3/lib/python3.6/site-packages/datasets/utils/file_utils.py", line 582, in get_from_cache raise ConnectionError(f"Couldn't reach {url} ({repr(head_error)})") ConnectionError: Couldn't reach https://raw.githubusercontent.com/huggingface/datasets/2.2.2/datasets/wikitext-2-raw-v1/wikitext-2-raw-v1.py (ReadTimeout(ReadTimeoutError("HTTPSConnectionPool(host='raw.githubusercontent.com', port=443): Read timed out. (read timeout=100)",),)) ``` I tried to download wikitext-2-raw-v1.py by chrome and got: ![image](https://user-images.githubusercontent.com/20658907/170171595-0ca9f1da-c05a-4b57-861e-9530bfa3bdb9.png) ## Environment info <!-- You can run the command `datasets-cli env` and copy-and-paste its output below. --> - `datasets` version: 2.2.2 - Platform: CentOS 7 - Python version: 3.6 - PyArrow version: 3.0.0
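A minimal sketch of the working call shown in the comment above: `wikitext-2-raw-v1` is a configuration of the `wikitext` dataset rather than a dataset path of its own, so it is passed as the config name.

```python
from datasets import load_dataset

# Pass the dataset path ("wikitext") and the config ("wikitext-2-raw-v1") separately.
dataset = load_dataset("wikitext", "wikitext-2-raw-v1", split="train")
```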
{ "avatar_url": "https://avatars.githubusercontent.com/u/20658907?v=4", "events_url": "https://api.github.com/users/cailun01/events{/privacy}", "followers_url": "https://api.github.com/users/cailun01/followers", "following_url": "https://api.github.com/users/cailun01/following{/other_user}", "gists_url": "https://api.github.com/users/cailun01/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/cailun01", "id": 20658907, "login": "cailun01", "node_id": "MDQ6VXNlcjIwNjU4OTA3", "organizations_url": "https://api.github.com/users/cailun01/orgs", "received_events_url": "https://api.github.com/users/cailun01/received_events", "repos_url": "https://api.github.com/users/cailun01/repos", "site_admin": false, "starred_url": "https://api.github.com/users/cailun01/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/cailun01/subscriptions", "type": "User", "url": "https://api.github.com/users/cailun01", "user_view_type": "public" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/4400/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/4400/timeline
null
completed
{ "completed": 0, "percent_completed": 0, "total": 0 }
{ "blocked_by": 0, "blocking": 0, "total_blocked_by": 0, "total_blocking": 0 }
false
https://api.github.com/repos/huggingface/datasets/issues/4399
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4399/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4399/comments
https://api.github.com/repos/huggingface/datasets/issues/4399/events
https://github.com/huggingface/datasets/issues/4399
1,246,948,299
I_kwDODunzps5KUuvL
4,399
LocalDatasetModuleFactoryWithoutScript extracts invalid builder name
{ "avatar_url": "https://avatars.githubusercontent.com/u/40543?v=4", "events_url": "https://api.github.com/users/apohllo/events{/privacy}", "followers_url": "https://api.github.com/users/apohllo/followers", "following_url": "https://api.github.com/users/apohllo/following{/other_user}", "gists_url": "https://api.github.com/users/apohllo/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/apohllo", "id": 40543, "login": "apohllo", "node_id": "MDQ6VXNlcjQwNTQz", "organizations_url": "https://api.github.com/users/apohllo/orgs", "received_events_url": "https://api.github.com/users/apohllo/received_events", "repos_url": "https://api.github.com/users/apohllo/repos", "site_admin": false, "starred_url": "https://api.github.com/users/apohllo/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/apohllo/subscriptions", "type": "User", "url": "https://api.github.com/users/apohllo", "user_view_type": "public" }
[ { "color": "d73a4a", "default": true, "description": "Something isn't working", "id": 1935892857, "name": "bug", "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug" }, { "color": "7057ff", "default": true, "descript...
closed
false
null
[]
null
[ "Ok, so\r\n```\r\nos.path.basename(\"/home/user/\")\r\n```\r\ngives `''` while \r\n```\r\nos.path.basename(\"/home/user\")\r\n```\r\ngives `user`. \r\nThe code should check if the last char is a slash.\r\n", "The fix is:\r\n```\r\n\"name\": os.path.basename(self.path[:-1] if self.path[-1] == \"/\" else self.path)...
2022-05-24T18:03:01Z
2022-09-12T15:30:43Z
2022-09-12T15:30:43Z
CONTRIBUTOR
null
null
null
null
## Describe the bug Trying to load a local dataset raises an error indicating that the config builder has to have a name. No error should be reported, since the call is completely valid. ## Steps to reproduce the bug ```python load_dataset("./data/some-dataset/", name="some-name") ``` ## Expected results The dataset should be loaded. ## Actual results ``` Traceback (most recent call last): File "train_lquad.py", line 19, in <module> load(tokenize_target_function, tokenize_target_function, {}, tokenizer) File "train_lquad.py", line 14, in load dataset = load_dataset("./data/lquad/", name="lquad") File "/net/pr2/scratch/people/plgapohl/python-3.8.6/lib/python3.8/site-packages/datasets/load.py", line 1708, in load_dataset builder_instance = load_dataset_builder( File "/net/pr2/scratch/people/plgapohl/python-3.8.6/lib/python3.8/site-packages/datasets/load.py", line 1560, in load_dataset_builder builder_instance: DatasetBuilder = builder_cls( File "/net/pr2/scratch/people/plgapohl/python-3.8.6/lib/python3.8/site-packages/datasets/builder.py", line 269, in __init__ self.config, self.config_id = self._create_builder_config( File "/net/pr2/scratch/people/plgapohl/python-3.8.6/lib/python3.8/site-packages/datasets/builder.py", line 403, in _create_builder_config raise ValueError(f"BuilderConfig must have a name, got {builder_config.name}") ValueError: BuilderConfig must have a name, got ``` ## Environment info - `datasets` version: 2.2.2 - Platform: Linux-4.18.0-348.20.1.el8_5.x86_64-x86_64-with-glibc2.2.5 - Python version: 3.8.6 - PyArrow version: 8.0.0 - Pandas version: 1.4.2 The error is probably in line 795 in load.py: ``` builder_kwargs = { "hash": hash, "data_files": data_files, "name": os.path.basename(self.path), "base_path": self.path, **builder_kwargs, } ``` `os.path.basename` on a directory path with a trailing slash returns an empty string, rather than the name of the directory.
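A minimal sketch illustrating the `os.path.basename` behaviour described above, plus one possible normalization workaround (the `os.path.normpath` call is an illustrative suggestion, not necessarily the fix that was merged):

```python
import os

# basename returns the final path component; a trailing slash makes that component empty.
print(os.path.basename("/home/user"))   # -> "user"
print(os.path.basename("/home/user/"))  # -> ""

# Normalizing the path first recovers the directory name.
print(os.path.basename(os.path.normpath("./data/some-dataset/")))  # -> "some-dataset"
```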
{ "avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4", "events_url": "https://api.github.com/users/mariosasko/events{/privacy}", "followers_url": "https://api.github.com/users/mariosasko/followers", "following_url": "https://api.github.com/users/mariosasko/following{/other_user}", "gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/mariosasko", "id": 47462742, "login": "mariosasko", "node_id": "MDQ6VXNlcjQ3NDYyNzQy", "organizations_url": "https://api.github.com/users/mariosasko/orgs", "received_events_url": "https://api.github.com/users/mariosasko/received_events", "repos_url": "https://api.github.com/users/mariosasko/repos", "site_admin": false, "starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions", "type": "User", "url": "https://api.github.com/users/mariosasko", "user_view_type": "public" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/4399/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/4399/timeline
null
completed
{ "completed": 0, "percent_completed": 0, "total": 0 }
{ "blocked_by": 0, "blocking": 0, "total_blocked_by": 0, "total_blocking": 0 }
false
https://api.github.com/repos/huggingface/datasets/issues/4398
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4398/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4398/comments
https://api.github.com/repos/huggingface/datasets/issues/4398/events
https://github.com/huggingface/datasets/issues/4398
1,246,666,749
I_kwDODunzps5KTp_9
4,398
Calling `cast_column`/`remove_columns` and a sequence of `map` operations ends up making `faiss` fail with `ValueError`
{ "avatar_url": "https://avatars.githubusercontent.com/u/36760800?v=4", "events_url": "https://api.github.com/users/alvarobartt/events{/privacy}", "followers_url": "https://api.github.com/users/alvarobartt/followers", "following_url": "https://api.github.com/users/alvarobartt/following{/other_user}", "gists_url": "https://api.github.com/users/alvarobartt/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/alvarobartt", "id": 36760800, "login": "alvarobartt", "node_id": "MDQ6VXNlcjM2NzYwODAw", "organizations_url": "https://api.github.com/users/alvarobartt/orgs", "received_events_url": "https://api.github.com/users/alvarobartt/received_events", "repos_url": "https://api.github.com/users/alvarobartt/repos", "site_admin": false, "starred_url": "https://api.github.com/users/alvarobartt/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/alvarobartt/subscriptions", "type": "User", "url": "https://api.github.com/users/alvarobartt", "user_view_type": "public" }
[ { "color": "d73a4a", "default": true, "description": "Something isn't working", "id": 1935892857, "name": "bug", "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug" } ]
closed
false
null
[]
null
[ "It works if we either remove the `ds = ds.cast_column(\"id\", Value(\"int32\"))` line from the code above, or if instead calling `ds.remove_columns()` we remove the columns inside each mapping as `ds.map(..., remove_columns=[...])` instead of right after the mapping.\r\n\r\nBoth of those solutions seem to fix the ...
2022-05-24T14:41:34Z
2022-06-14T16:01:56Z
2022-06-14T16:01:56Z
MEMBER
null
null
null
null
First of all, sorry in advance for the unclear title, but this bug is weird to explain (at least for me), so I tried my best to summarize all the information in this issue. ## Describe the bug Calling a certain combination of operations over a 🤗 `Dataset` and then trying to calculate the `faiss` index with `.add_faiss_index` ends up throwing an exception while trying to set the format back of a previously removed column. But this just happens over certain conditions... I'll present some scenarios below! ## Steps to reproduce the bug Assuming the following dataset named `sample.csv` with some IMDb data: ```csv id,title,summary 1877830,"The Batman","When a sadistic serial killer begins murdering key political figures in Gotham, Batman is forced to investigate the city's hidden corruption and question his family's involvement." 9419884,"Doctor Strange in the Multiverse of Madness","Doctor Strange teams up with a mysterious teenage girl from his dreams who can travel across multiverses, to battle multiple threats, including other-universe versions of himself, which threaten to wipe out millions across the multiverse. They seek help from Wanda the Scarlet Witch, Wong and others." 11138512,"The Northman","From visionary director Robert Eggers comes The Northman, an action-filled epic that follows a young Viking prince on his quest to avenge his father's murder." 1745960,"Top Gun: Maverick","After more than thirty years of service as one of the Navy's top aviators, Pete Mitchell is where he belongs, pushing the envelope as a courageous test pilot and dodging the advancement in rank that would ground him." ``` We'll be able to reproduce the bug using the following piece of code: ```python # Sample code to reproduce the bug from transformers import DPRContextEncoder, DPRContextEncoderTokenizer import torch torch.set_grad_enabled(False) ctx_encoder = DPRContextEncoder.from_pretrained("facebook/dpr-ctx_encoder-single-nq-base") ctx_tokenizer = DPRContextEncoderTokenizer.from_pretrained("facebook/dpr-ctx_encoder-single-nq-base") from datasets import load_dataset, Value ds = load_dataset("csv", data_files=["sample.csv"], split="train") ds = ds.cast_column("id", Value("int32")) # from `int64` to `int32` ds = ds.map(lambda x: {"inputs": f"{ctx_tokenizer.sep_token}".join(["title", "summary"])}) ds = ds.remove_columns(["title", "summary"]) def generate_embeddings(x): return {"embeddings": ctx_encoder(**ctx_tokenizer(x["inputs"], return_tensors="pt"))[0][0].numpy()} ds = ds.map(generate_embeddings) ds = ds.remove_columns("inputs") ds.add_faiss_index(column="embeddings") # It fails here! ``` The code above is an adaptation of https://huggingface.co/docs/datasets/faiss_es, for the sake of presenting the bug with a simple example. ## Expected results Ideally, the `faiss` index should be calculated over the 🤗 `Dataset` and no exception should be triggered. ## Actual results But what happens instead is that a `ValueError: Columns ['inputs'] not in the dataset. Current columns in the dataset: ['id', 'embeddings']`, which makes no sense as that column has been previously dropped. ## Environment info <!-- You can run the command `datasets-cli env` and copy-and-paste its output below. --> - `datasets` version: 2.2.2 - Platform: Linux-5.4.0-1074-azure-x86_64-with-glibc2.31 - Python version: 3.9.5 - PyArrow version: 8.0.0 - Pandas version: 1.4.2
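A minimal sketch of the workaround mentioned in the comments, continuing from the reproduction snippet above (it reuses `ds`, `ctx_tokenizer`, and `generate_embeddings` from that snippet): dropping columns inside `map()` instead of calling `remove_columns()` afterwards avoids the stale column reference.

```python
# Workaround sketch: remove columns inside map() rather than in separate remove_columns() calls.
ds = ds.map(
    lambda x: {"inputs": f"{ctx_tokenizer.sep_token}".join([x["title"], x["summary"]])},
    remove_columns=["title", "summary"],
)
ds = ds.map(generate_embeddings, remove_columns=["inputs"])
ds.add_faiss_index(column="embeddings")
```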
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq", "user_view_type": "public" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/4398/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/4398/timeline
null
completed
{ "completed": 0, "percent_completed": 0, "total": 0 }
{ "blocked_by": 0, "blocking": 0, "total_blocked_by": 0, "total_blocking": 0 }
false
https://api.github.com/repos/huggingface/datasets/issues/4394
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4394/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4394/comments
https://api.github.com/repos/huggingface/datasets/issues/4394/events
https://github.com/huggingface/datasets/issues/4394
1,245,221,657
I_kwDODunzps5KOJMZ
4,394
trainer became extremely slow after reload dataset by `load_from_disk`
{ "avatar_url": "https://avatars.githubusercontent.com/u/50416856?v=4", "events_url": "https://api.github.com/users/conan1024hao/events{/privacy}", "followers_url": "https://api.github.com/users/conan1024hao/followers", "following_url": "https://api.github.com/users/conan1024hao/following{/other_user}", "gists_url": "https://api.github.com/users/conan1024hao/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/conan1024hao", "id": 50416856, "login": "conan1024hao", "node_id": "MDQ6VXNlcjUwNDE2ODU2", "organizations_url": "https://api.github.com/users/conan1024hao/orgs", "received_events_url": "https://api.github.com/users/conan1024hao/received_events", "repos_url": "https://api.github.com/users/conan1024hao/repos", "site_admin": false, "starred_url": "https://api.github.com/users/conan1024hao/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/conan1024hao/subscriptions", "type": "User", "url": "https://api.github.com/users/conan1024hao", "user_view_type": "public" }
[ { "color": "d73a4a", "default": true, "description": "Something isn't working", "id": 1935892857, "name": "bug", "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug" } ]
open
false
null
[]
null
[ "I tried to make the dataset much more smaller (100000 rows) , then the speed became `33.88it/s` from`8.62s/it`. It's nearly 200 times... Do you have any idea? Thank you!", "Similar issue: https://github.com/huggingface/transformers/issues/8818\r\n\r\nI changed `RandomSampler` to `SequentialSampler` in the `tra...
2022-05-23T14:04:37Z
2023-11-23T07:40:30Z
null
NONE
null
null
null
null
## Describe the bug Due to a memory problem, I need to save my tokenized datasets locally on CPU and reload them on multiple GPUs to run the training script. However, after I reload the dataset with `load_from_disk` and start training, the speed is extremely slow. It says I need about 1500 hours with 8 A100 cards. Before this, I could run the whole script in one day with a single A100 card. Since I am trying to pre-train a BERT model, **my dataset is very large (29058165 rows)** ## Steps to reproduce the bug ```python tokenized_datasets.save_to_disk( "/pathto/dataset" ) tokenized_datasets = load_from_disk( "/pathto/dataset" ) trainer = Trainer( model=model, args=training_args, train_dataset=tokenized_datasets["train"] if training_args.do_train else None, eval_dataset=tokenized_datasets["validation"] if training_args.do_eval else None, tokenizer=tokenizer, data_collator=data_collator, ) train_result = trainer.train(resume_from_checkpoint=checkpoint) ``` ## Expected results Without the save and reload process, I only need about one day to run the whole script with one A100 card. ## Actual results ``` [INFO|trainer.py:1290] 2022-05-23 22:49:46,266 >> ***** Running training ***** [INFO|trainer.py:1291] 2022-05-23 22:49:46,266 >> Num examples = 29058165 [INFO|trainer.py:1292] 2022-05-23 22:49:46,266 >> Num Epochs = 5 [INFO|trainer.py:1293] 2022-05-23 22:49:46,266 >> Instantaneous batch size per device = 16 [INFO|trainer.py:1294] 2022-05-23 22:49:46,266 >> Total train batch size (w. parallel, distributed & accumulation) = 256 [INFO|trainer.py:1295] 2022-05-23 22:49:46,266 >> Gradient Accumulation steps = 2 [INFO|trainer.py:1296] 2022-05-23 22:49:46,266 >> Total optimization steps = 567540 0%| | 1/567540 [00:09<1544:49:04, 9.80s/it] 0%| | 2/567540 [00:17<1320:00:17, 8.37s/it] 0%| | 3/567540 [00:26<1393:10:17, 8.84s/it] 0%| | 4/567540 [00:34<1344:56:33, 8.53s/it] 0%| | 5/567540 [00:43<1359:36:12, 8.62s/it] ``` ## Environment info ``` torch 1.11.0+cu113 torchaudio 0.11.0+cu113 torchvision 0.12.0+cu113 transformers 4.18.0 datasets 2.2.2 ```
null
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/4394/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/4394/timeline
null
null
{ "completed": 0, "percent_completed": 0, "total": 0 }
{ "blocked_by": 0, "blocking": 0, "total_blocked_by": 0, "total_blocking": 0 }
false
https://api.github.com/repos/huggingface/datasets/issues/4387
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4387/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4387/comments
https://api.github.com/repos/huggingface/datasets/issues/4387/events
https://github.com/huggingface/datasets/issues/4387
1,244,147,817
I_kwDODunzps5KKDBp
4,387
device/google/accessory/adk2012 - Git at Google
{ "avatar_url": "https://avatars.githubusercontent.com/u/87345839?v=4", "events_url": "https://api.github.com/users/Aeckard45/events{/privacy}", "followers_url": "https://api.github.com/users/Aeckard45/followers", "following_url": "https://api.github.com/users/Aeckard45/following{/other_user}", "gists_url": "https://api.github.com/users/Aeckard45/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/Aeckard45", "id": 87345839, "login": "Aeckard45", "node_id": "MDQ6VXNlcjg3MzQ1ODM5", "organizations_url": "https://api.github.com/users/Aeckard45/orgs", "received_events_url": "https://api.github.com/users/Aeckard45/received_events", "repos_url": "https://api.github.com/users/Aeckard45/repos", "site_admin": false, "starred_url": "https://api.github.com/users/Aeckard45/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Aeckard45/subscriptions", "type": "User", "url": "https://api.github.com/users/Aeckard45", "user_view_type": "public" }
[]
closed
false
null
[]
null
[]
2022-05-22T04:57:19Z
2022-05-23T06:36:27Z
2022-05-23T06:36:27Z
NONE
null
null
null
null
"git clone https://android.googlesource.com/device/google/accessory/adk2012" https://android.googlesource.com/device/google/accessory/adk2012/#:~:text=git%20clone%20https%3A//android.googlesource.com/device/google/accessory/adk2012
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova", "user_view_type": "public" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/4387/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/4387/timeline
null
completed
{ "completed": 0, "percent_completed": 0, "total": 0 }
{ "blocked_by": 0, "blocking": 0, "total_blocked_by": 0, "total_blocking": 0 }
false
https://api.github.com/repos/huggingface/datasets/issues/4386
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4386/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4386/comments
https://api.github.com/repos/huggingface/datasets/issues/4386/events
https://github.com/huggingface/datasets/issues/4386
1,243,965,532
I_kwDODunzps5KJWhc
4,386
Bug for wiki_auto_asset_turk from GEM
{ "avatar_url": "https://avatars.githubusercontent.com/u/37647985?v=4", "events_url": "https://api.github.com/users/StevenTang1998/events{/privacy}", "followers_url": "https://api.github.com/users/StevenTang1998/followers", "following_url": "https://api.github.com/users/StevenTang1998/following{/other_user}", "gists_url": "https://api.github.com/users/StevenTang1998/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/StevenTang1998", "id": 37647985, "login": "StevenTang1998", "node_id": "MDQ6VXNlcjM3NjQ3OTg1", "organizations_url": "https://api.github.com/users/StevenTang1998/orgs", "received_events_url": "https://api.github.com/users/StevenTang1998/received_events", "repos_url": "https://api.github.com/users/StevenTang1998/repos", "site_admin": false, "starred_url": "https://api.github.com/users/StevenTang1998/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/StevenTang1998/subscriptions", "type": "User", "url": "https://api.github.com/users/StevenTang1998", "user_view_type": "public" }
[ { "color": "d73a4a", "default": true, "description": "Something isn't working", "id": 1935892857, "name": "bug", "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug" } ]
closed
false
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova", "user_view_type": "public" }
[ { "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/o...
null
[ "Thanks for reporting, @StevenTang1998.\r\n\r\nI'm looking into it. ", "Hi @StevenTang1998,\r\n\r\nWe have fixed the issue:\r\n- #4389\r\n\r\nThe fix will be available in our next `datasets` library release. In the meantime, you can incorporate that fix by installing `datasets` from our GitHub repo:\r\n```\r\npip...
2022-05-21T12:31:30Z
2022-05-24T05:55:52Z
2022-05-23T10:29:55Z
NONE
null
null
null
null
## Describe the bug The script of wiki_auto_asset_turk for GEM may be out of date. ## Steps to reproduce the bug ```python import datasets datasets.load_dataset('gem', 'wiki_auto_asset_turk') ``` ## Actual results ``` Traceback (most recent call last): File "<stdin>", line 1, in <module> File "/home/tangtianyi/miniconda3/lib/python3.8/site-packages/datasets/load.py", line 1731, in load_dataset builder_instance.download_and_prepare( File "/home/tangtianyi/miniconda3/lib/python3.8/site-packages/datasets/builder.py", line 640, in download_and_prepare self._download_and_prepare( File "/home/tangtianyi/miniconda3/lib/python3.8/site-packages/datasets/builder.py", line 1158, in _download_and_prepare super()._download_and_prepare(dl_manager, verify_infos, check_duplicate_keys=verify_infos) File "/home/tangtianyi/miniconda3/lib/python3.8/site-packages/datasets/builder.py", line 707, in _download_and_prepare split_generators = self._split_generators(dl_manager, **split_generators_kwargs) File "/home/tangtianyi/.cache/huggingface/modules/datasets_modules/datasets/gem/982a54473b12c6a6e40d4356e025fb7172a5bb2065e655e2c1af51f2b3cf4ca1/gem.py", line 538, in _split_generators dl_dir = dl_manager.download_and_extract(_URLs[self.config.name]) File "/home/tangtianyi/miniconda3/lib/python3.8/site-packages/datasets/utils/download_manager.py", line 416, in download_and_extract return self.extract(self.download(url_or_urls)) File "/home/tangtianyi/miniconda3/lib/python3.8/site-packages/datasets/utils/download_manager.py", line 294, in download downloaded_path_or_paths = map_nested( File "/home/tangtianyi/miniconda3/lib/python3.8/site-packages/datasets/utils/py_utils.py", line 351, in map_nested mapped = [ File "/home/tangtianyi/miniconda3/lib/python3.8/site-packages/datasets/utils/py_utils.py", line 352, in <listcomp> _single_map_nested((function, obj, types, None, True, None)) File "/home/tangtianyi/miniconda3/lib/python3.8/site-packages/datasets/utils/py_utils.py", line 288, in _single_map_nested return function(data_struct) File "/home/tangtianyi/miniconda3/lib/python3.8/site-packages/datasets/utils/download_manager.py", line 320, in _download return cached_path(url_or_filename, download_config=download_config) File "/home/tangtianyi/miniconda3/lib/python3.8/site-packages/datasets/utils/file_utils.py", line 234, in cached_path output_path = get_from_cache( File "/home/tangtianyi/miniconda3/lib/python3.8/site-packages/datasets/utils/file_utils.py", line 579, in get_from_cache raise FileNotFoundError(f"Couldn't find file at {url}") FileNotFoundError: Couldn't find file at https://github.com/facebookresearch/asset/raw/master/dataset/asset.test.orig ```
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova", "user_view_type": "public" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/4386/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/4386/timeline
null
completed
{ "completed": 0, "percent_completed": 0, "total": 0 }
{ "blocked_by": 0, "blocking": 0, "total_blocked_by": 0, "total_blocking": 0 }
false
https://api.github.com/repos/huggingface/datasets/issues/4383
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4383/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4383/comments
https://api.github.com/repos/huggingface/datasets/issues/4383/events
https://github.com/huggingface/datasets/issues/4383
1,243,856,981
I_kwDODunzps5KI8BV
4,383
L
{ "avatar_url": "https://avatars.githubusercontent.com/u/99847861?v=4", "events_url": "https://api.github.com/users/AronCodes21/events{/privacy}", "followers_url": "https://api.github.com/users/AronCodes21/followers", "following_url": "https://api.github.com/users/AronCodes21/following{/other_user}", "gists_url": "https://api.github.com/users/AronCodes21/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/AronCodes21", "id": 99847861, "login": "AronCodes21", "node_id": "U_kgDOBfOOtQ", "organizations_url": "https://api.github.com/users/AronCodes21/orgs", "received_events_url": "https://api.github.com/users/AronCodes21/received_events", "repos_url": "https://api.github.com/users/AronCodes21/repos", "site_admin": false, "starred_url": "https://api.github.com/users/AronCodes21/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/AronCodes21/subscriptions", "type": "User", "url": "https://api.github.com/users/AronCodes21", "user_view_type": "public" }
[ { "color": "d73a4a", "default": true, "description": "Something isn't working", "id": 1935892857, "name": "bug", "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug" } ]
closed
false
null
[]
null
[]
2022-05-21T03:47:58Z
2022-05-21T19:20:13Z
2022-05-21T19:20:13Z
NONE
null
null
null
null
## Describe the L L ## Expected L A clear and concise lmll Specify the actual results or traceback. ## Environment info <!-- You can run the command `datasets-cli env` and copy-and-paste its output below. --> - `datasets` version: - Platform: - Python version: - PyArrow version:
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova", "user_view_type": "public" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/4383/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/4383/timeline
null
completed
{ "completed": 0, "percent_completed": 0, "total": 0 }
{ "blocked_by": 0, "blocking": 0, "total_blocked_by": 0, "total_blocking": 0 }
false
https://api.github.com/repos/huggingface/datasets/issues/4382
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4382/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4382/comments
https://api.github.com/repos/huggingface/datasets/issues/4382/events
https://github.com/huggingface/datasets/issues/4382
1,243,839,783
I_kwDODunzps5KI30n
4,382
First time trying
{ "avatar_url": "https://avatars.githubusercontent.com/u/87345839?v=4", "events_url": "https://api.github.com/users/Aeckard45/events{/privacy}", "followers_url": "https://api.github.com/users/Aeckard45/followers", "following_url": "https://api.github.com/users/Aeckard45/following{/other_user}", "gists_url": "https://api.github.com/users/Aeckard45/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/Aeckard45", "id": 87345839, "login": "Aeckard45", "node_id": "MDQ6VXNlcjg3MzQ1ODM5", "organizations_url": "https://api.github.com/users/Aeckard45/orgs", "received_events_url": "https://api.github.com/users/Aeckard45/received_events", "repos_url": "https://api.github.com/users/Aeckard45/repos", "site_admin": false, "starred_url": "https://api.github.com/users/Aeckard45/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Aeckard45/subscriptions", "type": "User", "url": "https://api.github.com/users/Aeckard45", "user_view_type": "public" }
[ { "color": "e99695", "default": false, "description": "Requesting to add a new dataset", "id": 2067376369, "name": "dataset request", "node_id": "MDU6TGFiZWwyMDY3Mzc2MzY5", "url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20request" } ]
closed
false
null
[]
null
[]
2022-05-21T02:15:18Z
2022-05-21T19:20:44Z
2022-05-21T19:20:44Z
NONE
null
null
null
null
## Adding a Dataset - **Name:** *name of the dataset* - **Description:** *short description of the dataset (or link to social media or blog post)* - **Paper:** *link to the dataset paper if available* - **Data:** *link to the Github repository or current dataset location* - **Motivation:** *what are some good reasons to have this dataset* Instructions to add a new dataset can be found [here](https://github.com/huggingface/datasets/blob/master/ADD_NEW_DATASET.md).
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova", "user_view_type": "public" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/4382/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/4382/timeline
null
completed
{ "completed": 0, "percent_completed": 0, "total": 0 }
{ "blocked_by": 0, "blocking": 0, "total_blocked_by": 0, "total_blocking": 0 }
false
https://api.github.com/repos/huggingface/datasets/issues/4381
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4381/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4381/comments
https://api.github.com/repos/huggingface/datasets/issues/4381/events
https://github.com/huggingface/datasets/issues/4381
1,243,478,863
I_kwDODunzps5KHftP
4,381
Bug in caching 2 datasets both with the same builder class name
{ "avatar_url": "https://avatars.githubusercontent.com/u/29777165?v=4", "events_url": "https://api.github.com/users/NouamaneTazi/events{/privacy}", "followers_url": "https://api.github.com/users/NouamaneTazi/followers", "following_url": "https://api.github.com/users/NouamaneTazi/following{/other_user}", "gists_url": "https://api.github.com/users/NouamaneTazi/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/NouamaneTazi", "id": 29777165, "login": "NouamaneTazi", "node_id": "MDQ6VXNlcjI5Nzc3MTY1", "organizations_url": "https://api.github.com/users/NouamaneTazi/orgs", "received_events_url": "https://api.github.com/users/NouamaneTazi/received_events", "repos_url": "https://api.github.com/users/NouamaneTazi/repos", "site_admin": false, "starred_url": "https://api.github.com/users/NouamaneTazi/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/NouamaneTazi/subscriptions", "type": "User", "url": "https://api.github.com/users/NouamaneTazi", "user_view_type": "public" }
[ { "color": "d73a4a", "default": true, "description": "Something isn't working", "id": 1935892857, "name": "bug", "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug" } ]
closed
false
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova", "user_view_type": "public" }
[ { "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/o...
null
[ "Hi @NouamaneTazi, thanks for reporting.\r\n\r\nPlease note that both datasets are cached in the same directory because their loading builder classes have the same name: `class MTOP(datasets.GeneratorBasedBuilder)`.\r\n\r\nYou should name their builder classes differently, e.g.:\r\n- `MtopDomain`\r\n- `MtopIntent`"...
2022-05-20T18:18:03Z
2022-06-02T08:18:37Z
2022-05-25T05:16:15Z
MEMBER
null
null
null
null
## Describe the bug The two datasets `mteb/mtop_intent` and `mteb/mtop_domain `use both the same cache folder `.cache/huggingface/datasets/mteb___mtop`. So if you first load `mteb/mtop_intent` then datasets will not load `mteb/mtop_domain`. If you delete this cache folder and flip the order how you load the two datasets , you will get the opposite datasets loaded (difference is here in terms of the label and label_text). ## Steps to reproduce the bug ```python import datasets dataset = datasets.load_dataset("mteb/mtop_intent", "en") print(dataset['train'][0]) dataset = datasets.load_dataset("mteb/mtop_domain", "en") print(dataset['train'][0]) ``` ## Expected results ``` Reusing dataset mtop (/home/nouamane/.cache/huggingface/datasets/mteb___mtop_intent/en/0.0.0/f930e32a294fed424f70263d8802390e350fff17862266e5fc156175c07d9c35) 100%|███████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 3/3 [00:00<00:00, 920.14it/s] {'id': 3232343436343136, 'text': 'Has Angelika Kratzer video messaged me?', 'label': 1, 'label_text': 'GET_MESSAGE'} Reusing dataset mtop (/home/nouamane/.cache/huggingface/datasets/mteb___mtop_domain/en/0.0.0/f930e32a294fed424f70263d8802390e350fff17862266e5fc156175c07d9c35) 100%|██████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 3/3 [00:00<00:00, 1307.59it/s] {'id': 3232343436343136, 'text': 'Has Angelika Kratzer video messaged me?', 'label': 0, 'label_text': 'messaging'} ``` ## Actual results ``` Reusing dataset mtop (/home/nouamane/.cache/huggingface/datasets/mteb___mtop/en/0.0.0/f930e32a294fed424f70263d8802390e350fff17862266e5fc156175c07d9c35) 100%|███████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 3/3 [00:00<00:00, 920.14it/s] {'id': 3232343436343136, 'text': 'Has Angelika Kratzer video messaged me?', 'label': 1, 'label_text': 'GET_MESSAGE'} Reusing dataset mtop (/home/nouamane/.cache/huggingface/datasets/mteb___mtop/en/0.0.0/f930e32a294fed424f70263d8802390e350fff17862266e5fc156175c07d9c35) 100%|██████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 3/3 [00:00<00:00, 1307.59it/s] {'id': 3232343436343136, 'text': 'Has Angelika Kratzer video messaged me?', 'label': 1, 'label_text': 'GET_MESSAGE'} ``` ## Environment info <!-- You can run the command `datasets-cli env` and copy-and-paste its output below. --> - `datasets` version: 2.2.1 - Platform: macOS-12.1-arm64-arm-64bit - Python version: 3.9.12 - PyArrow version: 8.0.0 - Pandas version: 1.4.2
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova", "user_view_type": "public" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/4381/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/4381/timeline
null
completed
{ "completed": 0, "percent_completed": 0, "total": 0 }
{ "blocked_by": 0, "blocking": 0, "total_blocked_by": 0, "total_blocking": 0 }
false
https://api.github.com/repos/huggingface/datasets/issues/4379
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4379/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4379/comments
https://api.github.com/repos/huggingface/datasets/issues/4379/events
https://github.com/huggingface/datasets/issues/4379
1,243,175,854
I_kwDODunzps5KGVuu
4,379
Latest dill release raises exception
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova", "user_view_type": "public" }
[ { "color": "d73a4a", "default": true, "description": "Something isn't working", "id": 1935892857, "name": "bug", "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug" } ]
closed
false
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova", "user_view_type": "public" }
[ { "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/o...
null
[ "Fixed by:\r\n- #4380 ", "Just an additional insight, the latest dill (either 0.3.5 or 0.3.5.1) also broke the hashing/fingerprinting of any mapping function.\r\n\r\nFor example:\r\n```\r\nfrom datasets import load_dataset\r\n\r\nd = load_dataset(\"rotten_tomatoes\")\r\nd.map(lambda x: x)\r\n```\r\n\r\nReturns th...
2022-05-20T13:48:36Z
2022-05-21T15:53:26Z
2022-05-20T17:06:27Z
MEMBER
null
null
null
null
## Describe the bug As reported by @sgugger, latest dill release is breaking things with Datasets. ``` ______________ ExamplesTests.test_run_speech_recognition_seq2seq _______________ self = <multiprocess.pool.ApplyResult object at 0x7fa5981a1cd0>, timeout = None def get(self, timeout=None): self.wait(timeout) if not self.ready(): raise TimeoutError if self._success: return self._value else: > raise self._value E TypeError: '>' not supported between instances of 'NoneType' and 'float' ```
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova", "user_view_type": "public" }
{ "+1": 1, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 1, "url": "https://api.github.com/repos/huggingface/datasets/issues/4379/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/4379/timeline
null
completed
{ "completed": 0, "percent_completed": 0, "total": 0 }
{ "blocked_by": 0, "blocking": 0, "total_blocked_by": 0, "total_blocking": 0 }
false
https://api.github.com/repos/huggingface/datasets/issues/4376
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4376/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4376/comments
https://api.github.com/repos/huggingface/datasets/issues/4376/events
https://github.com/huggingface/datasets/issues/4376
1,242,218,144
I_kwDODunzps5KCr6g
4,376
irc_disentangle viewer error
{ "avatar_url": "https://avatars.githubusercontent.com/u/25671683?v=4", "events_url": "https://api.github.com/users/labouz/events{/privacy}", "followers_url": "https://api.github.com/users/labouz/followers", "following_url": "https://api.github.com/users/labouz/following{/other_user}", "gists_url": "https://api.github.com/users/labouz/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/labouz", "id": 25671683, "login": "labouz", "node_id": "MDQ6VXNlcjI1NjcxNjgz", "organizations_url": "https://api.github.com/users/labouz/orgs", "received_events_url": "https://api.github.com/users/labouz/received_events", "repos_url": "https://api.github.com/users/labouz/repos", "site_admin": false, "starred_url": "https://api.github.com/users/labouz/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/labouz/subscriptions", "type": "User", "url": "https://api.github.com/users/labouz", "user_view_type": "public" }
[]
closed
false
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova", "user_view_type": "public" }
[ { "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/o...
null
[ "DUPLICATED comment from https://github.com/huggingface/datasets/issues/3807:\r\n\r\nmy code:\r\n```\r\nfrom datasets import load_dataset\r\n\r\ndataset = load_dataset(\"irc_disentangle\", download_mode=\"force_redownload\")\r\n```\r\nhowever, it produces the same error\r\n```\r\n[38](file:///Library/Frameworks/Pyt...
2022-05-19T19:15:16Z
2023-01-12T16:56:13Z
2022-06-02T08:20:00Z
NONE
null
null
null
null
The dataset viewer shows this message for the "ubuntu" config's "train", "test", and "validation" splits: ``` Server error Status code: 400 Exception: ValueError Message: Cannot seek streaming HTTP file ``` It appears to give the same message for the "channel_two" data as well. I also get a checksums error when using `load_dataset()` with this dataset, even with the `download_mode` and `ignore_verifications` options set. I referenced the issue here: https://github.com/huggingface/datasets/issues/3807
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova", "user_view_type": "public" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/4376/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/4376/timeline
null
completed
{ "completed": 0, "percent_completed": 0, "total": 0 }
{ "blocked_by": 0, "blocking": 0, "total_blocked_by": 0, "total_blocking": 0 }
false
https://api.github.com/repos/huggingface/datasets/issues/4374
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4374/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4374/comments
https://api.github.com/repos/huggingface/datasets/issues/4374/events
https://github.com/huggingface/datasets/issues/4374
1,241,860,535
I_kwDODunzps5KBUm3
4,374
extremely slow processing when using a custom dataset
{ "avatar_url": "https://avatars.githubusercontent.com/u/32235549?v=4", "events_url": "https://api.github.com/users/StephennFernandes/events{/privacy}", "followers_url": "https://api.github.com/users/StephennFernandes/followers", "following_url": "https://api.github.com/users/StephennFernandes/following{/other_user}", "gists_url": "https://api.github.com/users/StephennFernandes/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/StephennFernandes", "id": 32235549, "login": "StephennFernandes", "node_id": "MDQ6VXNlcjMyMjM1NTQ5", "organizations_url": "https://api.github.com/users/StephennFernandes/orgs", "received_events_url": "https://api.github.com/users/StephennFernandes/received_events", "repos_url": "https://api.github.com/users/StephennFernandes/repos", "site_admin": false, "starred_url": "https://api.github.com/users/StephennFernandes/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/StephennFernandes/subscriptions", "type": "User", "url": "https://api.github.com/users/StephennFernandes", "user_view_type": "public" }
[ { "color": "d73a4a", "default": true, "description": "Something isn't working", "id": 1935892857, "name": "bug", "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug" }, { "color": "d876e3", "default": true, "descript...
closed
false
null
[]
null
[ "Hi !\r\n\r\nMy guess is that some examples in your dataset are bigger than your RAM, and therefore loading them in RAM to pass them to `remove_non_indic_sentences` takes forever because it might use SWAP memory.\r\n\r\nMaybe several examples in your dataset are grouped together, can you check `len(lang_dataset[\"t...
2022-05-19T14:18:05Z
2023-07-25T15:07:17Z
2023-07-25T15:07:16Z
NONE
null
null
null
null
## Processing a custom dataset loaded as a .txt file is extremely slow, compared to a dataset of similar volume from the hub I have a large .txt file of 22 GB which I load into an HF dataset `lang_dataset = datasets.load_dataset("text", data_files="hi.txt")` Further, I use a pre-processing function to clean the dataset `lang_dataset["train"] = lang_dataset["train"].map( remove_non_indic_sentences, num_proc=12, batched=True, remove_columns=lang_dataset['train'].column_names, batch_size=64)` This processing takes an astronomical amount of time, while hogging all the RAM. A similar dataset of the same size that's available on the Hugging Face Hub works completely fine, running the same processing function on the same amount of data. `lang_dataset = datasets.load_dataset("oscar-corpus/OSCAR-2109", "hi", use_auth_token=True)` The hours predicted to preprocess are as follows: huggingface hub dataset: 6.5 hrs custom loaded dataset: 7000 hrs Note: both datasets are actually almost the same, just provided by different sources with +/- some samples; only one is hosted on the HF hub and the other is downloaded in a text format. ## Steps to reproduce the bug ``` import datasets import psutil import sys import glob from fastcore.utils import listify import re import gc def remove_non_indic_sentences(example): tmp_ls = [] eng_regex = r'[. a-zA-Z0-9ÖÄÅöäå _.,!"\'\/$]*' for e in listify(example['text']): matches = re.findall(eng_regex, e) for match in (str(match).strip() for match in matches if match not in [""," ", " ", ",", " ,", ", ", " , "]): if len(list(match.split(" "))) > 2: e = re.sub(match," ",e,count=1) tmp_ls.append(e) gc.collect() example['clean_text'] = tmp_ls return example lang_dataset = datasets.load_dataset("text", data_files="hi.txt") lang_dataset["train"] = lang_dataset["train"].map( remove_non_indic_sentences, num_proc=12, batched=True, remove_columns=lang_dataset['train'].column_names, batch_size=64) ## the same thing works much faster when loading the similar dataset from the hub lang_dataset = datasets.load_dataset("oscar-corpus/OSCAR-2109", "hi", split="train", use_auth_token=True) lang_dataset["train"] = lang_dataset["train"].map( remove_non_indic_sentences, num_proc=12, batched=True, remove_columns=lang_dataset['train'].column_names, batch_size=64) ``` ## Actual results A similar dataset of the same size that's available on the Hugging Face Hub works completely fine, running the same processing function on the same amount of data: `lang_dataset = datasets.load_dataset("oscar-corpus/OSCAR-2109", "hi", use_auth_token=True)` **The hours predicted to preprocess are as follows:** huggingface hub dataset: 6.5 hrs custom loaded dataset: 7000 hrs **I even tried the following:** - sharding the large 22 GB text files into smaller files and loading - saving the file to disk and then loading - using a smaller num_proc - using a smaller batch size - processing without batches, i.e. without `batched=True` ## Environment info <!-- You can run the command `datasets-cli env` and copy-and-paste its output below. --> - `datasets` version: 2.2.2.dev0 - Platform: Ubuntu 20.04 LTS - Python version: 3.9.7 - PyArrow version: 8.0.0
{ "avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4", "events_url": "https://api.github.com/users/mariosasko/events{/privacy}", "followers_url": "https://api.github.com/users/mariosasko/followers", "following_url": "https://api.github.com/users/mariosasko/following{/other_user}", "gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/mariosasko", "id": 47462742, "login": "mariosasko", "node_id": "MDQ6VXNlcjQ3NDYyNzQy", "organizations_url": "https://api.github.com/users/mariosasko/orgs", "received_events_url": "https://api.github.com/users/mariosasko/received_events", "repos_url": "https://api.github.com/users/mariosasko/repos", "site_admin": false, "starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions", "type": "User", "url": "https://api.github.com/users/mariosasko", "user_view_type": "public" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/4374/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/4374/timeline
null
completed
{ "completed": 0, "percent_completed": 0, "total": 0 }
{ "blocked_by": 0, "blocking": 0, "total_blocked_by": 0, "total_blocking": 0 }
false
https://api.github.com/repos/huggingface/datasets/issues/4366
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4366/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4366/comments
https://api.github.com/repos/huggingface/datasets/issues/4366/events
https://github.com/huggingface/datasets/issues/4366
1,239,534,165
I_kwDODunzps5J4cpV
4,366
TypeError: __init__() missing 1 required positional argument: 'scheme'
{ "avatar_url": "https://avatars.githubusercontent.com/u/99231535?v=4", "events_url": "https://api.github.com/users/jffgitt/events{/privacy}", "followers_url": "https://api.github.com/users/jffgitt/followers", "following_url": "https://api.github.com/users/jffgitt/following{/other_user}", "gists_url": "https://api.github.com/users/jffgitt/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/jffgitt", "id": 99231535, "login": "jffgitt", "node_id": "U_kgDOBeonLw", "organizations_url": "https://api.github.com/users/jffgitt/orgs", "received_events_url": "https://api.github.com/users/jffgitt/received_events", "repos_url": "https://api.github.com/users/jffgitt/repos", "site_admin": false, "starred_url": "https://api.github.com/users/jffgitt/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/jffgitt/subscriptions", "type": "User", "url": "https://api.github.com/users/jffgitt", "user_view_type": "public" }
[ { "color": "cfd3d7", "default": true, "description": "This issue or pull request already exists", "id": 1935892865, "name": "duplicate", "node_id": "MDU6TGFiZWwxOTM1ODkyODY1", "url": "https://api.github.com/repos/huggingface/datasets/labels/duplicate" } ]
closed
false
null
[]
null
[ "Duplicate of:\r\n- #3956\r\n\r\nI think you should report that issue to `elasticsearch` library: https://github.com/elastic/elasticsearch-py" ]
2022-05-18T07:17:29Z
2022-05-18T16:36:22Z
2022-05-18T16:36:21Z
NONE
null
null
null
null
"name" : "node-1", "cluster_name" : "elasticsearch", "cluster_uuid" : "", "version" : { "number" : "7.5.0", "build_flavor" : "default", "build_type" : "tar", "build_hash" : "", "build_date" : "2019-11-26T01:06:52.518245Z", "build_snapshot" : false, "lucene_version" : "8.3.0", "minimum_wire_compatibility_version" : "6.8.0", "minimum_index_compatibility_version" : "6.0.0-beta1" when I run the order: nohup python3 custom_service.pyc > service.log 2>&1& the log: nohup: 忽略输入 Traceback (most recent call last): File "/home/xfz/p3_custom_test/custom_service.py", line 55, in <module> File "/home/xfz/p3_custom_test/custom_service.py", line 48, in doInitialize File "custom_impl.py", line 286, in custom_setup File "custom_impl.py", line 127, in create_es_index File "/usr/local/lib/python3.7/site-packages/elasticsearch/_sync/client/__init__.py", line 345, in __init__ ssl_show_warn=ssl_show_warn, File "/usr/local/lib/python3.7/site-packages/elasticsearch/_sync/client/utils.py", line 105, in client_node_configs node_configs = hosts_to_node_configs(hosts) File "/usr/local/lib/python3.7/site-packages/elasticsearch/_sync/client/utils.py", line 154, in hosts_to_node_configs node_configs.append(host_mapping_to_node_config(host)) File "/usr/local/lib/python3.7/site-packages/elasticsearch/_sync/client/utils.py", line 221, in host_mapping_to_node_config return NodeConfig(**options) # type: ignore TypeError: __init__() missing 1 required positional argument: 'scheme' [1]+ 退出 1 nohup python3 custom_service.pyc > service.log 2>&1 custom_service_pyc can't running
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova", "user_view_type": "public" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/4366/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/4366/timeline
null
completed
{ "completed": 0, "percent_completed": 0, "total": 0 }
{ "blocked_by": 0, "blocking": 0, "total_blocked_by": 0, "total_blocking": 0 }
false
https://api.github.com/repos/huggingface/datasets/issues/4363
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4363/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4363/comments
https://api.github.com/repos/huggingface/datasets/issues/4363/events
https://github.com/huggingface/datasets/issues/4363
1,238,897,652
I_kwDODunzps5J2BP0
4,363
The dataset preview is not available for this split.
{ "avatar_url": "https://avatars.githubusercontent.com/u/7584674?v=4", "events_url": "https://api.github.com/users/roholazandie/events{/privacy}", "followers_url": "https://api.github.com/users/roholazandie/followers", "following_url": "https://api.github.com/users/roholazandie/following{/other_user}", "gists_url": "https://api.github.com/users/roholazandie/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/roholazandie", "id": 7584674, "login": "roholazandie", "node_id": "MDQ6VXNlcjc1ODQ2NzQ=", "organizations_url": "https://api.github.com/users/roholazandie/orgs", "received_events_url": "https://api.github.com/users/roholazandie/received_events", "repos_url": "https://api.github.com/users/roholazandie/repos", "site_admin": false, "starred_url": "https://api.github.com/users/roholazandie/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/roholazandie/subscriptions", "type": "User", "url": "https://api.github.com/users/roholazandie", "user_view_type": "public" }
[ { "color": "E5583E", "default": false, "description": "Related to the dataset viewer on huggingface.co", "id": 3470211881, "name": "dataset-viewer", "node_id": "LA_kwDODunzps7O1zsp", "url": "https://api.github.com/repos/huggingface/datasets/labels/dataset-viewer" } ]
closed
false
{ "avatar_url": "https://avatars.githubusercontent.com/u/1676121?v=4", "events_url": "https://api.github.com/users/severo/events{/privacy}", "followers_url": "https://api.github.com/users/severo/followers", "following_url": "https://api.github.com/users/severo/following{/other_user}", "gists_url": "https://api.github.com/users/severo/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/severo", "id": 1676121, "login": "severo", "node_id": "MDQ6VXNlcjE2NzYxMjE=", "organizations_url": "https://api.github.com/users/severo/orgs", "received_events_url": "https://api.github.com/users/severo/received_events", "repos_url": "https://api.github.com/users/severo/repos", "site_admin": false, "starred_url": "https://api.github.com/users/severo/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/severo/subscriptions", "type": "User", "url": "https://api.github.com/users/severo", "user_view_type": "public" }
[ { "avatar_url": "https://avatars.githubusercontent.com/u/1676121?v=4", "events_url": "https://api.github.com/users/severo/events{/privacy}", "followers_url": "https://api.github.com/users/severo/followers", "following_url": "https://api.github.com/users/severo/following{/other_user}", "gists_url...
null
[ "Hi! A dataset has to be streamable to work with the viewer. I did a quick test, and yours is, so this might be a bug in the viewer. cc @severo \r\n", "Looking at it. The message is now:\r\n\r\n```\r\nMessage: cannot cache function '__shear_dense': no locator available for file '/src/services/worker/.venv/...
2022-05-17T16:34:43Z
2022-06-08T12:32:10Z
2022-06-08T09:26:56Z
NONE
null
null
null
null
I have uploaded the corpus developed by our lab in the speech domain to huggingface [datasets](https://huggingface.co/datasets/Roh/ryanspeech). You can read about the companion paper accepted in interspeech 2021 [here](https://arxiv.org/abs/2106.08468). The dataset works fine but I can't make the dataset preview work. It gives me the following error that I don't understand. Can you help me to begin debugging it? ``` Status code: 400 Exception: AttributeError Message: 'NoneType' object has no attribute 'split' ```
{ "avatar_url": "https://avatars.githubusercontent.com/u/1676121?v=4", "events_url": "https://api.github.com/users/severo/events{/privacy}", "followers_url": "https://api.github.com/users/severo/followers", "following_url": "https://api.github.com/users/severo/following{/other_user}", "gists_url": "https://api.github.com/users/severo/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/severo", "id": 1676121, "login": "severo", "node_id": "MDQ6VXNlcjE2NzYxMjE=", "organizations_url": "https://api.github.com/users/severo/orgs", "received_events_url": "https://api.github.com/users/severo/received_events", "repos_url": "https://api.github.com/users/severo/repos", "site_admin": false, "starred_url": "https://api.github.com/users/severo/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/severo/subscriptions", "type": "User", "url": "https://api.github.com/users/severo", "user_view_type": "public" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/4363/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/4363/timeline
null
completed
{ "completed": 0, "percent_completed": 0, "total": 0 }
{ "blocked_by": 0, "blocking": 0, "total_blocked_by": 0, "total_blocking": 0 }
false
https://api.github.com/repos/huggingface/datasets/issues/4361
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4361/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4361/comments
https://api.github.com/repos/huggingface/datasets/issues/4361/events
https://github.com/huggingface/datasets/issues/4361
1,238,671,931
I_kwDODunzps5J1KI7
4,361
`udhr` doesn't load, dataset checksum mismatch
{ "avatar_url": "https://avatars.githubusercontent.com/u/121934?v=4", "events_url": "https://api.github.com/users/leondz/events{/privacy}", "followers_url": "https://api.github.com/users/leondz/followers", "following_url": "https://api.github.com/users/leondz/following{/other_user}", "gists_url": "https://api.github.com/users/leondz/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/leondz", "id": 121934, "login": "leondz", "node_id": "MDQ6VXNlcjEyMTkzNA==", "organizations_url": "https://api.github.com/users/leondz/orgs", "received_events_url": "https://api.github.com/users/leondz/received_events", "repos_url": "https://api.github.com/users/leondz/repos", "site_admin": false, "starred_url": "https://api.github.com/users/leondz/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/leondz/subscriptions", "type": "User", "url": "https://api.github.com/users/leondz", "user_view_type": "public" }
[ { "color": "d73a4a", "default": true, "description": "Something isn't working", "id": 1935892857, "name": "bug", "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug" } ]
closed
false
null
[]
null
[]
2022-05-17T13:47:09Z
2022-06-08T19:11:21Z
2022-06-08T19:11:21Z
CONTRIBUTOR
null
null
null
null
## Describe the bug Loading `udhr` fails due to a checksum mismatch for some source files. Looks like both of the source files on unicode.org have changed: size + checksum in datasets repo: ``` (hfdev) leon@blade:~/datasets/datasets/udhr$ jq .default.download_checksums < dataset_infos.json { "https://unicode.org/udhr/assemblies/udhr_xml.zip": { "num_bytes": 2273633, "checksum": "0565fa62c2ff155b84123198bcc967edd8c5eb9679eadc01e6fb44a5cf730fee" }, "https://unicode.org/udhr/assemblies/udhr_txt.zip": { "num_bytes": 2107471, "checksum": "087b474a070dd4096ae3028f9ee0b30dcdcb030cc85a1ca02e143be46327e5e5" } } ``` size + checksum regenerated from current source files: ``` (hfdev) leon@blade:~/datasets/datasets/udhr$ rm dataset_infos.json (hfdev) leon@blade:~/datasets/datasets/udhr$ datasets-cli test --save_infos udhr.py Using custom data configuration default Testing builder 'default' (1/1) Downloading and preparing dataset udhn/default (download: 4.18 MiB, generated: 6.15 MiB, post-processed: Unknown size, total: 10.33 MiB) to /home/leon/.cache/huggingface/datasets/udhn/default/0.0.0/ad74b91fa2b3c386e5751b0c52bdfda76d334f76731142fd432d4acc2e2fde66... Dataset udhn downloaded and prepared to /home/leon/.cache/huggingface/datasets/udhn/default/0.0.0/ad74b91fa2b3c386e5751b0c52bdfda76d334f76731142fd432d4acc2e2fde66. Subsequent calls will reuse this data. 100%|███████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 1/1 [00:00<00:00, 686.69it/s] Dataset Infos file saved at dataset_infos.json Test successful. (hfdev) leon@blade:~/datasets/datasets/udhr$ jq .default.download_checksums < dataset_infos.json { "https://unicode.org/udhr/assemblies/udhr_xml.zip": { "num_bytes": 2389690, "checksum": "a3350912790196c6e1b26bfd1c8a50e8575f5cf185922ecd9bd15713d7d21438" }, "https://unicode.org/udhr/assemblies/udhr_txt.zip": { "num_bytes": 2215441, "checksum": "cb87ecb25b56f34e4fd6f22b323000524fd9c06ae2a29f122b048789cf17e9fe" } } (hfdev) leon@blade:~/datasets/datasets/udhr$ ``` --- is unicode.org a sustainable hosting solution for this dataset? ## Steps to reproduce the bug ```python from datasets import load_dataset udhr = load_dataset("udhr") ``` ## Expected results That a Dataset object containing the UDHR data will be returned. ## Actual results ``` >>> d = load_dataset('udhr') Using custom data configuration default Downloading and preparing dataset udhn/default (download: 4.18 MiB, generated: 6.15 MiB, post-processed: Unknown size, total: 10.33 MiB) to /home/leon/.cache/huggingface/datasets/udhn/default/0.0.0/ad74b91fa2b3c386e5751b0c52bdfda76d334f76731142fd432d4acc2e2fde66... 
Traceback (most recent call last): File "<stdin>", line 1, in <module> File "/home/leon/.local/lib/python3.9/site-packages/datasets/load.py", line 1731, in load_dataset builder_instance.download_and_prepare( File "/home/leon/.local/lib/python3.9/site-packages/datasets/builder.py", line 613, in download_and_prepare self._download_and_prepare( File "/home/leon/.local/lib/python3.9/site-packages/datasets/builder.py", line 1117, in _download_and_prepare super()._download_and_prepare(dl_manager, verify_infos, check_duplicate_keys=verify_infos) File "/home/leon/.local/lib/python3.9/site-packages/datasets/builder.py", line 684, in _download_and_prepare verify_checksums( File "/home/leon/.local/lib/python3.9/site-packages/datasets/utils/info_utils.py", line 40, in verify_checksums raise NonMatchingChecksumError(error_msg + str(bad_urls)) datasets.utils.info_utils.NonMatchingChecksumError: Checksums didn't match for dataset source files: ['https://unicode.org/udhr/assemblies/udhr_xml.zip', 'https://unicode.org/udhr/assemblies/udhr_txt.zip'] >>> ``` ## Environment info <!-- You can run the command `datasets-cli env` and copy-and-paste its output below. --> - `datasets` version: 2.2.1 commit/4110fb6034f79c5fb470cf1043ff52180e9c63b7 - Platform: Linux Ubuntu 20.04 - Python version: 3.9.12 - PyArrow version: 8.0.0
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova", "user_view_type": "public" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/4361/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/4361/timeline
null
completed
{ "completed": 0, "percent_completed": 0, "total": 0 }
{ "blocked_by": 0, "blocking": 0, "total_blocked_by": 0, "total_blocking": 0 }
false
https://api.github.com/repos/huggingface/datasets/issues/4358
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4358/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4358/comments
https://api.github.com/repos/huggingface/datasets/issues/4358/events
https://github.com/huggingface/datasets/issues/4358
1,237,147,692
I_kwDODunzps5JvWAs
4,358
Missing dataset tags and sections in some dataset cards
{ "avatar_url": "https://avatars.githubusercontent.com/u/14205986?v=4", "events_url": "https://api.github.com/users/sashavor/events{/privacy}", "followers_url": "https://api.github.com/users/sashavor/followers", "following_url": "https://api.github.com/users/sashavor/following{/other_user}", "gists_url": "https://api.github.com/users/sashavor/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/sashavor", "id": 14205986, "login": "sashavor", "node_id": "MDQ6VXNlcjE0MjA1OTg2", "organizations_url": "https://api.github.com/users/sashavor/orgs", "received_events_url": "https://api.github.com/users/sashavor/received_events", "repos_url": "https://api.github.com/users/sashavor/repos", "site_admin": false, "starred_url": "https://api.github.com/users/sashavor/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sashavor/subscriptions", "type": "User", "url": "https://api.github.com/users/sashavor", "user_view_type": "public" }
[ { "color": "d73a4a", "default": true, "description": "Something isn't working", "id": 1935892857, "name": "bug", "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug" } ]
open
false
null
[]
null
[ "@lhoestq I can take this issue. Please can you point out to me where I can find the other positional arguments?", "Hi @RohitRathore1 :)\r\n\r\nYou can find all the YAML tags in the tagging app here: https://hf.co/spaces/huggingface/datasets-tagging). They're all passed as arguments to a DatasetMetadata object us...
2022-05-16T13:18:16Z
2022-05-30T15:36:52Z
null
CONTRIBUTOR
null
null
null
null
Summary of CircleCI errors for different dataset metadata: - **BoolQ**: missing 8 required positional arguments: 'annotations_creators', 'language_creators', 'licenses', 'multilinguality', 'size_categories', 'source_datasets', 'task_categories', and 'task_ids' - **Conllpp**: Expected some content in section `Citation Information` but it is empty. - **GLUE**: 'annotations_creators', 'language_creators', 'source_datasets': ['unknown'] are not registered tags - **ConLL2003**: field 'task_ids': ['part-of-speech-tagging'] are not registered tags for 'task_ids' - **Hate_speech18**: Expected some content in section `Data Instances` but it is empty, Expected some content in section `Data Splits` but it is empty - **Jigsaw_toxicity_pred**: Expected some content in section `Citation Information` but it is empty. - **LIAR**: `Data Instances`, `Data Fields`, `Data Splits`, `Citation Information` are empty. - **MSRA NER**: `Dataset Summary`, `Data Instances`, `Data Fields`, `Data Splits`, `Citation Information` are empty. - **sem_eval_2010_task_8**: missing 8 required positional arguments: 'annotations_creators', 'language_creators', 'licenses', 'multilinguality', 'size_categories', 'source_datasets', 'task_categories', and 'task_ids' - **sms_spam**: `Data Instances` and `Data Splits` are empty. - **Quora**: Expected some content in section `Citation Information` but it is empty, missing 8 required positional arguments: 'annotations_creators', 'language_creators', 'licenses', 'multilinguality', 'size_categories', 'source_datasets', 'task_categories', and 'task_ids' - **sentiment140**: missing 8 required positional arguments: 'annotations_creators', 'language_creators', 'licenses', 'multilinguality', 'size_categories', 'source_datasets', 'task_categories', and 'task_ids'
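Most of these failures come from the YAML metadata block at the top of each dataset card either missing required tags or containing unregistered ones. A rough sketch of checking a card locally before pushing, assuming the `DatasetMetadata` helper mentioned in the comments exposes a `from_readme` constructor in the installed `datasets` version (the import path and the README path are assumptions):

```python
from pathlib import Path

# Assumption: datasets.utils.metadata.DatasetMetadata has a from_readme()
# classmethod that parses and validates the YAML tags of a dataset card.
from datasets.utils.metadata import DatasetMetadata

try:
    metadata = DatasetMetadata.from_readme(Path("datasets/boolq/README.md"))
    print("tags look valid:", metadata)
except (TypeError, ValueError) as err:
    # Missing positional arguments or unregistered tags surface here,
    # mirroring the CircleCI messages listed above.
    print("card validation failed:", err)
```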
null
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/4358/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/4358/timeline
null
null
{ "completed": 0, "percent_completed": 0, "total": 0 }
{ "blocked_by": 0, "blocking": 0, "total_blocked_by": 0, "total_blocking": 0 }
false
https://api.github.com/repos/huggingface/datasets/issues/4354
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4354/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4354/comments
https://api.github.com/repos/huggingface/datasets/issues/4354/events
https://github.com/huggingface/datasets/issues/4354
1,236,404,383
I_kwDODunzps5Jsgif
4,354
Problems with WMT dataset
{ "avatar_url": "https://avatars.githubusercontent.com/u/8884008?v=4", "events_url": "https://api.github.com/users/eldarkurtic/events{/privacy}", "followers_url": "https://api.github.com/users/eldarkurtic/followers", "following_url": "https://api.github.com/users/eldarkurtic/following{/other_user}", "gists_url": "https://api.github.com/users/eldarkurtic/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/eldarkurtic", "id": 8884008, "login": "eldarkurtic", "node_id": "MDQ6VXNlcjg4ODQwMDg=", "organizations_url": "https://api.github.com/users/eldarkurtic/orgs", "received_events_url": "https://api.github.com/users/eldarkurtic/received_events", "repos_url": "https://api.github.com/users/eldarkurtic/repos", "site_admin": false, "starred_url": "https://api.github.com/users/eldarkurtic/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/eldarkurtic/subscriptions", "type": "User", "url": "https://api.github.com/users/eldarkurtic", "user_view_type": "public" }
[ { "color": "d73a4a", "default": true, "description": "Something isn't working", "id": 1935892857, "name": "bug", "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug" }, { "color": "2edb81", "default": false, "descrip...
closed
false
{ "avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4", "events_url": "https://api.github.com/users/mariosasko/events{/privacy}", "followers_url": "https://api.github.com/users/mariosasko/followers", "following_url": "https://api.github.com/users/mariosasko/following{/other_user}", "gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/mariosasko", "id": 47462742, "login": "mariosasko", "node_id": "MDQ6VXNlcjQ3NDYyNzQy", "organizations_url": "https://api.github.com/users/mariosasko/orgs", "received_events_url": "https://api.github.com/users/mariosasko/received_events", "repos_url": "https://api.github.com/users/mariosasko/repos", "site_admin": false, "starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions", "type": "User", "url": "https://api.github.com/users/mariosasko", "user_view_type": "public" }
[ { "avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4", "events_url": "https://api.github.com/users/mariosasko/events{/privacy}", "followers_url": "https://api.github.com/users/mariosasko/followers", "following_url": "https://api.github.com/users/mariosasko/following{/other_user}", ...
null
[ "Hi! Yes, the docs are outdated. Expect this to be fixed soon. \r\n\r\nIn the meantime, you can try to fix the issue yourself.\r\n\r\nThese are the configs/language pairs supported by `wmt15` from which you can choose:\r\n* `cs-en` (Czech - English)\r\n* `de-en` (German - English)\r\n* `fi-en` (Finnish- English)\r\...
2022-05-15T20:58:26Z
2022-07-11T14:54:02Z
2022-07-11T14:54:01Z
NONE
null
null
null
null
## Describe the bug I am trying to load the WMT15 dataset and to define which data sources to use for the train/validation/test splits, but unfortunately it seems that the official documentation at [https://huggingface.co/datasets/wmt15#:~:text=Versions%20exists%20for,wmt_translate%22%2C%20config%3Dconfig)](https://huggingface.co/datasets/wmt15#:~:text=Versions%20exists%20for,wmt_translate%22%2C%20config%3Dconfig)) doesn't work anymore. ## Steps to reproduce the bug ```python >>> import datasets >>> a = datasets.translate.wmt.WmtConfig() Traceback (most recent call last): File "<stdin>", line 1, in <module> AttributeError: module 'datasets' has no attribute 'translate' >>> a = datasets.wmt.WmtConfig() Traceback (most recent call last): File "<stdin>", line 1, in <module> AttributeError: module 'datasets' has no attribute 'wmt' ``` ## Expected results To load WMT15 with the given data sources. ## Environment info <!-- You can run the command `datasets-cli env` and copy-and-paste its output below. --> - `datasets` version: 2.0.0 - Platform: Linux-5.10.0-10-amd64-x86_64-with-glibc2.17 - Python version: 3.8.12 - PyArrow version: 7.0.0 - Pandas version: 1.4.1
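A minimal sketch of the config-name style of loading that current `datasets` releases expect, instead of the `WmtConfig` route from the old docs. The language pair below is just an example (`wmt15` also ships `cs-en`, `fi-en`, `fr-en` and `ru-en` configs), and note this does not cover choosing individual sub-corpora per split, which is what the report is ultimately after:

```python
from datasets import load_dataset

# Pick the source/target pair via the config name rather than a WmtConfig object.
wmt15_de_en = load_dataset("wmt15", "de-en")
print(wmt15_de_en["train"][0])
```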
{ "avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4", "events_url": "https://api.github.com/users/mariosasko/events{/privacy}", "followers_url": "https://api.github.com/users/mariosasko/followers", "following_url": "https://api.github.com/users/mariosasko/following{/other_user}", "gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/mariosasko", "id": 47462742, "login": "mariosasko", "node_id": "MDQ6VXNlcjQ3NDYyNzQy", "organizations_url": "https://api.github.com/users/mariosasko/orgs", "received_events_url": "https://api.github.com/users/mariosasko/received_events", "repos_url": "https://api.github.com/users/mariosasko/repos", "site_admin": false, "starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions", "type": "User", "url": "https://api.github.com/users/mariosasko", "user_view_type": "public" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/4354/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/4354/timeline
null
completed
{ "completed": 0, "percent_completed": 0, "total": 0 }
{ "blocked_by": 0, "blocking": 0, "total_blocked_by": 0, "total_blocking": 0 }
false
https://api.github.com/repos/huggingface/datasets/issues/4352
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4352/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4352/comments
https://api.github.com/repos/huggingface/datasets/issues/4352/events
https://github.com/huggingface/datasets/issues/4352
1,236,086,170
I_kwDODunzps5JrS2a
4,352
When using `dataset.map()` if passed `Features` types do not match what is returned from the mapped function, execution does not except in an obvious way
{ "avatar_url": "https://avatars.githubusercontent.com/u/99206017?v=4", "events_url": "https://api.github.com/users/plamb-viso/events{/privacy}", "followers_url": "https://api.github.com/users/plamb-viso/followers", "following_url": "https://api.github.com/users/plamb-viso/following{/other_user}", "gists_url": "https://api.github.com/users/plamb-viso/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/plamb-viso", "id": 99206017, "login": "plamb-viso", "node_id": "U_kgDOBenDgQ", "organizations_url": "https://api.github.com/users/plamb-viso/orgs", "received_events_url": "https://api.github.com/users/plamb-viso/received_events", "repos_url": "https://api.github.com/users/plamb-viso/repos", "site_admin": false, "starred_url": "https://api.github.com/users/plamb-viso/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/plamb-viso/subscriptions", "type": "User", "url": "https://api.github.com/users/plamb-viso", "user_view_type": "public" }
[ { "color": "d73a4a", "default": true, "description": "Something isn't working", "id": 1935892857, "name": "bug", "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug" } ]
open
false
null
[]
null
[ "Hi ! Thanks for reporting :) `datasets` usually returns a `pa.lib.ArrowInvalid` error if the feature types don't match.\r\n\r\nIt would be awesome if we had a way to reproduce the `OverflowError` in this case, to better understand what happened and be able to provide the best error message" ]
2022-05-14T17:55:15Z
2022-05-16T15:09:17Z
null
NONE
null
null
null
null
## Describe the bug Recently I was trying to use `.map()` to preprocess a dataset. I defined the expected Features and passed them into `.map()` like `dataset.map(preprocess_data, features=features)`. My expected `Features` keys matched what came out of `preprocess_data`, but the types I had defined for them did not match the types that came back. Because of this, I ended up in tracebacks deep inside arrow_dataset.py and arrow_writer.py with exceptions that [did not make clear what the problem was](https://github.com/huggingface/datasets/issues/4349). In short, I ended up with overflows and the OS killing processes when Arrow was attempting to write. It wasn't until I dug into `def write_batch` and the loop that loops over cols that I figured out what was going on. It seems like `.map()` could check that, for at least one instance from the dataset, the returned data's types match the types provided by the `features` param, and error out with a clear exception if they don't. This would make the cause of the issue much more understandable and save people time. This could be construed as a feature but it feels more like a bug to me. ## Steps to reproduce the bug I don't have explicit code to repro the bug, but I'll show an example. Code prior to the fix: ```python def preprocess_data(examples): # returns an encoded data dict with keys that match the features, but the types do not match ... def get_encoded_data(data): dataset = Dataset.from_pandas(data) unique_labels = data['audit_type'].unique().tolist() features = Features({ 'image': Array3D(dtype="uint8", shape=(3, 224, 224)), 'input_ids': Sequence(feature=Value(dtype='int64')), 'attention_mask': Sequence(Value(dtype='int64')), 'token_type_ids': Sequence(Value(dtype='int64')), 'bbox': Array2D(dtype="int64", shape=(512, 4)), 'label': ClassLabel(num_classes=len(unique_labels), names=unique_labels), }) encoded_dataset = dataset.map(preprocess_data, features=features, remove_columns=dataset.column_names) ``` The Features set that fixed it: ```python features = Features({ 'image': Sequence(Array3D(dtype="uint8", shape=(3, 224, 224))), 'input_ids': Sequence(Sequence(feature=Value(dtype='int64'))), 'attention_mask': Sequence(Sequence(Value(dtype='int64'))), 'token_type_ids': Sequence(Sequence(Value(dtype='int64'))), 'bbox': Sequence(Array2D(dtype="int64", shape=(512, 4))), 'label': ClassLabel(num_classes=len(unique_labels), names=unique_labels), }) ``` The difference between my original code (which was based on documentation) and the working code is the addition of the `Sequence(...)` wrapper to 4/5 features, as I am working with paginated data and the doc examples are not. ## Expected results Dataset.map() attempts to validate the data types for each Feature on the first iteration and errors out if they are not validated. ## Actual results Based on the value of `writer_batch_size`, execution errors out when Arrow attempts to write because the types do not match, though its error messages don't make this obvious. Example errors: ``` OverflowError: There was an overflow with type <class 'list'>. Try to reduce writer_batch_size to have batches smaller than 2GB. (offset overflow while concatenating arrays) ``` ``` zsh: killed python doc_classification.py UserWarning: resource_tracker: There appear to be 1 leaked semaphore objects to clean up at shutdown ``` ## Environment info <!-- You can run the command `datasets-cli env` and copy-and-paste its output below. 
--> - `datasets` version: 2.1.0 - Platform: macOS-12.2.1-arm64-arm-64bit - Python version: 3.9.12 - PyArrow version: 6.0.1 - Pandas version: 1.4.2
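A rough sketch of the kind of early check being requested, done by hand before the full `.map()` call. It assumes the `dataset`, `preprocess_data` and `features` names from the snippet above and simply forces the Arrow conversion on one example, so a type mismatch fails fast with a readable error instead of deep inside the writer:

```python
from datasets import Dataset

# Encode a single preprocessed example against the declared features; if the
# types don't line up, the Arrow conversion raises here rather than partway
# through writing the full mapped dataset.
first_encoded = preprocess_data(dataset[0])
probe = Dataset.from_dict({k: [v] for k, v in first_encoded.items()}, features=features)
print(probe.features)
```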
null
{ "+1": 3, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 3, "url": "https://api.github.com/repos/huggingface/datasets/issues/4352/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/4352/timeline
null
null
{ "completed": 0, "percent_completed": 0, "total": 0 }
{ "blocked_by": 0, "blocking": 0, "total_blocked_by": 0, "total_blocking": 0 }
false
https://api.github.com/repos/huggingface/datasets/issues/4351
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4351/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4351/comments
https://api.github.com/repos/huggingface/datasets/issues/4351/events
https://github.com/huggingface/datasets/issues/4351
1,235,950,209
I_kwDODunzps5JqxqB
4,351
Add optional progress bar for .save_to_disk(..) and .load_from_disk(..) when working with remote filesystems
{ "avatar_url": "https://avatars.githubusercontent.com/u/5154447?v=4", "events_url": "https://api.github.com/users/Rexhaif/events{/privacy}", "followers_url": "https://api.github.com/users/Rexhaif/followers", "following_url": "https://api.github.com/users/Rexhaif/following{/other_user}", "gists_url": "https://api.github.com/users/Rexhaif/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/Rexhaif", "id": 5154447, "login": "Rexhaif", "node_id": "MDQ6VXNlcjUxNTQ0NDc=", "organizations_url": "https://api.github.com/users/Rexhaif/orgs", "received_events_url": "https://api.github.com/users/Rexhaif/received_events", "repos_url": "https://api.github.com/users/Rexhaif/repos", "site_admin": false, "starred_url": "https://api.github.com/users/Rexhaif/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Rexhaif/subscriptions", "type": "User", "url": "https://api.github.com/users/Rexhaif", "user_view_type": "public" }
[ { "color": "a2eeef", "default": true, "description": "New feature or request", "id": 1935892871, "name": "enhancement", "node_id": "MDU6TGFiZWwxOTM1ODkyODcx", "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement" } ]
closed
false
null
[]
null
[ "Hi! I like this idea. For consistency with `load_dataset`, we can use `fsspec`'s `TqdmCallback` in `.load_from_disk` to monitor the number of bytes downloaded, and in `.save_to_disk`, we can track the number of saved shards for consistency with `push_to_hub` (after we implement https://github.com/huggingface/data...
2022-05-14T11:30:42Z
2022-12-14T18:22:59Z
2022-12-14T18:22:59Z
CONTRIBUTOR
null
null
null
null
**Is your feature request related to a problem? Please describe.** When working with large datasets stored on remote filesystems (such as s3), the process of uploading a dataset can take a really long time. For instance: I was uploading a re-processed version of wmt17 en-ru to my s3 bucket and it took like 35 minutes (and that's given that I have a fiber optic connection). The only output during that process was a progress bar for flattening indices and then ~35 minutes of complete silence. **Describe the solution you'd like** I want to be able to enable a progress bar when calling .save_to_disk(..) and .load_from_disk(..); it would track either the number of bytes sent/received or the number of records written/loaded, and give some ETA. Basically just tqdm. **Describe alternatives you've considered** - Save the dataset to a tmp folder on disk and then upload it using a custom wrapper over botocore that shows a progress bar, like [this](https://alexwlchan.net/2021/04/s3-progress-bars/).
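A minimal sketch of the "save locally, then upload with a visible progress bar" alternative using fsspec's built-in callback instead of a hand-rolled botocore wrapper. The bucket name and paths are made up, and `dataset` is assumed to be an existing `Dataset`:

```python
import fsspec
from fsspec.callbacks import TqdmCallback

# Write the Arrow files to local disk first, where save_to_disk is fast.
dataset.save_to_disk("/tmp/wmt17_en_ru_processed")

# Then push the folder to S3 with a per-file progress bar from fsspec.
fs = fsspec.filesystem("s3")
fs.put(
    "/tmp/wmt17_en_ru_processed",
    "my-bucket/datasets/wmt17_en_ru_processed",
    recursive=True,
    callback=TqdmCallback(tqdm_kwargs={"desc": "uploading", "unit": "file"}),
)
```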
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq", "user_view_type": "public" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/4351/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/4351/timeline
null
completed
{ "completed": 0, "percent_completed": 0, "total": 0 }
{ "blocked_by": 0, "blocking": 0, "total_blocked_by": 0, "total_blocking": 0 }
false
https://api.github.com/repos/huggingface/datasets/issues/4349
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4349/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4349/comments
https://api.github.com/repos/huggingface/datasets/issues/4349/events
https://github.com/huggingface/datasets/issues/4349
1,235,474,765
I_kwDODunzps5Jo9lN
4,349
Dataset.map()'s fails at any value of parameter writer_batch_size
{ "avatar_url": "https://avatars.githubusercontent.com/u/99206017?v=4", "events_url": "https://api.github.com/users/plamb-viso/events{/privacy}", "followers_url": "https://api.github.com/users/plamb-viso/followers", "following_url": "https://api.github.com/users/plamb-viso/following{/other_user}", "gists_url": "https://api.github.com/users/plamb-viso/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/plamb-viso", "id": 99206017, "login": "plamb-viso", "node_id": "U_kgDOBenDgQ", "organizations_url": "https://api.github.com/users/plamb-viso/orgs", "received_events_url": "https://api.github.com/users/plamb-viso/received_events", "repos_url": "https://api.github.com/users/plamb-viso/repos", "site_admin": false, "starred_url": "https://api.github.com/users/plamb-viso/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/plamb-viso/subscriptions", "type": "User", "url": "https://api.github.com/users/plamb-viso", "user_view_type": "public" }
[ { "color": "d73a4a", "default": true, "description": "Something isn't working", "id": 1935892857, "name": "bug", "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug" } ]
closed
false
null
[]
null
[ "Note that this same issue occurs even if i preprocess with the more default way of tokenizing that uses LayoutLMv2Processor's internal OCR:\r\n\r\n```python\r\n feature_extractor = LayoutLMv2FeatureExtractor()\r\n tokenizer = LayoutLMv2Tokenizer.from_pretrained(\"microsoft/layoutlmv2-base-uncased\")\...
2022-05-13T16:55:12Z
2022-06-02T12:51:11Z
2022-05-14T15:08:08Z
NONE
null
null
null
null
## Describe the bug If the value of `writer_batch_size` is less than the total number of instances in the dataset, it will fail at that same number of instances. If it is greater than the total number of instances, it fails on the last instance. Context: I am attempting to fine-tune a pre-trained HuggingFace transformers model called LayoutLMv2. This model takes three inputs: document images, words and word bounding boxes. [The Processor for this model has two options](https://huggingface.co/docs/transformers/model_doc/layoutlmv2#usage-layoutlmv2processor): the default is passing a document to the Processor and allowing it to create images of the document and use PyTesseract to perform OCR and generate words/bounding boxes. The other option is to provide `revision="no_ocr"` to the pre-trained model, which allows you to use your own OCR results (in my case, Amazon Textract), so you have to provide the image, words and bounding boxes yourself. I am using this second option, which might be good context for the bug. I am using the Dataset.map() paradigm to create these three inputs, encode them and save the dataset. Note that my documents (data instances) on average are fairly large and can range from 1 page up to 300 pages. The code I am using is provided below. ## Steps to reproduce the bug I do not have explicit sample code, but I will paste the code I'm using in case reading it helps. When `.map()` is called, the dataset has 2933 rows, many of which represent large PDF documents. ```python def get_encoded_data(data): dataset = Dataset.from_pandas(data) unique_labels = data['label'].unique() features = Features({ 'image': Array3D(dtype="int64", shape=(3, 224, 224)), 'input_ids': Sequence(feature=Value(dtype='int64')), 'attention_mask': Sequence(Value(dtype='int64')), 'token_type_ids': Sequence(Value(dtype='int64')), 'bbox': Array2D(dtype="int64", shape=(512, 4)), 'label': ClassLabel(num_classes=len(unique_labels), names=unique_labels), }) encoded_dataset = dataset.map(preprocess_data, features=features, remove_columns=dataset.column_names, writer_batch_size=dataset.num_rows+1) encoded_dataset.save_to_disk(TRAINING_DATA_PATH + ENCODED_DATASET_NAME) encoded_dataset.set_format(type="torch") return encoded_dataset ``` ```python PROCESSOR = LayoutLMv2Processor.from_pretrained(MODEL_PATH, revision="no_ocr", use_fast=False) def preprocess_data(examples): directory = os.path.join(FILES_PATH, examples['file_location']) images_dir = os.path.join(directory, PDF_IMAGE_DIR) textract_response_path = os.path.join(directory, 'textract.json') doc_meta_path = os.path.join(directory, 'doc_meta.json') textract_document = get_textract_document(textract_response_path, doc_meta_path) images, words, bboxes = get_doc_training_data(images_dir, textract_document) encoded_inputs = PROCESSOR(images, words, boxes=bboxes, padding="max_length", truncation=True) # https://github.com/NielsRogge/Transformers-Tutorials/issues/36 encoded_inputs["image"] = np.array(encoded_inputs["image"]) encoded_inputs["label"] = examples['label_id'] return encoded_inputs ``` ## Expected results My expectation is that `writer_batch_size` allows one to simply trade off performance and memory requirements, not that it must be a specific number for `.map()` to function correctly. ## Actual results If writer_batch_size is set to a value less than the number of rows, I get either: ``` OverflowError: There was an overflow with type <class 'list'>. Try to reduce writer_batch_size to have batches smaller than 2GB. 
(offset overflow while concatenating arrays) ``` or simply ``` zsh: killed python doc_classification.py UserWarning: resource_tracker: There appear to be 1 leaked semaphore objects to clean up at shutdown ``` If it is greater than the number of rows, I get the `zsh: killed` error above. ## Environment info <!-- You can run the command `datasets-cli env` and copy-and-paste its output below. --> - `datasets` version: 2.1.0 - Platform: macOS-12.2.1-arm64-arm-64bit - Python version: 3.9.12 - PyArrow version: 6.0.1 - Pandas version: 1.4.2
{ "avatar_url": "https://avatars.githubusercontent.com/u/99206017?v=4", "events_url": "https://api.github.com/users/plamb-viso/events{/privacy}", "followers_url": "https://api.github.com/users/plamb-viso/followers", "following_url": "https://api.github.com/users/plamb-viso/following{/other_user}", "gists_url": "https://api.github.com/users/plamb-viso/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/plamb-viso", "id": 99206017, "login": "plamb-viso", "node_id": "U_kgDOBenDgQ", "organizations_url": "https://api.github.com/users/plamb-viso/orgs", "received_events_url": "https://api.github.com/users/plamb-viso/received_events", "repos_url": "https://api.github.com/users/plamb-viso/repos", "site_admin": false, "starred_url": "https://api.github.com/users/plamb-viso/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/plamb-viso/subscriptions", "type": "User", "url": "https://api.github.com/users/plamb-viso", "user_view_type": "public" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/4349/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/4349/timeline
null
completed
{ "completed": 0, "percent_completed": 0, "total": 0 }
{ "blocked_by": 0, "blocking": 0, "total_blocked_by": 0, "total_blocking": 0 }
false
https://api.github.com/repos/huggingface/datasets/issues/4348
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4348/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4348/comments
https://api.github.com/repos/huggingface/datasets/issues/4348/events
https://github.com/huggingface/datasets/issues/4348
1,235,432,976
I_kwDODunzps5JozYQ
4,348
`inspect` functions can't fetch dataset script from the Hub
{ "avatar_url": "https://avatars.githubusercontent.com/u/59462357?v=4", "events_url": "https://api.github.com/users/stevhliu/events{/privacy}", "followers_url": "https://api.github.com/users/stevhliu/followers", "following_url": "https://api.github.com/users/stevhliu/following{/other_user}", "gists_url": "https://api.github.com/users/stevhliu/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/stevhliu", "id": 59462357, "login": "stevhliu", "node_id": "MDQ6VXNlcjU5NDYyMzU3", "organizations_url": "https://api.github.com/users/stevhliu/orgs", "received_events_url": "https://api.github.com/users/stevhliu/received_events", "repos_url": "https://api.github.com/users/stevhliu/repos", "site_admin": false, "starred_url": "https://api.github.com/users/stevhliu/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/stevhliu/subscriptions", "type": "User", "url": "https://api.github.com/users/stevhliu", "user_view_type": "public" }
[ { "color": "d73a4a", "default": true, "description": "Something isn't working", "id": 1935892857, "name": "bug", "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug" } ]
closed
false
{ "avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4", "events_url": "https://api.github.com/users/mariosasko/events{/privacy}", "followers_url": "https://api.github.com/users/mariosasko/followers", "following_url": "https://api.github.com/users/mariosasko/following{/other_user}", "gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/mariosasko", "id": 47462742, "login": "mariosasko", "node_id": "MDQ6VXNlcjQ3NDYyNzQy", "organizations_url": "https://api.github.com/users/mariosasko/orgs", "received_events_url": "https://api.github.com/users/mariosasko/received_events", "repos_url": "https://api.github.com/users/mariosasko/repos", "site_admin": false, "starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions", "type": "User", "url": "https://api.github.com/users/mariosasko", "user_view_type": "public" }
[ { "avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4", "events_url": "https://api.github.com/users/mariosasko/events{/privacy}", "followers_url": "https://api.github.com/users/mariosasko/followers", "following_url": "https://api.github.com/users/mariosasko/following{/other_user}", ...
null
[ "Hi, thanks for reporting! `git bisect` points to #2986 as the PR that introduced the bug. Since then, there have been some additional changes to the loading logic, and in the current state, `force_local_path` (set via `local_path`) forbids pulling a script from the internet instead of downloading it: https://githu...
2022-05-13T16:08:26Z
2022-06-09T10:26:06Z
2022-06-09T10:26:06Z
MEMBER
null
null
null
null
The `inspect_dataset` and `inspect_metric` functions are unable to retrieve a dataset or metric script from the Hub and store it locally at the specified `local_path`: ```py >>> from datasets import inspect_dataset >>> inspect_dataset('rotten_tomatoes', local_path='path/to/my/local/folder') FileNotFoundError: Couldn't find a dataset script at /content/rotten_tomatoes/rotten_tomatoes.py or any data file in the same directory. ```
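A hypothetical workaround sketch while the `inspect_*` functions can't pull scripts from the Hub: download the loading script directly with `huggingface_hub`. The repo id and filename below follow the usual `<dataset>/<dataset>.py` convention and are an assumption:

```python
from huggingface_hub import hf_hub_download

# Fetch the dataset's loading script straight from the Hub repo instead of
# going through inspect_dataset.
script_path = hf_hub_download(
    repo_id="rotten_tomatoes",
    filename="rotten_tomatoes.py",
    repo_type="dataset",
)
print(script_path)
```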
{ "avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4", "events_url": "https://api.github.com/users/mariosasko/events{/privacy}", "followers_url": "https://api.github.com/users/mariosasko/followers", "following_url": "https://api.github.com/users/mariosasko/following{/other_user}", "gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/mariosasko", "id": 47462742, "login": "mariosasko", "node_id": "MDQ6VXNlcjQ3NDYyNzQy", "organizations_url": "https://api.github.com/users/mariosasko/orgs", "received_events_url": "https://api.github.com/users/mariosasko/received_events", "repos_url": "https://api.github.com/users/mariosasko/repos", "site_admin": false, "starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions", "type": "User", "url": "https://api.github.com/users/mariosasko", "user_view_type": "public" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/4348/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/4348/timeline
null
completed
{ "completed": 0, "percent_completed": 0, "total": 0 }
{ "blocked_by": 0, "blocking": 0, "total_blocked_by": 0, "total_blocking": 0 }
false
https://api.github.com/repos/huggingface/datasets/issues/4346
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4346/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4346/comments
https://api.github.com/repos/huggingface/datasets/issues/4346/events
https://github.com/huggingface/datasets/issues/4346
1,235,067,062
I_kwDODunzps5JnaC2
4,346
GH Action to build documentation never ends
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova", "user_view_type": "public" }
[ { "color": "d73a4a", "default": true, "description": "Something isn't working", "id": 1935892857, "name": "bug", "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug" } ]
closed
false
null
[]
null
[]
2022-05-13T10:44:44Z
2022-05-13T11:22:00Z
2022-05-13T11:22:00Z
MEMBER
null
null
null
null
## Describe the bug See: https://github.com/huggingface/datasets/runs/6418035586?check_suite_focus=true I finally forced the cancellation of the workflow.
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova", "user_view_type": "public" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/4346/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/4346/timeline
null
completed
{ "completed": 0, "percent_completed": 0, "total": 0 }
{ "blocked_by": 0, "blocking": 0, "total_blocked_by": 0, "total_blocking": 0 }
false
https://api.github.com/repos/huggingface/datasets/issues/4343
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4343/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4343/comments
https://api.github.com/repos/huggingface/datasets/issues/4343/events
https://github.com/huggingface/datasets/issues/4343
1,234,864,168
I_kwDODunzps5Jmogo
4,343
Metrics documentation is not accessible in the datasets doc UI
{ "avatar_url": "https://avatars.githubusercontent.com/u/9808326?v=4", "events_url": "https://api.github.com/users/fxmarty/events{/privacy}", "followers_url": "https://api.github.com/users/fxmarty/followers", "following_url": "https://api.github.com/users/fxmarty/following{/other_user}", "gists_url": "https://api.github.com/users/fxmarty/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/fxmarty", "id": 9808326, "login": "fxmarty", "node_id": "MDQ6VXNlcjk4MDgzMjY=", "organizations_url": "https://api.github.com/users/fxmarty/orgs", "received_events_url": "https://api.github.com/users/fxmarty/received_events", "repos_url": "https://api.github.com/users/fxmarty/repos", "site_admin": false, "starred_url": "https://api.github.com/users/fxmarty/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/fxmarty/subscriptions", "type": "User", "url": "https://api.github.com/users/fxmarty", "user_view_type": "public" }
[ { "color": "a2eeef", "default": true, "description": "New feature or request", "id": 1935892871, "name": "enhancement", "node_id": "MDU6TGFiZWwxOTM1ODkyODcx", "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement" }, { "color": "d722e8", "default": fals...
closed
false
null
[]
null
[ "Hey @fxmarty :) Yes we are working on showing the docs of all the metrics on the Hugging face website. If you want to follow the advancements you can check the [evaluate](https://github.com/huggingface/evaluate) repository cc @lvwerra @sashavor " ]
2022-05-13T07:46:30Z
2022-06-03T08:50:25Z
2022-06-03T08:50:25Z
CONTRIBUTOR
null
null
null
null
**Is your feature request related to a problem? Please describe.** Search for a metric name like "seqeval" yields no results on https://huggingface.co/docs/datasets/master/en/index . One needs to go look in `datasets/metrics/README.md` to find the doc. Even in the `README.md`, it can be hard to understand what the metric expects as an input, for example for `squad` there is a [key `id`](https://github.com/huggingface/datasets/blob/1a4c185663a6958f48ec69624473fdc154a36a9d/metrics/squad/squad.py#L42) documented only in the function doc but not in the `README.md`, and one needs to go look into the code to understand what the metric expects. **Describe the solution you'd like** Have the documentation for metrics appear as well in the doc UI, e.g. this https://github.com/huggingface/datasets/blob/1a4c185663a6958f48ec69624473fdc154a36a9d/metrics/squad/squad.py#L21-L63 I know there are plans to migrate metrics to the evaluate library, but just pointing this out.
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova", "user_view_type": "public" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/4343/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/4343/timeline
null
completed
{ "completed": 0, "percent_completed": 0, "total": 0 }
{ "blocked_by": 0, "blocking": 0, "total_blocked_by": 0, "total_blocking": 0 }
false
https://api.github.com/repos/huggingface/datasets/issues/4341
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4341/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4341/comments
https://api.github.com/repos/huggingface/datasets/issues/4341/events
https://github.com/huggingface/datasets/issues/4341
1,234,739,703
I_kwDODunzps5JmKH3
4,341
Failing CI on Windows for sari and wiki_split metrics
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova", "user_view_type": "public" }
[ { "color": "d73a4a", "default": true, "description": "Something isn't working", "id": 1935892857, "name": "bug", "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug" } ]
closed
false
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova", "user_view_type": "public" }
[ { "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/o...
null
[]
2022-05-13T04:55:17Z
2022-05-13T05:47:41Z
2022-05-13T05:47:41Z
MEMBER
null
null
null
null
## Describe the bug Our CI has been failing since yesterday on Windows for the metrics sari and wiki_split: ``` FAILED tests/test_metric_common.py::LocalMetricTest::test_load_metric_sari - ... FAILED tests/test_metric_common.py::LocalMetricTest::test_load_metric_wiki_split ``` See: https://app.circleci.com/pipelines/github/huggingface/datasets/11928/workflows/79daa5e7-65c9-4e85-829b-00d2bfbd076a/jobs/71594
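A minimal local reproduction sketch, assuming `load_metric` is still the entry point for these metrics in this `datasets` version; the example sentences are made up but follow the documented `sources`/`predictions`/`references` layout that both metrics share:

```python
from datasets import load_metric

sources = ["About 95 species are currently accepted."]
predictions = ["About 95 you now get in."]
references = [["About 95 species are currently known.", "About 95 species are now accepted."]]

# Both metrics failing on the Windows CI take the same three text inputs.
for name in ("sari", "wiki_split"):
    metric = load_metric(name)
    print(name, metric.compute(sources=sources, predictions=predictions, references=references))
```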
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova", "user_view_type": "public" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/4341/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/4341/timeline
null
completed
{ "completed": 0, "percent_completed": 0, "total": 0 }
{ "blocked_by": 0, "blocking": 0, "total_blocked_by": 0, "total_blocking": 0 }
false
https://api.github.com/repos/huggingface/datasets/issues/4327
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4327/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4327/comments
https://api.github.com/repos/huggingface/datasets/issues/4327/events
https://github.com/huggingface/datasets/issues/4327
1,233,840,020
I_kwDODunzps5JiueU
4,327
`wikipedia` pre-processed datasets
{ "avatar_url": "https://avatars.githubusercontent.com/u/81152?v=4", "events_url": "https://api.github.com/users/vpj/events{/privacy}", "followers_url": "https://api.github.com/users/vpj/followers", "following_url": "https://api.github.com/users/vpj/following{/other_user}", "gists_url": "https://api.github.com/users/vpj/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/vpj", "id": 81152, "login": "vpj", "node_id": "MDQ6VXNlcjgxMTUy", "organizations_url": "https://api.github.com/users/vpj/orgs", "received_events_url": "https://api.github.com/users/vpj/received_events", "repos_url": "https://api.github.com/users/vpj/repos", "site_admin": false, "starred_url": "https://api.github.com/users/vpj/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/vpj/subscriptions", "type": "User", "url": "https://api.github.com/users/vpj", "user_view_type": "public" }
[ { "color": "d73a4a", "default": true, "description": "Something isn't working", "id": 1935892857, "name": "bug", "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug" } ]
closed
false
null
[]
null
[ "Hi @vpj, thanks for reporting.\r\n\r\nI'm sorry, but I can't reproduce your bug: I load \"20220301.simple\"in 9 seconds:\r\n```shell\r\ntime python -c \"from datasets import load_dataset; load_dataset('wikipedia', '20220301.simple')\"\r\n\r\nDownloading and preparing dataset wikipedia/20220301.simple (download: 22...
2022-05-12T11:25:42Z
2022-08-31T08:26:57Z
2022-08-31T08:26:57Z
NONE
null
null
null
null
## Describe the bug The [Wikipedia](https://huggingface.co/datasets/wikipedia) dataset readme says that certain subsets are preprocessed. However, it seems like they are not available. When I try to load them, it takes a really long time, and it seems like it's processing them. ## Steps to reproduce the bug ```python from datasets import load_dataset load_dataset("wikipedia", "20220301.en") ``` ## Expected results To load the dataset. ## Actual results It takes a very long time to load (after downloading). After `Downloading data files: 100%`, it takes hours and gets killed. Tried `wikipedia.simple` and it got processed after ~30 mins.
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova", "user_view_type": "public" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/4327/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/4327/timeline
null
completed
{ "completed": 0, "percent_completed": 0, "total": 0 }
{ "blocked_by": 0, "blocking": 0, "total_blocked_by": 0, "total_blocking": 0 }
false
https://api.github.com/repos/huggingface/datasets/issues/4325
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4325/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4325/comments
https://api.github.com/repos/huggingface/datasets/issues/4325/events
https://github.com/huggingface/datasets/issues/4325
1,233,812,191
I_kwDODunzps5Jinrf
4,325
Dataset Viewer issue for strombergnlp/offenseval_2020, strombergnlp/polstance
{ "avatar_url": "https://avatars.githubusercontent.com/u/121934?v=4", "events_url": "https://api.github.com/users/leondz/events{/privacy}", "followers_url": "https://api.github.com/users/leondz/followers", "following_url": "https://api.github.com/users/leondz/following{/other_user}", "gists_url": "https://api.github.com/users/leondz/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/leondz", "id": 121934, "login": "leondz", "node_id": "MDQ6VXNlcjEyMTkzNA==", "organizations_url": "https://api.github.com/users/leondz/orgs", "received_events_url": "https://api.github.com/users/leondz/received_events", "repos_url": "https://api.github.com/users/leondz/repos", "site_admin": false, "starred_url": "https://api.github.com/users/leondz/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/leondz/subscriptions", "type": "User", "url": "https://api.github.com/users/leondz", "user_view_type": "public" }
[ { "color": "E5583E", "default": false, "description": "Related to the dataset viewer on huggingface.co", "id": 3470211881, "name": "dataset-viewer", "node_id": "LA_kwDODunzps7O1zsp", "url": "https://api.github.com/repos/huggingface/datasets/labels/dataset-viewer" } ]
closed
false
{ "avatar_url": "https://avatars.githubusercontent.com/u/1676121?v=4", "events_url": "https://api.github.com/users/severo/events{/privacy}", "followers_url": "https://api.github.com/users/severo/followers", "following_url": "https://api.github.com/users/severo/following{/other_user}", "gists_url": "https://api.github.com/users/severo/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/severo", "id": 1676121, "login": "severo", "node_id": "MDQ6VXNlcjE2NzYxMjE=", "organizations_url": "https://api.github.com/users/severo/orgs", "received_events_url": "https://api.github.com/users/severo/received_events", "repos_url": "https://api.github.com/users/severo/repos", "site_admin": false, "starred_url": "https://api.github.com/users/severo/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/severo/subscriptions", "type": "User", "url": "https://api.github.com/users/severo", "user_view_type": "public" }
[ { "avatar_url": "https://avatars.githubusercontent.com/u/1676121?v=4", "events_url": "https://api.github.com/users/severo/events{/privacy}", "followers_url": "https://api.github.com/users/severo/followers", "following_url": "https://api.github.com/users/severo/following{/other_user}", "gists_url...
null
[ "Not sure if it's related... I was going to raise an issue for https://huggingface.co/datasets/domenicrosati/TruthfulQA which also has the same issue... https://huggingface.co/datasets/domenicrosati/TruthfulQA/viewer/domenicrosati--TruthfulQA/train \r\n\r\n", "Yes, it's related. The backend behind the dataset vie...
2022-05-12T10:59:08Z
2022-05-13T10:57:15Z
2022-05-13T10:57:02Z
CONTRIBUTOR
null
null
null
null
### Link https://huggingface.co/datasets/strombergnlp/offenseval_2020/viewer/ar/train ### Description The viewer isn't running for these two datasets. I left it overnight because a wait sometimes helps things get loaded, and the error messages have all gone, but the datasets are still turning up blank in the viewer. Maybe it needs a bit more time. * https://huggingface.co/datasets/strombergnlp/polstance/viewer/PolStance/train * https://huggingface.co/datasets/strombergnlp/offenseval_2020/viewer/ar/train While offenseval_2020 is gated with a prompt, my other gated previews run fine in the Viewer, e.g. https://huggingface.co/datasets/strombergnlp/shaj , so I'm a bit stumped! ### Owner Yes
{ "avatar_url": "https://avatars.githubusercontent.com/u/121934?v=4", "events_url": "https://api.github.com/users/leondz/events{/privacy}", "followers_url": "https://api.github.com/users/leondz/followers", "following_url": "https://api.github.com/users/leondz/following{/other_user}", "gists_url": "https://api.github.com/users/leondz/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/leondz", "id": 121934, "login": "leondz", "node_id": "MDQ6VXNlcjEyMTkzNA==", "organizations_url": "https://api.github.com/users/leondz/orgs", "received_events_url": "https://api.github.com/users/leondz/received_events", "repos_url": "https://api.github.com/users/leondz/repos", "site_admin": false, "starred_url": "https://api.github.com/users/leondz/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/leondz/subscriptions", "type": "User", "url": "https://api.github.com/users/leondz", "user_view_type": "public" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/4325/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/4325/timeline
null
completed
{ "completed": 0, "percent_completed": 0, "total": 0 }
{ "blocked_by": 0, "blocking": 0, "total_blocked_by": 0, "total_blocking": 0 }
false
https://api.github.com/repos/huggingface/datasets/issues/4324
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4324/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4324/comments
https://api.github.com/repos/huggingface/datasets/issues/4324/events
https://github.com/huggingface/datasets/issues/4324
1,233,780,870
I_kwDODunzps5JigCG
4,324
Support >1 PWC dataset per dataset card
{ "avatar_url": "https://avatars.githubusercontent.com/u/121934?v=4", "events_url": "https://api.github.com/users/leondz/events{/privacy}", "followers_url": "https://api.github.com/users/leondz/followers", "following_url": "https://api.github.com/users/leondz/following{/other_user}", "gists_url": "https://api.github.com/users/leondz/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/leondz", "id": 121934, "login": "leondz", "node_id": "MDQ6VXNlcjEyMTkzNA==", "organizations_url": "https://api.github.com/users/leondz/orgs", "received_events_url": "https://api.github.com/users/leondz/received_events", "repos_url": "https://api.github.com/users/leondz/repos", "site_admin": false, "starred_url": "https://api.github.com/users/leondz/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/leondz/subscriptions", "type": "User", "url": "https://api.github.com/users/leondz", "user_view_type": "public" }
[ { "color": "a2eeef", "default": true, "description": "New feature or request", "id": 1935892871, "name": "enhancement", "node_id": "MDU6TGFiZWwxOTM1ODkyODcx", "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement" } ]
open
false
null
[]
null
[ "Hi @leondz, I agree it would be nice. We'll see what we can do ;)" ]
2022-05-12T10:29:07Z
2022-05-13T11:25:29Z
null
CONTRIBUTOR
null
null
null
null
**Is your feature request related to a problem? Please describe.** Some datasets cover more than one dataset on PapersWithCode. For example, the OffensEval 2020 challenge involved five languages, and there's a single Hub dataset covering all five, [`strombergnlp/offenseval_2020`](https://huggingface.co/datasets/strombergnlp/offenseval_2020). However, the YAML `paperswithcode_id:` dataset card entry only supports one value; when multiple are added, the PWC link disappears from the dataset page. Because the link from a PapersWithCode dataset to a Hugging Face Hub entry can't be entered manually and seems to be scraped, this means end users don't have a way of getting a dataset reader link to appear on all the PWC datasets supported by one HF Hub dataset reader. It's not unusual for papers to introduce multiple parallel variants of a dataset, and it would be handy to reflect this, so that e.g. dataset maintainers can DRY, and dataset users can keep what they're doing simple. **Describe the solution you'd like** I'd like `paperswithcode_id:` to support lists and be able to connect with multiple PWC datasets. **Describe alternatives you've considered** De-normalising the datasets on HF Hub to create multiple readers for each variation on a task, i.e. instead of a single `offenseval_2020`, having `offenseval_2020_ar`, `offenseval_2020_da`, `offenseval_2020_gr`, ... **Additional context** Hope that's enough. **Priority** Low
null
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/4324/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/4324/timeline
null
null
{ "completed": 0, "percent_completed": 0, "total": 0 }
{ "blocked_by": 0, "blocking": 0, "total_blocked_by": 0, "total_blocking": 0 }
false
https://api.github.com/repos/huggingface/datasets/issues/4323
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4323/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4323/comments
https://api.github.com/repos/huggingface/datasets/issues/4323/events
https://github.com/huggingface/datasets/issues/4323
1,233,634,928
I_kwDODunzps5Jh8Zw
4,323
Audio can not find value["bytes"]
{ "avatar_url": "https://avatars.githubusercontent.com/u/34292279?v=4", "events_url": "https://api.github.com/users/YooSungHyun/events{/privacy}", "followers_url": "https://api.github.com/users/YooSungHyun/followers", "following_url": "https://api.github.com/users/YooSungHyun/following{/other_user}", "gists_url": "https://api.github.com/users/YooSungHyun/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/YooSungHyun", "id": 34292279, "login": "YooSungHyun", "node_id": "MDQ6VXNlcjM0MjkyMjc5", "organizations_url": "https://api.github.com/users/YooSungHyun/orgs", "received_events_url": "https://api.github.com/users/YooSungHyun/received_events", "repos_url": "https://api.github.com/users/YooSungHyun/repos", "site_admin": false, "starred_url": "https://api.github.com/users/YooSungHyun/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/YooSungHyun/subscriptions", "type": "User", "url": "https://api.github.com/users/YooSungHyun", "user_view_type": "public" }
[ { "color": "d73a4a", "default": true, "description": "Something isn't working", "id": 1935892857, "name": "bug", "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug" } ]
closed
false
null
[]
null
[ "![image](https://user-images.githubusercontent.com/34292279/168063684-fff5c12a-8b1e-4c65-b18b-36100ab8a1af.png)\r\n\r\nthat is reason my bytes`s empty\r\nbut i have some confused why path prior is higher than bytes?\r\n\r\nif you can make bytes in _generate_examples , you don`t have to make bytes to path?\r\nbecau...
2022-05-12T08:31:58Z
2022-07-07T13:16:08Z
2022-07-07T13:16:08Z
CONTRIBUTOR
null
null
null
null
## Describe the bug I wrote down _generate_examples like: ![image](https://user-images.githubusercontent.com/34292279/168027186-2fe8b255-2cd8-4b9b-ab1e-8d5a7182979b.png) but where is the bytes? ![image](https://user-images.githubusercontent.com/34292279/168027330-f2496dd0-1d99-464c-b15c-bc57eee0415a.png) ## Expected results value["bytes"] is not None, so i can make datasets with bytes, not path ## bytes looks like: blah blah~~ \xfe\x03\x00\xfb\x06\x1c\x0bo\x074\x03\xaf\x01\x13\x04\xbc\x06\x8c\x05y\x05,\t7\x08\xaf\x03\xc0\xfe\xe8\xfc\x94\xfe\xb7\xfd\xea\xfa\xd5\xf9$\xf9>\xf9\x1f\xf8\r\xf5F\xf49\xf4\xda\xf5-\xf8\n\xf8k\xf8\x07\xfb\x18\xfd\xd9\xfdv\xfd"\xfe\xcc\x01\x1c\x04\x08\x04@\x04{\x06^\tf\t\x1e\x07\x8b\x06\x02\x08\x13\t\x07\x08 \x06g\x06"\x06\xa0\x03\xc6\x002\xff \xff\x1d\xff\x19\xfd?\xfb\xdb\xfa\xfc\xfa$\xfb}\xf9\xe5\xf7\xf9\xf7\xce\xf8.\xf9b\xf9\xc5\xf9\xc0\xfb\xfa\xfcP\xfc\xba\xfbQ\xfc1\xfe\x9f\xff\x12\x00\xa2\x00\x18\x02Z\x03\x02\x04\xb1\x03\xc5\x03W\x04\x82\x04\x8f\x04U\x04\xb6\x04\x10\x05{\x04\x83\x02\x17\x01\x1d\x00\xa0\xff\xec\xfe\x03\xfe#\xfe\xc2\xfe2\xff\xe6\xfe\x9a\xfe~\x01\x91\x08\xb3\tU\x05\x10\x024\x02\xe4\x05\xa8\x07\xa7\x053\x07I\n\x91\x07v\x02\x95\xfd\xbb\xfd\x96\xff\x01\xfe\x1e\xfb\xbb\xf9S\xf8!\xf8\xf4\xf5\xd6\xf3\xf7\xf3l\xf4d\xf6l\xf7d\xf6b\xf7\xc1\xfa(\xfd\xcf\xfd*\xfdq\xfe\xe9\x01\xa8\x03t\x03\x17\x04B\x07\xce\t\t\t\xeb\x06\x0c\x07\x95\x08\x92\t\xbc\x07O\x06\xfb\x06\xd2\x06U\x04\x00\x02\x92\x00\xdc\x00\x84\x00 \xfeT\xfc\xf1\xfb\x82\xfc\x97\xfb}\xf9\x00\xf8_\xf8\x0b\xf9\xe5\xf8\xe2\xf7\xaa\xf8\xb2\xfa\x10\xfbl\xfa\xf5\xf9Y\xfb\xc0\xfd\xe8\xfe\xec\xfe1\x00\xad\x01\xec\x02E\x03\x13\x03\x9b\x03o\x04\xce\x04\xa8\x04\xb2\x04\x1b\x05\xc0\x05\xd2\x04\xe8\x02z\x01\xbe\x00\xae\x00\x07\x00$\xff|\xff\x8e\x00\x13\x00\x10\xff\x98\xff0\x05{\x0b\x05\t\xaa\x03\x82\x01n\x03 blah blah~~ that function not return None ## Environment info <!-- You can run the command `datasets-cli env` and copy-and-paste its output below. --> - `datasets` version:2.2.1 - Platform:ubuntu 18.04 - Python version:3.6.9 - PyArrow version:6.0.1
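A minimal sketch of how `bytes` can be populated explicitly in a loader, assuming a standard `_generate_examples`-style generator (the file paths and the `audio` column name here are hypothetical, not taken from the reporter's script):

```python
import datasets

# The Audio feature accepts a dict with "bytes" and "path"; yielding the raw bytes
# and leaving "path" as None avoids any fallback to the path during encoding.
features = datasets.Features({"audio": datasets.Audio(sampling_rate=16_000)})

def _generate_examples(filepaths):
    for idx, path in enumerate(filepaths):
        with open(path, "rb") as f:
            audio_bytes = f.read()
        yield idx, {"audio": {"bytes": audio_bytes, "path": None}}
```

Whether `path` should take priority over `bytes` when both are set is exactly what the comments on this issue discuss.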
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq", "user_view_type": "public" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/4323/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/4323/timeline
null
completed
{ "completed": 0, "percent_completed": 0, "total": 0 }
{ "blocked_by": 0, "blocking": 0, "total_blocked_by": 0, "total_blocking": 0 }
false
https://api.github.com/repos/huggingface/datasets/issues/4320
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4320/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4320/comments
https://api.github.com/repos/huggingface/datasets/issues/4320/events
https://github.com/huggingface/datasets/issues/4320
1,233,208,864
I_kwDODunzps5JgUYg
4,320
Multi-news dataset loader attempts to strip wrong character from beginning of summaries
{ "avatar_url": "https://avatars.githubusercontent.com/u/8917831?v=4", "events_url": "https://api.github.com/users/JohnGiorgi/events{/privacy}", "followers_url": "https://api.github.com/users/JohnGiorgi/followers", "following_url": "https://api.github.com/users/JohnGiorgi/following{/other_user}", "gists_url": "https://api.github.com/users/JohnGiorgi/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/JohnGiorgi", "id": 8917831, "login": "JohnGiorgi", "node_id": "MDQ6VXNlcjg5MTc4MzE=", "organizations_url": "https://api.github.com/users/JohnGiorgi/orgs", "received_events_url": "https://api.github.com/users/JohnGiorgi/received_events", "repos_url": "https://api.github.com/users/JohnGiorgi/repos", "site_admin": false, "starred_url": "https://api.github.com/users/JohnGiorgi/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/JohnGiorgi/subscriptions", "type": "User", "url": "https://api.github.com/users/JohnGiorgi", "user_view_type": "public" }
[ { "color": "d73a4a", "default": true, "description": "Something isn't working", "id": 1935892857, "name": "bug", "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug" } ]
closed
false
null
[]
null
[ "Hi ! Thanks for reporting :)\r\n\r\nThis dataset was simply converted from [tensorflow datasets](https://github.com/tensorflow/datasets/blob/master/tensorflow_datasets/summarization/multi_news.py)\r\n\r\nI think we can just remove the `.strip(\"- \")` and keep this character", "Cool! I made a PR." ]
2022-05-11T21:36:41Z
2022-05-16T13:52:10Z
2022-05-16T13:52:10Z
CONTRIBUTOR
null
null
null
null
## Describe the bug The `multi_news.py` data loader has [a line which attempts to strip `"- "` from the beginning of summaries](https://github.com/huggingface/datasets/blob/aa743886221d76afb409d263e1b136e7a71fe2b4/datasets/multi_news/multi_news.py#L97). The actual character in the multi-news dataset, however, is `"– "`, which is different, e.g. `"– " != "- "`. I would have just opened a PR to fix the mistake, but I am wondering what the motivation for stripping this character is? AFAICT most approaches just leave it in, e.g. the current SOTA on this dataset, [PRIMERA](https://huggingface.co/allenai/PRIMERA-multinews) (you can see its in the generated summaries of the model in their [example notebook](https://github.com/allenai/PRIMER/blob/main/Evaluation_Example.ipynb)). ## Environment info <!-- You can run the command `datasets-cli env` and copy-and-paste its output below. --> - `datasets` version: 2.2.0 - Platform: Linux-5.4.188+-x86_64-with-Ubuntu-18.04-bionic - Python version: 3.7.13 - PyArrow version: 6.0.1 - Pandas version: 1.3.5
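A quick illustration of the character mismatch described above (plain Python, with a made-up summary string, just to make the difference visible):

```python
summary = "– A multi-document summary from Multi-News."

# "–" (en dash, U+2013) is not the same character as "-" (hyphen-minus, U+002D),
# so stripping "- " leaves the leading dash untouched.
print("– " == "- ")          # False
print(summary.strip("- "))   # still starts with "–"
print(summary.lstrip("– "))  # dash removed, if stripping were actually the goal
```

As the maintainers note in the comments, the eventual fix was simply to drop the `.strip("- ")` call and keep the character.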
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq", "user_view_type": "public" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/4320/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/4320/timeline
null
completed
{ "completed": 0, "percent_completed": 0, "total": 0 }
{ "blocked_by": 0, "blocking": 0, "total_blocked_by": 0, "total_blocking": 0 }
false
https://api.github.com/repos/huggingface/datasets/issues/4310
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4310/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4310/comments
https://api.github.com/repos/huggingface/datasets/issues/4310/events
https://github.com/huggingface/datasets/issues/4310
1,231,319,815
I_kwDODunzps5JZHMH
4,310
Loading dataset with streaming: '_io.BufferedReader' object has no attribute 'loc'
{ "avatar_url": "https://avatars.githubusercontent.com/u/72745467?v=4", "events_url": "https://api.github.com/users/milmin/events{/privacy}", "followers_url": "https://api.github.com/users/milmin/followers", "following_url": "https://api.github.com/users/milmin/following{/other_user}", "gists_url": "https://api.github.com/users/milmin/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/milmin", "id": 72745467, "login": "milmin", "node_id": "MDQ6VXNlcjcyNzQ1NDY3", "organizations_url": "https://api.github.com/users/milmin/orgs", "received_events_url": "https://api.github.com/users/milmin/received_events", "repos_url": "https://api.github.com/users/milmin/repos", "site_admin": false, "starred_url": "https://api.github.com/users/milmin/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/milmin/subscriptions", "type": "User", "url": "https://api.github.com/users/milmin", "user_view_type": "public" }
[ { "color": "d73a4a", "default": true, "description": "Something isn't working", "id": 1935892857, "name": "bug", "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug" } ]
closed
false
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq", "user_view_type": "public" }
[ { "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists...
null
[]
2022-05-10T15:12:53Z
2022-05-11T16:46:31Z
2022-05-11T16:46:31Z
NONE
null
null
null
null
## Describe the bug Loading a datasets with `load_dataset` and `streaming=True` returns `AttributeError: '_io.BufferedReader' object has no attribute 'loc'`. Notice that loading with `streaming=False` works fine. In the following steps we load parquet files but the same happens with pickle files. The problem seems to come from `fsspec` lib, I put in the environment info also `s3fs` and `fsspec` versions since I'm loading from an s3 bucket. ## Steps to reproduce the bug ```python from datasets import load_dataset # path is the path to parquet files data_files = {"train": path + "meta_train.parquet.gzip", "test": path + "meta_test.parquet.gzip"} dataset = load_dataset("parquet", data_files=data_files, streaming=True) ``` ## Expected results A dataset object `datasets.dataset_dict.DatasetDict` ## Actual results ``` AttributeError Traceback (most recent call last) <command-562086> in <module> 11 12 data_files = {"train": path + "meta_train.parquet.gzip", "test": path + "meta_test.parquet.gzip"} ---> 13 dataset = load_dataset("parquet", data_files=data_files, streaming=True) /local_disk0/.ephemeral_nfs/envs/pythonEnv-a7e72260-221c-472b-85f4-bec801aee66d/lib/python3.8/site-packages/datasets/load.py in load_dataset(path, name, data_dir, data_files, split, cache_dir, features, download_config, download_mode, ignore_verifications, keep_in_memory, save_infos, revision, use_auth_token, task, streaming, **config_kwargs) 1679 if streaming: 1680 extend_dataset_builder_for_streaming(builder_instance, use_auth_token=use_auth_token) -> 1681 return builder_instance.as_streaming_dataset( 1682 split=split, 1683 use_auth_token=use_auth_token, /local_disk0/.ephemeral_nfs/envs/pythonEnv-a7e72260-221c-472b-85f4-bec801aee66d/lib/python3.8/site-packages/datasets/builder.py in as_streaming_dataset(self, split, base_path, use_auth_token) 904 ) 905 self._check_manual_download(dl_manager) --> 906 splits_generators = {sg.name: sg for sg in self._split_generators(dl_manager)} 907 # By default, return all splits 908 if split is None: /local_disk0/.ephemeral_nfs/envs/pythonEnv-a7e72260-221c-472b-85f4-bec801aee66d/lib/python3.8/site-packages/datasets/packaged_modules/parquet/parquet.py in _split_generators(self, dl_manager) 30 if not self.config.data_files: 31 raise ValueError(f"At least one data file must be specified, but got data_files={self.config.data_files}") ---> 32 data_files = dl_manager.download_and_extract(self.config.data_files) 33 if isinstance(data_files, (str, list, tuple)): 34 files = data_files /local_disk0/.ephemeral_nfs/envs/pythonEnv-a7e72260-221c-472b-85f4-bec801aee66d/lib/python3.8/site-packages/datasets/utils/streaming_download_manager.py in download_and_extract(self, url_or_urls) 798 799 def download_and_extract(self, url_or_urls): --> 800 return self.extract(self.download(url_or_urls)) 801 802 def iter_archive(self, urlpath_or_buf: Union[str, io.BufferedReader]) -> Iterable[Tuple]: /local_disk0/.ephemeral_nfs/envs/pythonEnv-a7e72260-221c-472b-85f4-bec801aee66d/lib/python3.8/site-packages/datasets/utils/streaming_download_manager.py in extract(self, path_or_paths) 776 777 def extract(self, path_or_paths): --> 778 urlpaths = map_nested(self._extract, path_or_paths, map_tuple=True) 779 return urlpaths 780 /local_disk0/.ephemeral_nfs/envs/pythonEnv-a7e72260-221c-472b-85f4-bec801aee66d/lib/python3.8/site-packages/datasets/utils/py_utils.py in map_nested(function, data_struct, dict_only, map_list, map_tuple, map_numpy, num_proc, types, disable_tqdm, desc) 312 num_proc = 1 313 if num_proc <= 1 or 
len(iterable) <= num_proc: --> 314 mapped = [ 315 _single_map_nested((function, obj, types, None, True, None)) 316 for obj in logging.tqdm(iterable, disable=disable_tqdm, desc=desc) /local_disk0/.ephemeral_nfs/envs/pythonEnv-a7e72260-221c-472b-85f4-bec801aee66d/lib/python3.8/site-packages/datasets/utils/py_utils.py in <listcomp>(.0) 313 if num_proc <= 1 or len(iterable) <= num_proc: 314 mapped = [ --> 315 _single_map_nested((function, obj, types, None, True, None)) 316 for obj in logging.tqdm(iterable, disable=disable_tqdm, desc=desc) 317 ] /local_disk0/.ephemeral_nfs/envs/pythonEnv-a7e72260-221c-472b-85f4-bec801aee66d/lib/python3.8/site-packages/datasets/utils/py_utils.py in _single_map_nested(args) 267 return {k: _single_map_nested((function, v, types, None, True, None)) for k, v in pbar} 268 else: --> 269 mapped = [_single_map_nested((function, v, types, None, True, None)) for v in pbar] 270 if isinstance(data_struct, list): 271 return mapped /local_disk0/.ephemeral_nfs/envs/pythonEnv-a7e72260-221c-472b-85f4-bec801aee66d/lib/python3.8/site-packages/datasets/utils/py_utils.py in <listcomp>(.0) 267 return {k: _single_map_nested((function, v, types, None, True, None)) for k, v in pbar} 268 else: --> 269 mapped = [_single_map_nested((function, v, types, None, True, None)) for v in pbar] 270 if isinstance(data_struct, list): 271 return mapped /local_disk0/.ephemeral_nfs/envs/pythonEnv-a7e72260-221c-472b-85f4-bec801aee66d/lib/python3.8/site-packages/datasets/utils/py_utils.py in _single_map_nested(args) 249 # Singleton first to spare some computation 250 if not isinstance(data_struct, dict) and not isinstance(data_struct, types): --> 251 return function(data_struct) 252 253 # Reduce logging to keep things readable in multiprocessing with tqdm /local_disk0/.ephemeral_nfs/envs/pythonEnv-a7e72260-221c-472b-85f4-bec801aee66d/lib/python3.8/site-packages/datasets/utils/streaming_download_manager.py in _extract(self, urlpath) 781 def _extract(self, urlpath: str) -> str: 782 urlpath = str(urlpath) --> 783 protocol = _get_extraction_protocol(urlpath, use_auth_token=self.download_config.use_auth_token) 784 if protocol is None: 785 # no extraction /local_disk0/.ephemeral_nfs/envs/pythonEnv-a7e72260-221c-472b-85f4-bec801aee66d/lib/python3.8/site-packages/datasets/utils/streaming_download_manager.py in _get_extraction_protocol(urlpath, use_auth_token) 371 urlpath, kwargs = urlpath, {} 372 with fsspec.open(urlpath, **kwargs) as f: --> 373 return _get_extraction_protocol_with_magic_number(f) 374 375 /local_disk0/.ephemeral_nfs/envs/pythonEnv-a7e72260-221c-472b-85f4-bec801aee66d/lib/python3.8/site-packages/datasets/utils/streaming_download_manager.py in _get_extraction_protocol_with_magic_number(f) 335 def _get_extraction_protocol_with_magic_number(f) -> Optional[str]: 336 """read the magic number from a file-like object and return the compression protocol""" --> 337 prev_loc = f.loc 338 magic_number = f.read(MAGIC_NUMBER_MAX_LENGTH) 339 f.seek(prev_loc) /local_disk0/.ephemeral_nfs/envs/pythonEnv-a7e72260-221c-472b-85f4-bec801aee66d/lib/python3.8/site-packages/fsspec/implementations/local.py in __getattr__(self, item) 337 338 def __getattr__(self, item): --> 339 return getattr(self.f, item) 340 341 def __enter__(self): AttributeError: '_io.BufferedReader' object has no attribute 'loc' ``` ## Environment info - `datasets` version: 2.1.0 - Platform: Linux-5.4.0-1071-aws-x86_64-with-glibc2.29 - Python version: 3.8.10 - PyArrow version: 8.0.0 - Pandas version: 1.4.2 - `fsspec` version: 2021.08.1 - `s3fs` 
version: 2021.08.1
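For reference, a rough sketch of the kind of position handling that avoids the missing `loc` attribute. This is a hypothetical variant of the failing helper, not the upstream patch; it assumes only the standard binary file-object API (`tell`/`seek`), which `_io.BufferedReader` does provide:

```python
MAGIC_NUMBER_MAX_LENGTH = 8  # assumed constant, mirroring the traceback above

def read_magic_number(f):
    # f is any file-like object opened in binary mode; _io.BufferedReader has
    # tell()/seek() but no `loc`, which is what triggers the AttributeError.
    prev_pos = f.tell()
    magic_number = f.read(MAGIC_NUMBER_MAX_LENGTH)
    f.seek(prev_pos)
    return magic_number  # a caller would map these bytes to a compression protocol
```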
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq", "user_view_type": "public" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/4310/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/4310/timeline
null
completed
{ "completed": 0, "percent_completed": 0, "total": 0 }
{ "blocked_by": 0, "blocking": 0, "total_blocked_by": 0, "total_blocking": 0 }
false
https://api.github.com/repos/huggingface/datasets/issues/4306
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4306/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4306/comments
https://api.github.com/repos/huggingface/datasets/issues/4306/events
https://github.com/huggingface/datasets/issues/4306
1,231,137,204
I_kwDODunzps5JYam0
4,306
`load_dataset` does not work with a certain filename.
{ "avatar_url": "https://avatars.githubusercontent.com/u/57242693?v=4", "events_url": "https://api.github.com/users/whatever60/events{/privacy}", "followers_url": "https://api.github.com/users/whatever60/followers", "following_url": "https://api.github.com/users/whatever60/following{/other_user}", "gists_url": "https://api.github.com/users/whatever60/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/whatever60", "id": 57242693, "login": "whatever60", "node_id": "MDQ6VXNlcjU3MjQyNjkz", "organizations_url": "https://api.github.com/users/whatever60/orgs", "received_events_url": "https://api.github.com/users/whatever60/received_events", "repos_url": "https://api.github.com/users/whatever60/repos", "site_admin": false, "starred_url": "https://api.github.com/users/whatever60/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/whatever60/subscriptions", "type": "User", "url": "https://api.github.com/users/whatever60", "user_view_type": "public" }
[ { "color": "d73a4a", "default": true, "description": "Something isn't working", "id": 1935892857, "name": "bug", "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug" } ]
closed
false
null
[]
null
[ "Never mind. It is because of the caching of datasets..." ]
2022-05-10T13:14:04Z
2022-05-10T18:58:36Z
2022-05-10T18:58:09Z
NONE
null
null
null
null
## Describe the bug This is a weird bug that took me some time to find out. I have a JSON dataset that I want to load with `load_dataset` like this: ``` data_files = dict(train="train.json.zip", val="val.json.zip") dataset = load_dataset("json", data_files=data_files, field="data") ``` ## Expected results No error. ## Actual results The val file is loaded as expected, but the train file throws JSON decoding error: ``` ╭──────────────────────────── Traceback (most recent call last) ────────────────────────────╮ │ <ipython-input-74-97947e92c100>:5 in <module> │ │ │ │ /home/tiankang/software/anaconda3/lib/python3.8/site-packages/datasets/load.py:1687 in │ │ load_dataset │ │ │ │ 1684 │ try_from_hf_gcs = path not in _PACKAGED_DATASETS_MODULES │ │ 1685 │ │ │ 1686 │ # Download and prepare data │ │ ❱ 1687 │ builder_instance.download_and_prepare( │ │ 1688 │ │ download_config=download_config, │ │ 1689 │ │ download_mode=download_mode, │ │ 1690 │ │ ignore_verifications=ignore_verifications, │ │ │ │ /home/tiankang/software/anaconda3/lib/python3.8/site-packages/datasets/builder.py:605 in │ │ download_and_prepare │ │ │ │ 602 │ │ │ │ │ │ except ConnectionError: │ │ 603 │ │ │ │ │ │ │ logger.warning("HF google storage unreachable. Downloa │ │ 604 │ │ │ │ │ if not downloaded_from_gcs: │ │ ❱ 605 │ │ │ │ │ │ self._download_and_prepare( │ │ 606 │ │ │ │ │ │ │ dl_manager=dl_manager, verify_infos=verify_infos, **do │ │ 607 │ │ │ │ │ │ ) │ │ 608 │ │ │ │ │ # Sync info │ │ │ │ /home/tiankang/software/anaconda3/lib/python3.8/site-packages/datasets/builder.py:694 in │ │ _download_and_prepare │ │ │ │ 691 │ │ │ │ │ 692 │ │ │ try: │ │ 693 │ │ │ │ # Prepare split will record examples associated to the split │ │ ❱ 694 │ │ │ │ self._prepare_split(split_generator, **prepare_split_kwargs) │ │ 695 │ │ │ except OSError as e: │ │ 696 │ │ │ │ raise OSError( │ │ 697 │ │ │ │ │ "Cannot find data file. " │ │ │ │ /home/tiankang/software/anaconda3/lib/python3.8/site-packages/datasets/builder.py:1151 in │ │ _prepare_split │ │ │ │ 1148 │ │ │ │ 1149 │ │ generator = self._generate_tables(**split_generator.gen_kwargs) │ │ 1150 │ │ with ArrowWriter(features=self.info.features, path=fpath) as writer: │ │ ❱ 1151 │ │ │ for key, table in logging.tqdm( │ │ 1152 │ │ │ │ generator, unit=" tables", leave=False, disable=True # not loggin │ │ 1153 │ │ │ ): │ │ 1154 │ │ │ │ writer.write_table(table) │ │ │ │ /home/tiankang/software/anaconda3/lib/python3.8/site-packages/tqdm/notebook.py:257 in │ │ __iter__ │ │ │ │ 254 │ │ │ 255 │ def __iter__(self): │ │ 256 │ │ try: │ │ ❱ 257 │ │ │ for obj in super(tqdm_notebook, self).__iter__(): │ │ 258 │ │ │ │ # return super(tqdm...) will not catch exception │ │ 259 │ │ │ │ yield obj │ │ 260 │ │ # NB: except ... [ as ...] 
breaks IPython async KeyboardInterrupt │ │ │ │ /home/tiankang/software/anaconda3/lib/python3.8/site-packages/tqdm/std.py:1183 in │ │ __iter__ │ │ │ │ 1180 │ │ # If the bar is disabled, then just walk the iterable │ │ 1181 │ │ # (note: keep this check outside the loop for performance) │ │ 1182 │ │ if self.disable: │ │ ❱ 1183 │ │ │ for obj in iterable: │ │ 1184 │ │ │ │ yield obj │ │ 1185 │ │ │ return │ │ │ │ /home/tiankang/software/anaconda3/lib/python3.8/site-packages/datasets/packaged_modules/j │ │ son/json.py:90 in _generate_tables │ │ │ │ 87 │ │ │ # If the file is one json object and if we need to look at the list of │ │ 88 │ │ │ if self.config.field is not None: │ │ 89 │ │ │ │ with open(file, encoding="utf-8") as f: │ │ ❱ 90 │ │ │ │ │ dataset = json.load(f) │ │ 91 │ │ │ │ │ │ 92 │ │ │ │ # We keep only the field we are interested in │ │ 93 │ │ │ │ dataset = dataset[self.config.field] │ │ │ │ /home/tiankang/software/anaconda3/lib/python3.8/json/__init__.py:293 in load │ │ │ │ 290 │ To use a custom ``JSONDecoder`` subclass, specify it with the ``cls`` │ │ 291 │ kwarg; otherwise ``JSONDecoder`` is used. │ │ 292 │ """ │ │ ❱ 293 │ return loads(fp.read(), │ │ 294 │ │ cls=cls, object_hook=object_hook, │ │ 295 │ │ parse_float=parse_float, parse_int=parse_int, │ │ 296 │ │ parse_constant=parse_constant, object_pairs_hook=object_pairs_hook, **kw) │ │ │ │ /home/tiankang/software/anaconda3/lib/python3.8/json/__init__.py:357 in loads │ │ │ │ 354 │ if (cls is None and object_hook is None and │ │ 355 │ │ │ parse_int is None and parse_float is None and │ │ 356 │ │ │ parse_constant is None and object_pairs_hook is None and not kw): │ │ ❱ 357 │ │ return _default_decoder.decode(s) │ │ 358 │ if cls is None: │ │ 359 │ │ cls = JSONDecoder │ │ 360 │ if object_hook is not None: │ │ │ │ /home/tiankang/software/anaconda3/lib/python3.8/json/decoder.py:337 in decode │ │ │ │ 334 │ │ containing a JSON document). │ │ 335 │ │ │ │ 336 │ │ """ │ │ ❱ 337 │ │ obj, end = self.raw_decode(s, idx=_w(s, 0).end()) │ │ 338 │ │ end = _w(s, end).end() │ │ 339 │ │ if end != len(s): │ │ 340 │ │ │ raise JSONDecodeError("Extra data", s, end) │ │ │ │ /home/tiankang/software/anaconda3/lib/python3.8/json/decoder.py:353 in raw_decode │ │ │ │ 350 │ │ │ │ 351 │ │ """ │ │ 352 │ │ try: │ │ ❱ 353 │ │ │ obj, end = self.scan_once(s, idx) │ │ 354 │ │ except StopIteration as err: │ │ 355 │ │ │ raise JSONDecodeError("Expecting value", s, err.value) from None │ │ 356 │ │ return obj, end │ ╰───────────────────────────────────────────────────────────────────────────────────────────╯ JSONDecodeError: Unterminated string starting at: line 85 column 20 (char 60051) ``` However, when I rename the `train.json.zip` to other names (like `training.json.zip`, or even to `train.json`), everything works fine; when I unzip the file to `train.json`, it works as well. ## Environment info ``` - `datasets` version: 2.1.0 - Platform: Linux-4.4.0-131-generic-x86_64-with-glibc2.10 - Python version: 3.8.5 - PyArrow version: 7.0.0 - Pandas version: 1.4.2 ```
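Given the reporter's follow-up that caching was the real culprit, here is a minimal sketch of one way to rule out a stale cache entry (the `download_mode` value is a standard `load_dataset` argument; the file names are the reporter's):

```python
from datasets import load_dataset

data_files = dict(train="train.json.zip", val="val.json.zip")

# Forcing a re-download/re-preparation bypasses whatever was previously cached
# under the same name, instead of reusing a possibly corrupted copy.
dataset = load_dataset(
    "json",
    data_files=data_files,
    field="data",
    download_mode="force_redownload",
)
```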
{ "avatar_url": "https://avatars.githubusercontent.com/u/57242693?v=4", "events_url": "https://api.github.com/users/whatever60/events{/privacy}", "followers_url": "https://api.github.com/users/whatever60/followers", "following_url": "https://api.github.com/users/whatever60/following{/other_user}", "gists_url": "https://api.github.com/users/whatever60/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/whatever60", "id": 57242693, "login": "whatever60", "node_id": "MDQ6VXNlcjU3MjQyNjkz", "organizations_url": "https://api.github.com/users/whatever60/orgs", "received_events_url": "https://api.github.com/users/whatever60/received_events", "repos_url": "https://api.github.com/users/whatever60/repos", "site_admin": false, "starred_url": "https://api.github.com/users/whatever60/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/whatever60/subscriptions", "type": "User", "url": "https://api.github.com/users/whatever60", "user_view_type": "public" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/4306/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/4306/timeline
null
completed
{ "completed": 0, "percent_completed": 0, "total": 0 }
{ "blocked_by": 0, "blocking": 0, "total_blocked_by": 0, "total_blocking": 0 }
false
https://api.github.com/repos/huggingface/datasets/issues/4304
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4304/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4304/comments
https://api.github.com/repos/huggingface/datasets/issues/4304/events
https://github.com/huggingface/datasets/issues/4304
1,231,047,051
I_kwDODunzps5JYEmL
4,304
Language code search does direct matches
{ "avatar_url": "https://avatars.githubusercontent.com/u/121934?v=4", "events_url": "https://api.github.com/users/leondz/events{/privacy}", "followers_url": "https://api.github.com/users/leondz/followers", "following_url": "https://api.github.com/users/leondz/following{/other_user}", "gists_url": "https://api.github.com/users/leondz/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/leondz", "id": 121934, "login": "leondz", "node_id": "MDQ6VXNlcjEyMTkzNA==", "organizations_url": "https://api.github.com/users/leondz/orgs", "received_events_url": "https://api.github.com/users/leondz/received_events", "repos_url": "https://api.github.com/users/leondz/repos", "site_admin": false, "starred_url": "https://api.github.com/users/leondz/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/leondz/subscriptions", "type": "User", "url": "https://api.github.com/users/leondz", "user_view_type": "public" }
[ { "color": "d73a4a", "default": true, "description": "Something isn't working", "id": 1935892857, "name": "bug", "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug" } ]
open
false
null
[]
null
[ "Thanks for reporting ! I forwarded the issue to the front-end team :)\r\n\r\nWill keep you posted !\r\n\r\nI also changed the tagging app to suggest two letters code for now." ]
2022-05-10T11:59:16Z
2022-05-10T12:38:42Z
null
CONTRIBUTOR
null
null
null
null
## Describe the bug Hi. Searching for bcp47 tags that are just the language prefix (e.g. `sq` or `da`) excludes datasets that have added extra information in their language metadata (e.g. `sq-AL` or `da-bornholm`). The example codes given in the [tagging app](https://huggingface.co/spaces/huggingface/datasets-tagging) encourages addition of the additional codes ("_expected format is BCP47 tags separated for ';' e.g. 'en-US;fr-FR'_") but this would lead to those datasets being hidden in datasets search. ## Steps to reproduce the bug 1. Add a dataset using a variant tag (e.g. [`sq-AL`](https://huggingface.co/datasets?languages=languages:sq-AL)) 2. Look for datasets using the full code 3. Note that they're missing when just the language is searched for (e.g. [`sq`](https://huggingface.co/datasets?languages=languages:sq)) Some datasets are already affected by this - e.g. `AmazonScience/massive` is listed under `sq-AL` but not `sq`. One workaround is for dataset creators to add an additional root language tag to dataset YAML metadata, but it's unclear how to communicate this. It might be possible to index the search on `languagecode.split('-')[0]` but I wanted to float this issue before trying to write any code :) ## Expected results Datasets using longer bcp47 tags also appear under searches for just the language code; e.g. Quebecois datasets (`fr-CA`) would come up when looking for French datasets with no region specification (`fr`), or US English (`en-US`) datasets would come up when searching for English datasets (`en`). ## Actual results The language codes seem to be directly string matched, excluding datasets with specific language tags from non-specific searches. ## Environment info (web app)
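A tiny sketch of the prefix-matching rule floated above, assuming plain string handling is enough (no BCP-47 parsing library):

```python
def matches_language(query: str, dataset_tag: str) -> bool:
    # "sq" should match both "sq" and "sq-AL"; a specific tag still only matches itself.
    return dataset_tag == query or dataset_tag.split("-")[0] == query

print(matches_language("sq", "sq-AL"))        # True
print(matches_language("da", "da-bornholm"))  # True
print(matches_language("sq-AL", "sq"))        # False
```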
null
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/4304/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/4304/timeline
null
null
{ "completed": 0, "percent_completed": 0, "total": 0 }
{ "blocked_by": 0, "blocking": 0, "total_blocked_by": 0, "total_blocking": 0 }
false
https://api.github.com/repos/huggingface/datasets/issues/4298
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4298/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4298/comments
https://api.github.com/repos/huggingface/datasets/issues/4298/events
https://github.com/huggingface/datasets/issues/4298
1,229,748,006
I_kwDODunzps5JTHcm
4,298
Normalise license names
{ "avatar_url": "https://avatars.githubusercontent.com/u/121934?v=4", "events_url": "https://api.github.com/users/leondz/events{/privacy}", "followers_url": "https://api.github.com/users/leondz/followers", "following_url": "https://api.github.com/users/leondz/following{/other_user}", "gists_url": "https://api.github.com/users/leondz/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/leondz", "id": 121934, "login": "leondz", "node_id": "MDQ6VXNlcjEyMTkzNA==", "organizations_url": "https://api.github.com/users/leondz/orgs", "received_events_url": "https://api.github.com/users/leondz/received_events", "repos_url": "https://api.github.com/users/leondz/repos", "site_admin": false, "starred_url": "https://api.github.com/users/leondz/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/leondz/subscriptions", "type": "User", "url": "https://api.github.com/users/leondz", "user_view_type": "public" }
[ { "color": "a2eeef", "default": true, "description": "New feature or request", "id": 1935892871, "name": "enhancement", "node_id": "MDU6TGFiZWwxOTM1ODkyODcx", "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement" } ]
closed
false
null
[]
null
[ "we'll add the same server-side metadata validation system as for hf.co/models soon-ish\r\n\r\n(you can check on hf.co/models that licenses are \"clean\")", "Fixed by #4367." ]
2022-05-09T13:51:32Z
2022-05-20T09:51:50Z
2022-05-20T09:51:50Z
CONTRIBUTOR
null
null
null
null
**Is your feature request related to a problem? Please describe.** When browsing datasets, the Licenses tag cloud (bottom left of e.g. https://huggingface.co/datasets) has multiple variants of the same license. This means the options exclude datasets arbitrarily, giving users artificially low recall. The dupes are probably due to a bit of variation in the metadata. **Describe the solution you'd like** I'd like the licenses in metadata to follow the same standard as much as possible, to remove this problem. I'd like to go ahead and normalise the dataset metadata to follow the format & values given in [src/datasets/utils/resources/licenses.json](https://github.com/huggingface/datasets/blob/master/src/datasets/utils/resources/licenses.json). **Describe alternatives you've considered** None **Additional context** None **Priority** Low
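An illustrative sketch of the normalisation idea, where the canonical identifiers are assumed to follow the keys used in `resources/licenses.json` and the variant spellings are made-up examples:

```python
CANONICAL = {
    "apache 2.0": "apache-2.0",
    "apache license 2.0": "apache-2.0",
    "cc by 4.0": "cc-by-4.0",
    "cc-by 4.0": "cc-by-4.0",
}

def normalise_license(tag: str) -> str:
    # Fall back to the original tag so unknown licenses are left untouched.
    return CANONICAL.get(tag.strip().lower(), tag)

print(normalise_license("Apache 2.0"))  # "apache-2.0"
```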
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova", "user_view_type": "public" }
{ "+1": 1, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 1, "url": "https://api.github.com/repos/huggingface/datasets/issues/4298/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/4298/timeline
null
completed
{ "completed": 0, "percent_completed": 0, "total": 0 }
{ "blocked_by": 0, "blocking": 0, "total_blocked_by": 0, "total_blocking": 0 }
false
https://api.github.com/repos/huggingface/datasets/issues/4297
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4297/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4297/comments
https://api.github.com/repos/huggingface/datasets/issues/4297/events
https://github.com/huggingface/datasets/issues/4297
1,229,735,498
I_kwDODunzps5JTEZK
4,297
Datasets YAML tagging space is down
{ "avatar_url": "https://avatars.githubusercontent.com/u/121934?v=4", "events_url": "https://api.github.com/users/leondz/events{/privacy}", "followers_url": "https://api.github.com/users/leondz/followers", "following_url": "https://api.github.com/users/leondz/following{/other_user}", "gists_url": "https://api.github.com/users/leondz/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/leondz", "id": 121934, "login": "leondz", "node_id": "MDQ6VXNlcjEyMTkzNA==", "organizations_url": "https://api.github.com/users/leondz/orgs", "received_events_url": "https://api.github.com/users/leondz/received_events", "repos_url": "https://api.github.com/users/leondz/repos", "site_admin": false, "starred_url": "https://api.github.com/users/leondz/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/leondz/subscriptions", "type": "User", "url": "https://api.github.com/users/leondz", "user_view_type": "public" }
[ { "color": "d73a4a", "default": true, "description": "Something isn't working", "id": 1935892857, "name": "bug", "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug" } ]
closed
false
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq", "user_view_type": "public" }
[ { "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists...
null
[ "@lhoestq @albertvillanova `update-task-list` branch does not exist anymore, should point to `main` now i guess", "Thanks for reporting, fixing it now", "It's up again :)" ]
2022-05-09T13:45:05Z
2022-05-09T14:44:25Z
2022-05-09T14:44:25Z
CONTRIBUTOR
null
null
null
null
## Describe the bug The neat hf spaces app for generating YAML tags for dataset `README.md`s is down ## Steps to reproduce the bug 1. Visit https://huggingface.co/spaces/huggingface/datasets-tagging ## Expected results There'll be a HF spaces web app for generating dataset metadata YAML ## Actual results There's an error message; here's the step where it breaks: ``` Step 18/29 : RUN pip install -r requirements.txt ---> Running in e88bfe7e7e0c Defaulting to user installation because normal site-packages is not writeable Collecting git+https://github.com/huggingface/datasets.git@update-task-list (from -r requirements.txt (line 4)) Cloning https://github.com/huggingface/datasets.git (to revision update-task-list) to /tmp/pip-req-build-bm8t0r0k Running command git clone --filter=blob:none --quiet https://github.com/huggingface/datasets.git /tmp/pip-req-build-bm8t0r0k WARNING: Did not find branch or tag 'update-task-list', assuming revision or ref. Running command git checkout -q update-task-list error: pathspec 'update-task-list' did not match any file(s) known to git error: subprocess-exited-with-error × git checkout -q update-task-list did not run successfully. │ exit code: 1 ╰─> See above for output. note: This error originates from a subprocess, and is likely not a problem with pip. error: subprocess-exited-with-error × git checkout -q update-task-list did not run successfully. │ exit code: 1 ╰─> See above for output. ``` ## Environment info - Platform: Linux / Brave
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq", "user_view_type": "public" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/4297/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/4297/timeline
null
completed
{ "completed": 0, "percent_completed": 0, "total": 0 }
{ "blocked_by": 0, "blocking": 0, "total_blocked_by": 0, "total_blocking": 0 }
false
https://api.github.com/repos/huggingface/datasets/issues/4291
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4291/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4291/comments
https://api.github.com/repos/huggingface/datasets/issues/4291/events
https://github.com/huggingface/datasets/issues/4291
1,227,777,500
I_kwDODunzps5JLmXc
4,291
Dataset Viewer issue for strombergnlp/ipm_nel : preview is empty, no error message
{ "avatar_url": "https://avatars.githubusercontent.com/u/121934?v=4", "events_url": "https://api.github.com/users/leondz/events{/privacy}", "followers_url": "https://api.github.com/users/leondz/followers", "following_url": "https://api.github.com/users/leondz/following{/other_user}", "gists_url": "https://api.github.com/users/leondz/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/leondz", "id": 121934, "login": "leondz", "node_id": "MDQ6VXNlcjEyMTkzNA==", "organizations_url": "https://api.github.com/users/leondz/orgs", "received_events_url": "https://api.github.com/users/leondz/received_events", "repos_url": "https://api.github.com/users/leondz/repos", "site_admin": false, "starred_url": "https://api.github.com/users/leondz/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/leondz/subscriptions", "type": "User", "url": "https://api.github.com/users/leondz", "user_view_type": "public" }
[ { "color": "E5583E", "default": false, "description": "Related to the dataset viewer on huggingface.co", "id": 3470211881, "name": "dataset-viewer", "node_id": "LA_kwDODunzps7O1zsp", "url": "https://api.github.com/repos/huggingface/datasets/labels/dataset-viewer" } ]
closed
false
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova", "user_view_type": "public" }
[ { "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/o...
null
[ "Hi @leondz, thanks for reporting.\r\n\r\nIndeed, the dataset viewer relies on the dataset being streamable (passing `streaming=True` to `load_dataset`). Whereas most of the datastes are streamable out of the box (thanks to our implementation of streaming), there are still some exceptions.\r\n\r\nIn particular, in ...
2022-05-06T12:03:27Z
2022-05-09T08:25:58Z
2022-05-09T08:25:58Z
CONTRIBUTOR
null
null
null
null
### Link https://huggingface.co/datasets/strombergnlp/ipm_nel/viewer/ipm_nel/train ### Description The viewer is blank. I tried my best to emulate a dataset with a working viewer, but this one just doesn't seem to want to come up. What did I miss? ### Owner Yes
{ "avatar_url": "https://avatars.githubusercontent.com/u/121934?v=4", "events_url": "https://api.github.com/users/leondz/events{/privacy}", "followers_url": "https://api.github.com/users/leondz/followers", "following_url": "https://api.github.com/users/leondz/following{/other_user}", "gists_url": "https://api.github.com/users/leondz/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/leondz", "id": 121934, "login": "leondz", "node_id": "MDQ6VXNlcjEyMTkzNA==", "organizations_url": "https://api.github.com/users/leondz/orgs", "received_events_url": "https://api.github.com/users/leondz/received_events", "repos_url": "https://api.github.com/users/leondz/repos", "site_admin": false, "starred_url": "https://api.github.com/users/leondz/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/leondz/subscriptions", "type": "User", "url": "https://api.github.com/users/leondz", "user_view_type": "public" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/4291/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/4291/timeline
null
completed
{ "completed": 0, "percent_completed": 0, "total": 0 }
{ "blocked_by": 0, "blocking": 0, "total_blocked_by": 0, "total_blocking": 0 }
false
https://api.github.com/repos/huggingface/datasets/issues/4287
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4287/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4287/comments
https://api.github.com/repos/huggingface/datasets/issues/4287/events
https://github.com/huggingface/datasets/issues/4287
1,226,806,652
I_kwDODunzps5JH5V8
4,287
"NameError: name 'faiss' is not defined" on `.add_faiss_index` when `device` is not None
{ "avatar_url": "https://avatars.githubusercontent.com/u/36760800?v=4", "events_url": "https://api.github.com/users/alvarobartt/events{/privacy}", "followers_url": "https://api.github.com/users/alvarobartt/followers", "following_url": "https://api.github.com/users/alvarobartt/following{/other_user}", "gists_url": "https://api.github.com/users/alvarobartt/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/alvarobartt", "id": 36760800, "login": "alvarobartt", "node_id": "MDQ6VXNlcjM2NzYwODAw", "organizations_url": "https://api.github.com/users/alvarobartt/orgs", "received_events_url": "https://api.github.com/users/alvarobartt/received_events", "repos_url": "https://api.github.com/users/alvarobartt/repos", "site_admin": false, "starred_url": "https://api.github.com/users/alvarobartt/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/alvarobartt/subscriptions", "type": "User", "url": "https://api.github.com/users/alvarobartt", "user_view_type": "public" }
[ { "color": "d73a4a", "default": true, "description": "Something isn't working", "id": 1935892857, "name": "bug", "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug" } ]
closed
false
null
[]
null
[ "So I managed to solve this by adding a missing `import faiss` in the `@staticmethod` defined in https://github.com/huggingface/datasets/blob/f51b6994db27ea69261ef919fb7775928f9ec10b/src/datasets/search.py#L305, triggered from https://github.com/huggingface/datasets/blob/f51b6994db27ea69261ef919fb7775928f9ec10b/src...
2022-05-05T15:09:45Z
2022-05-10T13:53:19Z
2022-05-10T13:53:19Z
MEMBER
null
null
null
null
## Describe the bug When using `datasets` to calculate the FAISS indices of a dataset, the exception `NameError: name 'faiss' is not defined` is triggered when trying to calculate those on a device (GPU), so `.add_faiss_index(..., device=0)` fails with that exception. All of that assumes that `datasets` and `faiss-gpu` are properly installed, as well as all the required CUDA drivers. ## Steps to reproduce the bug ```python # Sample code to reproduce the bug from transformers import DPRContextEncoder, DPRContextEncoderTokenizer import torch torch.set_grad_enabled(False) ctx_encoder = DPRContextEncoder.from_pretrained("facebook/dpr-ctx_encoder-single-nq-base") ctx_tokenizer = DPRContextEncoderTokenizer.from_pretrained("facebook/dpr-ctx_encoder-single-nq-base") from datasets import load_dataset ds = load_dataset('crime_and_punish', split='train[:100]') ds_with_embeddings = ds.map(lambda example: {'embeddings': ctx_encoder(**ctx_tokenizer(example["line"], return_tensors="pt"))[0][0].numpy()}) ds_with_embeddings.add_faiss_index(column='embeddings', device=0) # default `device=None` ``` ## Expected results The FAISS index is added over the `embeddings` column (which is created by the `.map` call) without errors. ## Actual results An exception is triggered with the following message `NameError: name 'faiss' is not defined`. ## Environment info - `datasets` version: 2.1.0 - Platform: Linux-5.13.0-1022-azure-x86_64-with-glibc2.31 - Python version: 3.9.12 - PyArrow version: 7.0.0 - Pandas version: 1.4.2
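Based on the comment above that the failure comes from a missing `import faiss` in a `staticmethod` of `datasets/search.py`, here is a rough paraphrase of the shape of that code path. It is a sketch, not the exact upstream code, and the function name is hypothetical:

```python
def faiss_index_to_device(index, device: int):
    # The lazy, module-level faiss import used elsewhere is not visible here,
    # hence the NameError; importing faiss locally is the fix described above.
    import faiss

    resources = faiss.StandardGpuResources()
    return faiss.index_cpu_to_gpu(resources, device, index)
```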
{ "avatar_url": "https://avatars.githubusercontent.com/u/36760800?v=4", "events_url": "https://api.github.com/users/alvarobartt/events{/privacy}", "followers_url": "https://api.github.com/users/alvarobartt/followers", "following_url": "https://api.github.com/users/alvarobartt/following{/other_user}", "gists_url": "https://api.github.com/users/alvarobartt/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/alvarobartt", "id": 36760800, "login": "alvarobartt", "node_id": "MDQ6VXNlcjM2NzYwODAw", "organizations_url": "https://api.github.com/users/alvarobartt/orgs", "received_events_url": "https://api.github.com/users/alvarobartt/received_events", "repos_url": "https://api.github.com/users/alvarobartt/repos", "site_admin": false, "starred_url": "https://api.github.com/users/alvarobartt/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/alvarobartt/subscriptions", "type": "User", "url": "https://api.github.com/users/alvarobartt", "user_view_type": "public" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/4287/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/4287/timeline
null
completed
{ "completed": 0, "percent_completed": 0, "total": 0 }
{ "blocked_by": 0, "blocking": 0, "total_blocked_by": 0, "total_blocking": 0 }
false
https://api.github.com/repos/huggingface/datasets/issues/4284
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4284/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4284/comments
https://api.github.com/repos/huggingface/datasets/issues/4284/events
https://github.com/huggingface/datasets/issues/4284
1,226,200,727
I_kwDODunzps5JFlaX
4,284
Issues in processing very large datasets
{ "avatar_url": "https://avatars.githubusercontent.com/u/10419055?v=4", "events_url": "https://api.github.com/users/sajastu/events{/privacy}", "followers_url": "https://api.github.com/users/sajastu/followers", "following_url": "https://api.github.com/users/sajastu/following{/other_user}", "gists_url": "https://api.github.com/users/sajastu/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/sajastu", "id": 10419055, "login": "sajastu", "node_id": "MDQ6VXNlcjEwNDE5MDU1", "organizations_url": "https://api.github.com/users/sajastu/orgs", "received_events_url": "https://api.github.com/users/sajastu/received_events", "repos_url": "https://api.github.com/users/sajastu/repos", "site_admin": false, "starred_url": "https://api.github.com/users/sajastu/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sajastu/subscriptions", "type": "User", "url": "https://api.github.com/users/sajastu", "user_view_type": "public" }
[ { "color": "d73a4a", "default": true, "description": "Something isn't working", "id": 1935892857, "name": "bug", "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug" } ]
closed
false
null
[]
null
[ "Hi ! `datasets` doesn't load the dataset in memory. Instead it uses memory mapping to load your dataset from your disk (it is stored as arrow files). Do you know at what point you have RAM issues exactly ?\r\n\r\nHow big are your graph_data_train dictionaries btw ?", "Closing this issue due to inactivity." ]
2022-05-05T05:01:09Z
2023-07-25T15:12:38Z
2023-07-25T15:12:38Z
NONE
null
null
null
null
## Describe the bug I'm trying to add a feature called "subgraph" to the CNN/DM dataset (through modifications to run_summarization.py of the Hugging Face Transformers examples) --- I'm not quite sure if I'm doing it the right way, though --- but the main problem appears when training starts, with the error `[OSError: [Errno 12] Cannot allocate memory]`. I suppose this problem is rooted in RAM usage and in how the dataset is loaded during training, but I have no clue what I can do to fix it. Looking at the dataset's cache directory, I see that it takes up ~600 GB, which is why I believe special care is needed when loading it into memory. Here are my modifications to the `run_summarization.py` code. ``` # loading pre-computed dictionary where keys are 'id' of article and values are corresponding subgraph graph_data_train = get_graph_data('train') graph_data_validation = get_graph_data('val') ... ... with training_args.main_process_first(desc="train dataset map pre-processing"): train_dataset = train_dataset.map( preprocess_function_train, batched=True, num_proc=data_args.preprocessing_num_workers, remove_columns=column_names, load_from_cache_file=not data_args.overwrite_cache, desc="Running tokenizer on train dataset", ) ``` And here is the modified preprocessing function: ``` def preprocess_function_train(examples): inputs, targets, sub_graphs, ids = [], [], [], [] for i in range(len(examples[text_column])): if examples[text_column][i] is not None and examples[summary_column][i] is not None: # if examples['doc_id'][i] in graph_data.keys(): inputs.append(examples[text_column][i]) targets.append(examples[summary_column][i]) sub_graphs.append(graph_data_train[examples['id'][i]]) ids.append(examples['id'][i]) inputs = [prefix + inp for inp in inputs] model_inputs = tokenizer(inputs, max_length=data_args.max_source_length, padding=padding, truncation=True, sub_graphs=sub_graphs, ids=ids) # Setup the tokenizer for targets with tokenizer.as_target_tokenizer(): labels = tokenizer(targets, max_length=max_target_length, padding=padding, truncation=True) # If we are padding here, replace all tokenizer.pad_token_id in the labels by -100 when we want to ignore # padding in the loss. if padding == "max_length" and data_args.ignore_pad_token_for_loss: labels["input_ids"] = [ [(l if l != tokenizer.pad_token_id else -100) for l in label] for label in labels["input_ids"] ] model_inputs["labels"] = labels["input_ids"] return model_inputs ``` ## Environment info <!-- You can run the command `datasets-cli env` and copy-and-paste its output below. --> - `datasets` version: 2.1.0 - Platform: Linux Ubuntu - Python version: 3.6 - PyArrow version: 6.0.1
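Since `datasets` memory-maps its cache, the RAM pressure most likely comes from the huge `graph_data_train` / `graph_data_validation` Python dicts held in memory. A rough sketch (not from the original script; `get_graph_data` and the id scheme are taken from the snippet above, and it assumes the subgraphs are lists/dicts that Arrow can store) of keeping them on disk instead:

```python
from datasets import Dataset, load_from_disk

# One-off conversion: store the precomputed subgraphs as an Arrow dataset on disk.
graph_data_train = get_graph_data("train")
Dataset.from_dict(
    {"id": list(graph_data_train.keys()), "subgraph": list(graph_data_train.values())}
).save_to_disk("graph_data_train_arrow")
del graph_data_train

# At training time: reload memory-mapped and look subgraphs up by article id.
graph_ds = load_from_disk("graph_data_train_arrow")
row_of_id = {example_id: i for i, example_id in enumerate(graph_ds["id"])}

def get_subgraph(article_id):
    return graph_ds[row_of_id[article_id]]["subgraph"]
```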
{ "avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4", "events_url": "https://api.github.com/users/mariosasko/events{/privacy}", "followers_url": "https://api.github.com/users/mariosasko/followers", "following_url": "https://api.github.com/users/mariosasko/following{/other_user}", "gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/mariosasko", "id": 47462742, "login": "mariosasko", "node_id": "MDQ6VXNlcjQ3NDYyNzQy", "organizations_url": "https://api.github.com/users/mariosasko/orgs", "received_events_url": "https://api.github.com/users/mariosasko/received_events", "repos_url": "https://api.github.com/users/mariosasko/repos", "site_admin": false, "starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions", "type": "User", "url": "https://api.github.com/users/mariosasko", "user_view_type": "public" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/4284/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/4284/timeline
null
completed
{ "completed": 0, "percent_completed": 0, "total": 0 }
{ "blocked_by": 0, "blocking": 0, "total_blocked_by": 0, "total_blocking": 0 }
false
https://api.github.com/repos/huggingface/datasets/issues/4276
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4276/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4276/comments
https://api.github.com/repos/huggingface/datasets/issues/4276/events
https://github.com/huggingface/datasets/issues/4276
1,224,949,252
I_kwDODunzps5JAz4E
4,276
OpenBookQA has missing and inconsistent field names
{ "avatar_url": "https://avatars.githubusercontent.com/u/458335?v=4", "events_url": "https://api.github.com/users/vblagoje/events{/privacy}", "followers_url": "https://api.github.com/users/vblagoje/followers", "following_url": "https://api.github.com/users/vblagoje/following{/other_user}", "gists_url": "https://api.github.com/users/vblagoje/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/vblagoje", "id": 458335, "login": "vblagoje", "node_id": "MDQ6VXNlcjQ1ODMzNQ==", "organizations_url": "https://api.github.com/users/vblagoje/orgs", "received_events_url": "https://api.github.com/users/vblagoje/received_events", "repos_url": "https://api.github.com/users/vblagoje/repos", "site_admin": false, "starred_url": "https://api.github.com/users/vblagoje/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/vblagoje/subscriptions", "type": "User", "url": "https://api.github.com/users/vblagoje", "user_view_type": "public" }
[ { "color": "2edb81", "default": false, "description": "A bug in a dataset script provided in the library", "id": 2067388877, "name": "dataset bug", "node_id": "MDU6TGFiZWwyMDY3Mzg4ODc3", "url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20bug" } ]
closed
false
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova", "user_view_type": "public" }
[ { "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/o...
null
[ "Thanks for reporting, @vblagoje.\r\n\r\nIndeed, I noticed some of these issues while reviewing this PR:\r\n- #4259 \r\n\r\nThis is in my TODO list. ", "Ok, awesome @albertvillanova How about #4275 ?", "On the other hand, I am not sure if we should always preserve the original nested structure. I think we shoul...
2022-05-04T05:51:52Z
2022-10-11T17:11:53Z
2022-10-05T13:50:03Z
CONTRIBUTOR
null
null
null
null
## Describe the bug The OpenBookQA implementation is inconsistent with the original dataset. We need to: 1. Unflatten the `question_stem` field back into the original [question][stem] structure so that it matches the original format. 2. Add the missing additional fields: - 'fact1': row['fact1'], - 'humanScore': row['humanScore'], - 'clarity': row['clarity'], - 'turkIdAnonymized': row['turkIdAnonymized'] 3. Ensure the structure and every data item in our OpenBookQA version matches the original OpenBookQA. ## Expected results The structure and every data item in our OpenBookQA version matches the original OpenBookQA. ## Actual results TBD ## Environment info - `datasets` version: 2.1.0 - Platform: macOS-10.15.7-x86_64-i386-64bit - Python version: 3.8.13 - PyArrow version: 7.0.0 - Pandas version: 1.4.2
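A rough sketch of what yielding the unflattened question plus the missing fields could look like; the raw JSONL field names come from the list above, while the function name and the exact structure of the real `openbookqa.py` loading script are assumptions here:

```python
import json

def generate_examples(filepath):
    """Sketch: yield the nested question structure and the extra annotation fields."""
    with open(filepath, encoding="utf-8") as f:
        for idx, line in enumerate(f):
            row = json.loads(line)
            yield idx, {
                "id": row["id"],
                "question": {                      # keep the original nesting instead of question_stem
                    "stem": row["question"]["stem"],
                    "choices": row["question"]["choices"],
                },
                "answerKey": row["answerKey"],
                # fields currently missing from the HF version:
                "fact1": row.get("fact1"),
                "humanScore": row.get("humanScore"),
                "clarity": row.get("clarity"),
                "turkIdAnonymized": row.get("turkIdAnonymized"),
            }
```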
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq", "user_view_type": "public" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/4276/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/4276/timeline
null
completed
{ "completed": 0, "percent_completed": 0, "total": 0 }
{ "blocked_by": 0, "blocking": 0, "total_blocked_by": 0, "total_blocking": 0 }
false
https://api.github.com/repos/huggingface/datasets/issues/4275
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4275/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4275/comments
https://api.github.com/repos/huggingface/datasets/issues/4275/events
https://github.com/huggingface/datasets/issues/4275
1,224,943,414
I_kwDODunzps5JAyc2
4,275
CommonSenseQA has missing and inconsistent field names
{ "avatar_url": "https://avatars.githubusercontent.com/u/458335?v=4", "events_url": "https://api.github.com/users/vblagoje/events{/privacy}", "followers_url": "https://api.github.com/users/vblagoje/followers", "following_url": "https://api.github.com/users/vblagoje/following{/other_user}", "gists_url": "https://api.github.com/users/vblagoje/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/vblagoje", "id": 458335, "login": "vblagoje", "node_id": "MDQ6VXNlcjQ1ODMzNQ==", "organizations_url": "https://api.github.com/users/vblagoje/orgs", "received_events_url": "https://api.github.com/users/vblagoje/received_events", "repos_url": "https://api.github.com/users/vblagoje/repos", "site_admin": false, "starred_url": "https://api.github.com/users/vblagoje/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/vblagoje/subscriptions", "type": "User", "url": "https://api.github.com/users/vblagoje", "user_view_type": "public" }
[ { "color": "2edb81", "default": false, "description": "A bug in a dataset script provided in the library", "id": 2067388877, "name": "dataset bug", "node_id": "MDU6TGFiZWwyMDY3Mzg4ODc3", "url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20bug" } ]
open
false
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova", "user_view_type": "public" }
[ { "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/o...
null
[ "Thanks for reporting, @vblagoje.\r\n\r\nI'm opening a PR to address this. " ]
2022-05-04T05:38:59Z
2022-05-04T11:41:18Z
null
CONTRIBUTOR
null
null
null
null
## Describe the bug In short, the CommonSenseQA implementation is inconsistent with the original dataset. More precisely, we need to: 1. Add the original "id" field; the current dataset instead regenerates a monotonically increasing id. 2. Unflatten the "question" field back into the original [“question”][“stem”] structure to match the original dataset. 3. Add the missing "question_concept" field to the question tree node. 4. Anything else? Go over the data structure of the newly repaired CommonSenseQA and make sure it matches the original. ## Expected results Every data item of CommonSenseQA should structurally and data-wise match the original CommonSenseQA dataset. ## Actual results TBD ## Environment info - `datasets` version: 2.1.0 - Platform: macOS-10.15.7-x86_64-i386-64bit - Python version: 3.8.13 - PyArrow version: 7.0.0 - Pandas version: 1.4.2
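A quick verification sketch along the lines of point 4; the local file name is an assumption, and it assumes both the HF split and the original JSONL preserve the original row order:

```python
import json
from datasets import load_dataset

hf_split = load_dataset("commonsense_qa", split="validation")

# Original dev split from the CommonsenseQA release (local copy assumed).
with open("dev_rand_split.jsonl", encoding="utf-8") as f:
    original = [json.loads(line) for line in f]

for hf_row, orig_row in zip(hf_split, original):
    # The HF copy currently flattens ["question"]["stem"] into "question"
    # and drops "id" and ["question"]["question_concept"].
    assert hf_row["question"] == orig_row["question"]["stem"]
```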
null
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/4275/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/4275/timeline
null
null
{ "completed": 0, "percent_completed": 0, "total": 0 }
{ "blocked_by": 0, "blocking": 0, "total_blocked_by": 0, "total_blocking": 0 }
false
https://api.github.com/repos/huggingface/datasets/issues/4271
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4271/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4271/comments
https://api.github.com/repos/huggingface/datasets/issues/4271/events
https://github.com/huggingface/datasets/issues/4271
1,224,404,403
I_kwDODunzps5I-u2z
4,271
A typo in docs of datasets.disable_progress_bar
{ "avatar_url": "https://avatars.githubusercontent.com/u/39762734?v=4", "events_url": "https://api.github.com/users/jiangwangyi/events{/privacy}", "followers_url": "https://api.github.com/users/jiangwangyi/followers", "following_url": "https://api.github.com/users/jiangwangyi/following{/other_user}", "gists_url": "https://api.github.com/users/jiangwangyi/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/jiangwangyi", "id": 39762734, "login": "jiangwangyi", "node_id": "MDQ6VXNlcjM5NzYyNzM0", "organizations_url": "https://api.github.com/users/jiangwangyi/orgs", "received_events_url": "https://api.github.com/users/jiangwangyi/received_events", "repos_url": "https://api.github.com/users/jiangwangyi/repos", "site_admin": false, "starred_url": "https://api.github.com/users/jiangwangyi/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/jiangwangyi/subscriptions", "type": "User", "url": "https://api.github.com/users/jiangwangyi", "user_view_type": "public" }
[ { "color": "d73a4a", "default": true, "description": "Something isn't working", "id": 1935892857, "name": "bug", "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug" } ]
closed
false
{ "avatar_url": "https://avatars.githubusercontent.com/u/59462357?v=4", "events_url": "https://api.github.com/users/stevhliu/events{/privacy}", "followers_url": "https://api.github.com/users/stevhliu/followers", "following_url": "https://api.github.com/users/stevhliu/following{/other_user}", "gists_url": "https://api.github.com/users/stevhliu/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/stevhliu", "id": 59462357, "login": "stevhliu", "node_id": "MDQ6VXNlcjU5NDYyMzU3", "organizations_url": "https://api.github.com/users/stevhliu/orgs", "received_events_url": "https://api.github.com/users/stevhliu/received_events", "repos_url": "https://api.github.com/users/stevhliu/repos", "site_admin": false, "starred_url": "https://api.github.com/users/stevhliu/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/stevhliu/subscriptions", "type": "User", "url": "https://api.github.com/users/stevhliu", "user_view_type": "public" }
[ { "avatar_url": "https://avatars.githubusercontent.com/u/59462357?v=4", "events_url": "https://api.github.com/users/stevhliu/events{/privacy}", "followers_url": "https://api.github.com/users/stevhliu/followers", "following_url": "https://api.github.com/users/stevhliu/following{/other_user}", "gi...
null
[ "Hi! Thanks for catching and reporting the typo, a PR has been opened to fix it :)" ]
2022-05-03T17:44:56Z
2022-05-04T06:58:35Z
2022-05-04T06:58:35Z
NONE
null
null
null
null
## Describe the bug In the docs of v2.1.0 for `datasets.disable_progress_bar`, we should replace "enable" with "disable".
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova", "user_view_type": "public" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/4271/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/4271/timeline
null
completed
{ "completed": 0, "percent_completed": 0, "total": 0 }
{ "blocked_by": 0, "blocking": 0, "total_blocked_by": 0, "total_blocking": 0 }
false
https://api.github.com/repos/huggingface/datasets/issues/4268
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4268/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4268/comments
https://api.github.com/repos/huggingface/datasets/issues/4268/events
https://github.com/huggingface/datasets/issues/4268
1,223,331,964
I_kwDODunzps5I6pB8
4,268
error downloading bigscience-catalogue-lm-data/lm_en_wiktionary_filtered
{ "avatar_url": "https://avatars.githubusercontent.com/u/102043285?v=4", "events_url": "https://api.github.com/users/i-am-neo/events{/privacy}", "followers_url": "https://api.github.com/users/i-am-neo/followers", "following_url": "https://api.github.com/users/i-am-neo/following{/other_user}", "gists_url": "https://api.github.com/users/i-am-neo/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/i-am-neo", "id": 102043285, "login": "i-am-neo", "node_id": "U_kgDOBhUOlQ", "organizations_url": "https://api.github.com/users/i-am-neo/orgs", "received_events_url": "https://api.github.com/users/i-am-neo/received_events", "repos_url": "https://api.github.com/users/i-am-neo/repos", "site_admin": false, "starred_url": "https://api.github.com/users/i-am-neo/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/i-am-neo/subscriptions", "type": "User", "url": "https://api.github.com/users/i-am-neo", "user_view_type": "public" }
[ { "color": "2edb81", "default": false, "description": "A bug in a dataset script provided in the library", "id": 2067388877, "name": "dataset bug", "node_id": "MDU6TGFiZWwyMDY3Mzg4ODc3", "url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20bug" } ]
closed
false
null
[]
null
[ "It would help a lot to be able to preview the dataset - I'd like to see if the pronunciations are in the dataset, eg. for [\"word\"](https://en.wiktionary.org/wiki/word),\r\n\r\nPronunciation\r\n([Received Pronunciation](https://en.wikipedia.org/wiki/Received_Pronunciation)) [IPA](https://en.wiktionary.org/wiki/Wi...
2022-05-02T20:34:25Z
2022-05-06T15:53:30Z
2022-05-03T11:23:48Z
NONE
null
null
null
null
## Describe the bug Error generated when attempting to download dataset ## Steps to reproduce the bug ```python from datasets import load_dataset dataset = load_dataset("bigscience-catalogue-lm-data/lm_en_wiktionary_filtered") ``` ## Expected results A clear and concise description of the expected results. ## Actual results ``` ExpectedMoreDownloadedFiles Traceback (most recent call last) [<ipython-input-62-4ac5cf959477>](https://localhost:8080/#) in <module>() 1 from datasets import load_dataset 2 ----> 3 dataset = load_dataset("bigscience-catalogue-lm-data/lm_en_wiktionary_filtered") 3 frames [/usr/local/lib/python3.7/dist-packages/datasets/utils/info_utils.py](https://localhost:8080/#) in verify_checksums(expected_checksums, recorded_checksums, verification_name) 31 return 32 if len(set(expected_checksums) - set(recorded_checksums)) > 0: ---> 33 raise ExpectedMoreDownloadedFiles(str(set(expected_checksums) - set(recorded_checksums))) 34 if len(set(recorded_checksums) - set(expected_checksums)) > 0: 35 raise UnexpectedDownloadedFile(str(set(recorded_checksums) - set(expected_checksums))) ExpectedMoreDownloadedFiles: {'/home/leandro/catalogue_data/datasets/lm_en_wiktionary_filtered/data/file-01.jsonl.gz', '/home/leandro/catalogue_data/datasets/lm_en_wiktionary_filtered/data/file-01.jsonl.gz.lock'} ``` ## Environment info <!-- You can run the command `datasets-cli env` and copy-and-paste its output below. --> - `datasets` version: 1.18.3 - Platform: Linux-5.4.188+-x86_64-with-Ubuntu-18.04-bionic - Python version: 3.7.13 - PyArrow version: 6.0.1
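The recorded checksums point at the dataset author's local `/home/leandro/...` paths, so verification can never pass on another machine. Until the metadata is fixed, a possible workaround sketch (assuming the data files themselves are fine) is to skip the verification step:

```python
from datasets import load_dataset

# Skip size/checksum verification, which is what trips over the stale recorded paths.
dataset = load_dataset(
    "bigscience-catalogue-lm-data/lm_en_wiktionary_filtered",
    ignore_verifications=True,
)
```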
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova", "user_view_type": "public" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/4268/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/4268/timeline
null
completed
{ "completed": 0, "percent_completed": 0, "total": 0 }
{ "blocked_by": 0, "blocking": 0, "total_blocked_by": 0, "total_blocking": 0 }
false
https://api.github.com/repos/huggingface/datasets/issues/4261
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4261/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4261/comments
https://api.github.com/repos/huggingface/datasets/issues/4261/events
https://github.com/huggingface/datasets/issues/4261
1,221,883,779
I_kwDODunzps5I1HeD
4,261
data leakage in `webis/conclugen` dataset
{ "avatar_url": "https://avatars.githubusercontent.com/u/54585776?v=4", "events_url": "https://api.github.com/users/xflashxx/events{/privacy}", "followers_url": "https://api.github.com/users/xflashxx/followers", "following_url": "https://api.github.com/users/xflashxx/following{/other_user}", "gists_url": "https://api.github.com/users/xflashxx/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/xflashxx", "id": 54585776, "login": "xflashxx", "node_id": "MDQ6VXNlcjU0NTg1Nzc2", "organizations_url": "https://api.github.com/users/xflashxx/orgs", "received_events_url": "https://api.github.com/users/xflashxx/received_events", "repos_url": "https://api.github.com/users/xflashxx/repos", "site_admin": false, "starred_url": "https://api.github.com/users/xflashxx/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/xflashxx/subscriptions", "type": "User", "url": "https://api.github.com/users/xflashxx", "user_view_type": "public" }
[ { "color": "2edb81", "default": false, "description": "A bug in a dataset script provided in the library", "id": 2067388877, "name": "dataset bug", "node_id": "MDU6TGFiZWwyMDY3Mzg4ODc3", "url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20bug" } ]
closed
false
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova", "user_view_type": "public" }
[ { "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/o...
null
[ "Hi @xflashxx, thanks for reporting.\r\n\r\nPlease note that this dataset was generated and shared by Webis Group: https://huggingface.co/webis\r\n\r\nWe are contacting the dataset owners to inform them about the issue you found. We'll keep you updated of their reply.", "i'd suggest just pinging the authors here ...
2022-04-30T17:43:37Z
2022-05-03T06:04:26Z
2022-05-03T06:04:26Z
NONE
null
null
null
null
## Describe the bug Some samples (argument-conclusion pairs) in the *training* split of the `webis/conclugen` dataset are present in both the *validation* and *test* splits, creating data leakage and distorting model results. Furthermore, all splits contain duplicate samples. ## Steps to reproduce the bug ```python from datasets import load_dataset training = load_dataset("webis/conclugen", "base", split="train") validation = load_dataset("webis/conclugen", "base", split="validation") testing = load_dataset("webis/conclugen", "base", split="test") # collect which sample id's are present in the training split ids_validation = list() ids_testing = list() for train_sample in training: train_argument = train_sample["argument"] train_conclusion = train_sample["conclusion"] train_id = train_sample["id"] # test if current sample is in validation split if train_argument in validation["argument"]: for validation_sample in validation: validation_argument = validation_sample["argument"] validation_conclusion = validation_sample["conclusion"] validation_id = validation_sample["id"] if train_argument == validation_argument and train_conclusion == validation_conclusion: ids_validation.append(validation_id) # test if current sample is in test split if train_argument in testing["argument"]: for testing_sample in testing: testing_argument = testing_sample["argument"] testing_conclusion = testing_sample["conclusion"] testing_id = testing_sample["id"] if train_argument == testing_argument and train_conclusion == testing_conclusion: ids_testing.append(testing_id) ``` ## Expected results Length of both lists `ids_validation` and `ids_testing` should be zero. ## Actual results Length of `ids_validation` = `2556` Length of `ids_testing` = `287` Furthermore, there seems to be duplicate samples in (at least) the *training* split, since: `print(len(set(ids_validation)))` = `950` `print(len(set(ids_testing)))` = `101` All in all, around 7% of the samples of each the *validation* and *test* split seems to be present in the *training* split. ## Environment info <!-- You can run the command `datasets-cli env` and copy-and-paste its output below. --> - `datasets` version: 1.18.4 - Platform: macOS-12.3.1-arm64-arm-64bit - Python version: 3.9.10 - PyArrow version: 7.0.0
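The nested loops above work but are quadratic; the same overlap counts can be obtained with set intersections — a sketch, reusing the `training`/`validation`/`testing` splits loaded above:

```python
# Compare (argument, conclusion) pairs across splits with sets instead of nested loops.
train_pairs = set(zip(training["argument"], training["conclusion"]))
val_pairs = set(zip(validation["argument"], validation["conclusion"]))
test_pairs = set(zip(testing["argument"], testing["conclusion"]))

print("train/validation overlap:", len(train_pairs & val_pairs))
print("train/test overlap:", len(train_pairs & test_pairs))
print("duplicate pairs in train:", len(training) - len(train_pairs))
```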
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova", "user_view_type": "public" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/4261/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/4261/timeline
null
completed
{ "completed": 0, "percent_completed": 0, "total": 0 }
{ "blocked_by": 0, "blocking": 0, "total_blocked_by": 0, "total_blocking": 0 }
false
https://api.github.com/repos/huggingface/datasets/issues/4248
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4248/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4248/comments
https://api.github.com/repos/huggingface/datasets/issues/4248/events
https://github.com/huggingface/datasets/issues/4248
1,218,460,444
I_kwDODunzps5IoDsc
4,248
conll2003 dataset loads original data.
{ "avatar_url": "https://avatars.githubusercontent.com/u/26458611?v=4", "events_url": "https://api.github.com/users/sue991/events{/privacy}", "followers_url": "https://api.github.com/users/sue991/followers", "following_url": "https://api.github.com/users/sue991/following{/other_user}", "gists_url": "https://api.github.com/users/sue991/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/sue991", "id": 26458611, "login": "sue991", "node_id": "MDQ6VXNlcjI2NDU4NjEx", "organizations_url": "https://api.github.com/users/sue991/orgs", "received_events_url": "https://api.github.com/users/sue991/received_events", "repos_url": "https://api.github.com/users/sue991/repos", "site_admin": false, "starred_url": "https://api.github.com/users/sue991/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sue991/subscriptions", "type": "User", "url": "https://api.github.com/users/sue991", "user_view_type": "public" }
[ { "color": "d73a4a", "default": true, "description": "Something isn't working", "id": 1935892857, "name": "bug", "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug" } ]
closed
false
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova", "user_view_type": "public" }
[ { "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/o...
null
[ "Thanks for reporting @sue99.\r\n\r\nUnfortunately. I'm not able to reproduce your problem:\r\n```python\r\nIn [1]: import datasets\r\n ...: from datasets import load_dataset\r\n ...: dataset = load_dataset(\"conll2003\")\r\n\r\nIn [2]: dataset\r\nOut[2]: \r\nDatasetDict({\r\n train: Dataset({\r\n fea...
2022-04-28T09:33:31Z
2022-07-18T07:15:48Z
2022-07-18T07:15:48Z
NONE
null
null
null
null
## Describe the bug I load `conll2003` dataset to use refined data like [this](https://huggingface.co/datasets/conll2003/viewer/conll2003/train) preview, but it is original data that contains `'-DOCSTART- -X- -X- O'` text. Is this a bug or should I use another dataset_name like `lhoestq/conll2003` ? ## Steps to reproduce the bug ```python import datasets from datasets import load_dataset dataset = load_dataset("conll2003") ``` ## Expected results { "chunk_tags": [11, 12, 12, 21, 13, 11, 11, 21, 13, 11, 12, 13, 11, 21, 22, 11, 12, 17, 11, 21, 17, 11, 12, 12, 21, 22, 22, 13, 11, 0], "id": "0", "ner_tags": [0, 3, 4, 0, 0, 0, 0, 0, 0, 7, 0, 0, 0, 0, 0, 7, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], "pos_tags": [12, 22, 22, 38, 15, 22, 28, 38, 15, 16, 21, 35, 24, 35, 37, 16, 21, 15, 24, 41, 15, 16, 21, 21, 20, 37, 40, 35, 21, 7], "tokens": ["The", "European", "Commission", "said", "on", "Thursday", "it", "disagreed", "with", "German", "advice", "to", "consumers", "to", "shun", "British", "lamb", "until", "scientists", "determine", "whether", "mad", "cow", "disease", "can", "be", "transmitted", "to", "sheep", "."] } ## Actual results ```python print(dataset) DatasetDict({ train: Dataset({ features: ['text'], num_rows: 219554 }) test: Dataset({ features: ['text'], num_rows: 50350 }) validation: Dataset({ features: ['text'], num_rows: 55044 }) }) ``` ```python for i in range(20): print(dataset['train'][i]) {'text': '-DOCSTART- -X- -X- O'} {'text': ''} {'text': 'EU NNP B-NP B-ORG'} {'text': 'rejects VBZ B-VP O'} {'text': 'German JJ B-NP B-MISC'} {'text': 'call NN I-NP O'} {'text': 'to TO B-VP O'} {'text': 'boycott VB I-VP O'} {'text': 'British JJ B-NP B-MISC'} {'text': 'lamb NN I-NP O'} {'text': '. . O O'} {'text': ''} {'text': 'Peter NNP B-NP B-PER'} {'text': 'Blackburn NNP I-NP I-PER'} {'text': ''} {'text': 'BRUSSELS NNP B-NP B-LOC'} {'text': '1996-08-22 CD I-NP O'} {'text': ''} {'text': 'The DT B-NP O'} {'text': 'European NNP I-NP B-ORG'} ```
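Since the maintainers could not reproduce this, and the loaded schema (a single `text` column) looks more like the generic text builder than the conll2003 script, a hedged first step is to rule out a stale cache or a shadowing local file:

```python
from datasets import load_dataset

# Force a fresh download to rule out a stale/corrupted cache entry.
dataset = load_dataset("conll2003", download_mode="force_redownload")

# The canonical script should expose these columns, not a single "text" field.
print(dataset["train"].features)  # expect: id, tokens, pos_tags, chunk_tags, ner_tags
```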
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova", "user_view_type": "public" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/4248/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/4248/timeline
null
completed
{ "completed": 0, "percent_completed": 0, "total": 0 }
{ "blocked_by": 0, "blocking": 0, "total_blocked_by": 0, "total_blocking": 0 }
false
https://api.github.com/repos/huggingface/datasets/issues/4247
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4247/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4247/comments
https://api.github.com/repos/huggingface/datasets/issues/4247/events
https://github.com/huggingface/datasets/issues/4247
1,218,320,882
I_kwDODunzps5Inhny
4,247
The data preview of XGLUE
{ "avatar_url": "https://avatars.githubusercontent.com/u/49108847?v=4", "events_url": "https://api.github.com/users/czq1999/events{/privacy}", "followers_url": "https://api.github.com/users/czq1999/followers", "following_url": "https://api.github.com/users/czq1999/following{/other_user}", "gists_url": "https://api.github.com/users/czq1999/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/czq1999", "id": 49108847, "login": "czq1999", "node_id": "MDQ6VXNlcjQ5MTA4ODQ3", "organizations_url": "https://api.github.com/users/czq1999/orgs", "received_events_url": "https://api.github.com/users/czq1999/received_events", "repos_url": "https://api.github.com/users/czq1999/repos", "site_admin": false, "starred_url": "https://api.github.com/users/czq1999/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/czq1999/subscriptions", "type": "User", "url": "https://api.github.com/users/czq1999", "user_view_type": "public" }
[]
closed
false
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova", "user_view_type": "public" }
[ { "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/o...
null
[ "![image](https://user-images.githubusercontent.com/49108847/165700611-915b4343-766f-4b81-bdaa-b31950250f06.png)\r\n", "Thanks for reporting @czq1999.\r\n\r\nNote that the dataset viewer uses the dataset in streaming mode and that not all datasets support streaming yet.\r\n\r\nThat is the case for XGLUE dataset (...
2022-04-28T07:30:50Z
2022-04-29T08:23:28Z
2022-04-28T16:08:03Z
NONE
null
null
null
null
It seems that something is wrong with the data preview of XGLUE.
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova", "user_view_type": "public" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/4247/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/4247/timeline
null
completed
{ "completed": 0, "percent_completed": 0, "total": 0 }
{ "blocked_by": 0, "blocking": 0, "total_blocked_by": 0, "total_blocking": 0 }
false
https://api.github.com/repos/huggingface/datasets/issues/4241
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4241/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4241/comments
https://api.github.com/repos/huggingface/datasets/issues/4241/events
https://github.com/huggingface/datasets/issues/4241
1,217,423,686
I_kwDODunzps5IkGlG
4,241
NonMatchingChecksumError when attempting to download GLUE
{ "avatar_url": "https://avatars.githubusercontent.com/u/9650729?v=4", "events_url": "https://api.github.com/users/drussellmrichie/events{/privacy}", "followers_url": "https://api.github.com/users/drussellmrichie/followers", "following_url": "https://api.github.com/users/drussellmrichie/following{/other_user}", "gists_url": "https://api.github.com/users/drussellmrichie/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/drussellmrichie", "id": 9650729, "login": "drussellmrichie", "node_id": "MDQ6VXNlcjk2NTA3Mjk=", "organizations_url": "https://api.github.com/users/drussellmrichie/orgs", "received_events_url": "https://api.github.com/users/drussellmrichie/received_events", "repos_url": "https://api.github.com/users/drussellmrichie/repos", "site_admin": false, "starred_url": "https://api.github.com/users/drussellmrichie/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/drussellmrichie/subscriptions", "type": "User", "url": "https://api.github.com/users/drussellmrichie", "user_view_type": "public" }
[ { "color": "d73a4a", "default": true, "description": "Something isn't working", "id": 1935892857, "name": "bug", "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug" } ]
closed
false
null
[]
null
[ "Hi :)\r\n\r\nI think your issue may be related to the older `nlp` library. I was able to download `glue` with the latest version of `datasets`. Can you try updating with:\r\n\r\n```py\r\npip install -U datasets\r\n```\r\n\r\nThen you can download:\r\n\r\n```py\r\nfrom datasets import load_dataset\r\nds = load_data...
2022-04-27T14:14:21Z
2022-04-28T07:45:27Z
2022-04-28T07:45:27Z
NONE
null
null
null
null
## Describe the bug I am trying to download the GLUE dataset from the NLP module but get an error (see below). ## Steps to reproduce the bug ```python import nlp nlp.__version__ # '0.2.0' nlp.load_dataset('glue', name="rte", download_mode="force_redownload") ``` ## Expected results I expect the dataset to download without an error. ## Actual results ``` INFO:nlp.load:Checking /home/richier/.cache/huggingface/datasets/5fe6ab0df8a32a3371b2e6a969d31d855a19563724fb0d0f163748c270c0ac60.2ea96febf19981fae5f13f0a43d4e2aa58bc619bc23acf06de66675f425a5538.py for additional imports. INFO:nlp.load:Found main folder for dataset https://s3.amazonaws.com/datasets.huggingface.co/nlp/datasets/glue/glue.py at /home/richier/anaconda3/envs/py36_bert_ee_torch1_11/lib/python3.6/site-packages/nlp/datasets/glue INFO:nlp.load:Found specific version folder for dataset https://s3.amazonaws.com/datasets.huggingface.co/nlp/datasets/glue/glue.py at /home/richier/anaconda3/envs/py36_bert_ee_torch1_11/lib/python3.6/site-packages/nlp/datasets/glue/637080968c182118f006d3ea39dd9937940e81cfffc8d79836eaae8bba307fc4 INFO:nlp.load:Found script file from https://s3.amazonaws.com/datasets.huggingface.co/nlp/datasets/glue/glue.py to /home/richier/anaconda3/envs/py36_bert_ee_torch1_11/lib/python3.6/site-packages/nlp/datasets/glue/637080968c182118f006d3ea39dd9937940e81cfffc8d79836eaae8bba307fc4/glue.py INFO:nlp.load:Found dataset infos file from https://s3.amazonaws.com/datasets.huggingface.co/nlp/datasets/glue/dataset_infos.json to /home/richier/anaconda3/envs/py36_bert_ee_torch1_11/lib/python3.6/site-packages/nlp/datasets/glue/637080968c182118f006d3ea39dd9937940e81cfffc8d79836eaae8bba307fc4/dataset_infos.json INFO:nlp.load:Found metadata file for dataset https://s3.amazonaws.com/datasets.huggingface.co/nlp/datasets/glue/glue.py at /home/richier/anaconda3/envs/py36_bert_ee_torch1_11/lib/python3.6/site-packages/nlp/datasets/glue/637080968c182118f006d3ea39dd9937940e81cfffc8d79836eaae8bba307fc4/glue.json INFO:nlp.info:Loading Dataset Infos from /home/richier/anaconda3/envs/py36_bert_ee_torch1_11/lib/python3.6/site-packages/nlp/datasets/glue/637080968c182118f006d3ea39dd9937940e81cfffc8d79836eaae8bba307fc4 INFO:nlp.builder:Generating dataset glue (/home/richier/.cache/huggingface/datasets/glue/rte/1.0.0) INFO:nlp.builder:Dataset not on Hf google storage. Downloading and preparing it from source INFO:nlp.utils.file_utils:Couldn't get ETag version for url https://firebasestorage.googleapis.com/v0/b/mtl-sentence-representations.appspot.com/o/data%2FRTE.zip?alt=media&token=5efa7e85-a0bb-4f19-8ea2-9e1840f077fb INFO:nlp.utils.file_utils:https://firebasestorage.googleapis.com/v0/b/mtl-sentence-representations.appspot.com/o/data%2FRTE.zip?alt=media&token=5efa7e85-a0bb-4f19-8ea2-9e1840f077fb not found in cache or force_download set to True, downloading to /home/richier/.cache/huggingface/datasets/downloads/tmpldt3n805 Downloading and preparing dataset glue/rte (download: 680.81 KiB, generated: 1.83 MiB, total: 2.49 MiB) to /home/richier/.cache/huggingface/datasets/glue/rte/1.0.0... 
Downloading: 100%|██████████| 73.0/73.0 [00:00<00:00, 73.9kB/s] INFO:nlp.utils.file_utils:storing https://firebasestorage.googleapis.com/v0/b/mtl-sentence-representations.appspot.com/o/data%2FRTE.zip?alt=media&token=5efa7e85-a0bb-4f19-8ea2-9e1840f077fb in cache at /home/richier/.cache/huggingface/datasets/downloads/e8b62ee44e6f8b6aea761935928579ffe1aa55d161808c482e0725abbdcf9c64 INFO:nlp.utils.file_utils:creating metadata file for /home/richier/.cache/huggingface/datasets/downloads/e8b62ee44e6f8b6aea761935928579ffe1aa55d161808c482e0725abbdcf9c64 --------------------------------------------------------------------------- NonMatchingChecksumError Traceback (most recent call last) <ipython-input-7-669a8343dcc1> in <module> ----> 1 nlp.load_dataset('glue', name="rte", download_mode="force_redownload") ~/anaconda3/envs/py36_bert_ee_torch1_11/lib/python3.6/site-packages/nlp/load.py in load_dataset(path, name, version, data_dir, data_files, split, cache_dir, download_config, download_mode, ignore_verifications, save_infos, **config_kwargs) 518 download_mode=download_mode, 519 ignore_verifications=ignore_verifications, --> 520 save_infos=save_infos, 521 ) 522 ~/anaconda3/envs/py36_bert_ee_torch1_11/lib/python3.6/site-packages/nlp/builder.py in download_and_prepare(self, download_config, download_mode, ignore_verifications, save_infos, try_from_hf_gcs, dl_manager, **download_and_prepare_kwargs) 418 verify_infos = not save_infos and not ignore_verifications 419 self._download_and_prepare( --> 420 dl_manager=dl_manager, verify_infos=verify_infos, **download_and_prepare_kwargs 421 ) 422 # Sync info ~/anaconda3/envs/py36_bert_ee_torch1_11/lib/python3.6/site-packages/nlp/builder.py in _download_and_prepare(self, dl_manager, verify_infos, **prepare_split_kwargs) 458 # Checksums verification 459 if verify_infos: --> 460 verify_checksums(self.info.download_checksums, dl_manager.get_recorded_sizes_checksums()) 461 for split_generator in split_generators: 462 if str(split_generator.split_info.name).lower() == "all": ~/anaconda3/envs/py36_bert_ee_torch1_11/lib/python3.6/site-packages/nlp/utils/info_utils.py in verify_checksums(expected_checksums, recorded_checksums) 34 bad_urls = [url for url in expected_checksums if expected_checksums[url] != recorded_checksums[url]] 35 if len(bad_urls) > 0: ---> 36 raise NonMatchingChecksumError(str(bad_urls)) 37 logger.info("All the checksums matched successfully.") 38 NonMatchingChecksumError: ['https://firebasestorage.googleapis.com/v0/b/mtl-sentence-representations.appspot.com/o/data%2FRTE.zip?alt=media&token=5efa7e85-a0bb-4f19-8ea2-9e1840f077fb'] ``` ## Environment info <!-- You can run the command `datasets-cli env` and copy-and-paste its output below. --> - `datasets` version: 2.0.0 - Platform: Linux-4.18.0-348.20.1.el8_5.x86_64-x86_64-with-redhat-8.5-Ootpa - Python version: 3.6.13 - PyArrow version: 6.0.1 - Pandas version: 1.1.5
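The traceback shows the call going through the long-deprecated `nlp` 0.2.0 package, whose recorded GLUE checksums are stale. A sketch of the fix suggested in the comments — upgrade to the current `datasets` package and load GLUE from there:

```python
# pip install -U datasets
from datasets import load_dataset

rte = load_dataset("glue", "rte")
print(rte)
```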
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova", "user_view_type": "public" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/4241/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/4241/timeline
null
completed
{ "completed": 0, "percent_completed": 0, "total": 0 }
{ "blocked_by": 0, "blocking": 0, "total_blocked_by": 0, "total_blocking": 0 }
false
https://api.github.com/repos/huggingface/datasets/issues/4238
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4238/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4238/comments
https://api.github.com/repos/huggingface/datasets/issues/4238/events
https://github.com/huggingface/datasets/issues/4238
1,217,168,123
I_kwDODunzps5IjIL7
4,238
Dataset caching policy
{ "avatar_url": "https://avatars.githubusercontent.com/u/163333?v=4", "events_url": "https://api.github.com/users/loretoparisi/events{/privacy}", "followers_url": "https://api.github.com/users/loretoparisi/followers", "following_url": "https://api.github.com/users/loretoparisi/following{/other_user}", "gists_url": "https://api.github.com/users/loretoparisi/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/loretoparisi", "id": 163333, "login": "loretoparisi", "node_id": "MDQ6VXNlcjE2MzMzMw==", "organizations_url": "https://api.github.com/users/loretoparisi/orgs", "received_events_url": "https://api.github.com/users/loretoparisi/received_events", "repos_url": "https://api.github.com/users/loretoparisi/repos", "site_admin": false, "starred_url": "https://api.github.com/users/loretoparisi/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/loretoparisi/subscriptions", "type": "User", "url": "https://api.github.com/users/loretoparisi", "user_view_type": "public" }
[ { "color": "d73a4a", "default": true, "description": "Something isn't working", "id": 1935892857, "name": "bug", "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug" } ]
closed
false
null
[]
null
[ "Hi @loretoparisi, thanks for reporting.\r\n\r\nThere is an option to force the redownload of the data files (and thus not using previously download and cached data files): `load_dataset(..., download_mode=\"force_redownload\")`.\r\n\r\nPlease, let me know if this fixes your problem.\r\n\r\nI can confirm you that y...
2022-04-27T10:42:11Z
2022-04-27T16:29:25Z
2022-04-27T16:28:50Z
NONE
null
null
null
null
## Describe the bug I cannot clear the cache of my dataset files, even though I have updated the `csv` files on the repository [here](https://huggingface.co/datasets/loretoparisi/tatoeba-sentences). The original file had a line with bad characters, causing the following error ``` [/usr/local/lib/python3.7/dist-packages/datasets/features/features.py](https://localhost:8080/#) in str2int(self, values) 852 if value not in self._str2int: 853 value = str(value).strip() --> 854 output.append(self._str2int[str(value)]) 855 else: 856 # No names provided, try to integerize KeyError: '\\N' ``` The file is now cleaned up, but I still get the error. This happens even if I inspect the local cached contents and clean up the files locally: ```python from datasets import load_dataset_builder dataset_builder = load_dataset_builder("loretoparisi/tatoeba-sentences") print(dataset_builder.cache_dir) print(dataset_builder.info.features) print(dataset_builder.info.splits) ``` ``` Using custom data configuration loretoparisi--tatoeba-sentences-e59b8ad92f1bb8dd /root/.cache/huggingface/datasets/csv/loretoparisi--tatoeba-sentences-e59b8ad92f1bb8dd/0.0.0/433e0ccc46f9880962cc2b12065189766fbb2bee57a221866138fb9203c83519 None None ``` and even after removing the files located at `/root/.cache/huggingface/datasets/csv/loretoparisi--tatoeba-sentences-*`. Is there any remote file caching policy in place? If so, is it possible to programmatically disable it? Currently it seems that the file `test.csv` on the repo [here](https://huggingface.co/datasets/loretoparisi/tatoeba-sentences/blob/main/test.csv) is cached remotely. In fact, when I download the file locally from the raw link, it is up to date; but if I use it within `datasets` as shown above, it always gives me the first revision of the file, not the latest. Thank you.
## Steps to reproduce the bug ```python from datasets import load_dataset,Features,Value,ClassLabel class_names = ["cmn","deu","rus","fra","eng","jpn","spa","ita","kor","vie","nld","epo","por","tur","heb","hun","ell","ind","ara","arz","fin","bul","yue","swe","ukr","bel","que","ces","swh","nno","wuu","nob","zsm","est","kat","pol","lat","urd","sqi","isl","fry","afr","ron","fao","san","bre","tat","yid","uig","uzb","srp","qya","dan","pes","slk","eus","cycl","acm","tgl","lvs","kaz","hye","hin","lit","ben","cat","bos","hrv","tha","orv","cha","mon","lzh","scn","gle","mkd","slv","frm","glg","vol","ain","jbo","tok","ina","nds","mal","tlh","roh","ltz","oss","ido","gla","mlt","sco","ast","jav","oci","ile","ota","xal","tel","sjn","nov","khm","tpi","ang","aze","tgk","tuk","chv","hsb","dsb","bod","sme","cym","mri","ksh","kmr","ewe","kab","ber","tpw","udm","lld","pms","lad","grn","mlg","xho","pnb","grc","hat","lao","npi","cor","nah","avk","mar","guj","pan","kir","myv","prg","sux","crs","ckt","bak","zlm","hil","cbk","chr","nav","lkt","enm","arq","lin","abk","pcd","rom","gsw","tam","zul","awa","wln","amh","bar","hbo","mhr","bho","mrj","ckb","osx","pfl","mgm","sna","mah","hau","kan","nog","sin","glv","dng","kal","liv","vro","apc","jdt","fur","che","haw","yor","crh","pdc","ppl","kin","shs","mnw","tet","sah","kum","ngt","nya","pus","hif","mya","moh","wol","tir","ton","lzz","oar","lug","brx","non","mww","hak","nlv","ngu","bua","aym","vec","ibo","tkl","bam","kha","ceb","lou","fuc","smo","gag","lfn","arg","umb","tyv","kjh","oji","cyo","urh","kzj","pam","srd","lmo","swg","mdf","gil","snd","tso","sot","zza","tsn","pau","som","egl","ady","asm","ori","dtp","cho","max","kam","niu","sag","ilo","kaa","fuv","nch","hoc","iba","gbm","sun","war","mvv","pap","ary","kxi","csb","pag","cos","rif","kek","krc","aii","ban","ssw","tvl","mfe","tah","bvy","bcl","hnj","nau","nst","afb","quc","min","tmw","mad","bjn","mai","cjy","got","hsn","gan","tzl","dws","ldn","afh","sgs","krl","vep","rue","tly","mic","ext","izh","sma","jam","cmo","mwl","kpv","koi","bis","ike","run","evn","ryu","mnc","aoz","otk","kas","aln","akl","yua","shy","fkv","gos","fij","thv","zgh","gcf","cay","xmf","tig","div","lij","rap","hrx","cpi","tts","gaa","tmr","iii","ltg","bzt","syc","emx","gom","chg","osp","stq","frr","fro","nys","toi","new","phn","jpa","rel","drt","chn","pli","laa","bal","hdn","hax","mik","ajp","xqa","pal","crk","mni","lut","ayl","ood","sdh","ofs","nus","kiu","diq","qxq","alt","bfz","klj","mus","srn","guc","lim","zea","shi","mnr","bom","sat","szl"] features = Features({ 'label': ClassLabel(names=class_names), 'text': Value('string')}) num_labels = features['label'].num_classes data_files = { "train": "train.csv", "test": "test.csv" } sentences = load_dataset( "loretoparisi/tatoeba-sentences", data_files=data_files, delimiter='\t', column_names=['label', 'text'], ) # You can make this part faster with num_proc=<some int> sentences = sentences.map(lambda ex: {"label" : features["label"].str2int(ex["label"]) if ex["label"] is not None else None}, features=features) sentences = sentences.shuffle() ``` ## Expected results Properly tokenize dataset file `test.csv` without issues. ## Actual results Specify the actual results or traceback. 
``` Downloading data files: 100% 2/2 [00:16<00:00, 7.34s/it] Downloading data: 100% 391M/391M [00:12<00:00, 36.6MB/s] Downloading data: 100% 92.4M/92.4M [00:02<00:00, 40.0MB/s] Extracting data files: 100% 2/2 [00:00<00:00, 47.66it/s] Dataset csv downloaded and prepared to /root/.cache/huggingface/datasets/csv/loretoparisi--tatoeba-sentences-efeff8965c730a2c/0.0.0/433e0ccc46f9880962cc2b12065189766fbb2bee57a221866138fb9203c83519. Subsequent calls will reuse this data. 100% 2/2 [00:00<00:00, 25.94it/s] 11% 942339/8256449 [01:55<13:11, 9245.85ex/s] --------------------------------------------------------------------------- KeyError Traceback (most recent call last) [<ipython-input-3-6a9867fad8d6>](https://localhost:8080/#) in <module>() 12 ) 13 # You can make this part faster with num_proc=<some int> ---> 14 sentences = sentences.map(lambda ex: {"label" : features["label"].str2int(ex["label"]) if ex["label"] is not None else None}, features=features) 15 sentences = sentences.shuffle() 10 frames [/usr/local/lib/python3.7/dist-packages/datasets/features/features.py](https://localhost:8080/#) in str2int(self, values) 852 if value not in self._str2int: 853 value = str(value).strip() --> 854 output.append(self._str2int[str(value)]) 855 else: 856 # No names provided, try to integerize KeyError: '\\N' ``` ## Environment info ``` - `datasets` version: 2.1.0 - Platform: Linux-5.4.144+-x86_64-with-Ubuntu-18.04-bionic - Python version: 3.7.13 - PyArrow version: 6.0.1 - Pandas version: 1.3.5 - ``` ``` - `transformers` version: 4.18.0 - Platform: Linux-5.4.144+-x86_64-with-Ubuntu-18.04-bionic - Python version: 3.7.13 - Huggingface_hub version: 0.5.1 - PyTorch version (GPU?): 1.11.0+cu113 (True) - Tensorflow version (GPU?): 2.8.0 (True) - Flax version (CPU?/GPU?/TPU?): not installed (NA) - Jax version: not installed - JaxLib version: not installed - Using GPU in script?: <fill in> - Using distributed or parallel set-up in script?: <fill in> - ```
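The first comment on this issue points to `download_mode="force_redownload"` as the way to skip previously downloaded and cached data files. A minimal sketch of that suggestion, reusing the repository and data files from the report:

```python
from datasets import load_dataset

data_files = {"train": "train.csv", "test": "test.csv"}

# force_redownload makes the builder fetch and prepare the CSV files again
# instead of reusing anything under ~/.cache/huggingface/datasets.
sentences = load_dataset(
    "loretoparisi/tatoeba-sentences",
    data_files=data_files,
    delimiter="\t",
    column_names=["label", "text"],
    download_mode="force_redownload",
)
```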
{ "avatar_url": "https://avatars.githubusercontent.com/u/163333?v=4", "events_url": "https://api.github.com/users/loretoparisi/events{/privacy}", "followers_url": "https://api.github.com/users/loretoparisi/followers", "following_url": "https://api.github.com/users/loretoparisi/following{/other_user}", "gists_url": "https://api.github.com/users/loretoparisi/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/loretoparisi", "id": 163333, "login": "loretoparisi", "node_id": "MDQ6VXNlcjE2MzMzMw==", "organizations_url": "https://api.github.com/users/loretoparisi/orgs", "received_events_url": "https://api.github.com/users/loretoparisi/received_events", "repos_url": "https://api.github.com/users/loretoparisi/repos", "site_admin": false, "starred_url": "https://api.github.com/users/loretoparisi/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/loretoparisi/subscriptions", "type": "User", "url": "https://api.github.com/users/loretoparisi", "user_view_type": "public" }
{ "+1": 1, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 1, "url": "https://api.github.com/repos/huggingface/datasets/issues/4238/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/4238/timeline
null
completed
{ "completed": 0, "percent_completed": 0, "total": 0 }
{ "blocked_by": 0, "blocking": 0, "total_blocked_by": 0, "total_blocking": 0 }
false
https://api.github.com/repos/huggingface/datasets/issues/4237
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4237/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4237/comments
https://api.github.com/repos/huggingface/datasets/issues/4237/events
https://github.com/huggingface/datasets/issues/4237
1,217,121,044
I_kwDODunzps5Ii8sU
4,237
Common Voice 8 doesn't show datasets viewer
{ "avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4", "events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}", "followers_url": "https://api.github.com/users/patrickvonplaten/followers", "following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}", "gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/patrickvonplaten", "id": 23423619, "login": "patrickvonplaten", "node_id": "MDQ6VXNlcjIzNDIzNjE5", "organizations_url": "https://api.github.com/users/patrickvonplaten/orgs", "received_events_url": "https://api.github.com/users/patrickvonplaten/received_events", "repos_url": "https://api.github.com/users/patrickvonplaten/repos", "site_admin": false, "starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions", "type": "User", "url": "https://api.github.com/users/patrickvonplaten", "user_view_type": "public" }
[ { "color": "E5583E", "default": false, "description": "Related to the dataset viewer on huggingface.co", "id": 3470211881, "name": "dataset-viewer", "node_id": "LA_kwDODunzps7O1zsp", "url": "https://api.github.com/repos/huggingface/datasets/labels/dataset-viewer" } ]
closed
false
null
[]
null
[ "Thanks for reporting. I understand it's an error in the dataset script. To reproduce:\r\n\r\n```python\r\n>>> import datasets as ds\r\n>>> split_names = ds.get_dataset_split_names(\"mozilla-foundation/common_voice_8_0\", use_auth_token=\"**********\")\r\nDownloading builder script: 100%|███████████████████████████...
2022-04-27T10:05:20Z
2022-05-10T12:17:05Z
2022-05-10T12:17:04Z
CONTRIBUTOR
null
null
null
null
https://huggingface.co/datasets/mozilla-foundation/common_voice_8_0
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq", "user_view_type": "public" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/4237/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/4237/timeline
null
completed
{ "completed": 0, "percent_completed": 0, "total": 0 }
{ "blocked_by": 0, "blocking": 0, "total_blocked_by": 0, "total_blocking": 0 }
false
https://api.github.com/repos/huggingface/datasets/issues/4235
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4235/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4235/comments
https://api.github.com/repos/huggingface/datasets/issues/4235/events
https://github.com/huggingface/datasets/issues/4235
1,216,952,640
I_kwDODunzps5IiTlA
4,235
How to load VERY LARGE dataset?
{ "avatar_url": "https://avatars.githubusercontent.com/u/45160643?v=4", "events_url": "https://api.github.com/users/CaoYiqingT/events{/privacy}", "followers_url": "https://api.github.com/users/CaoYiqingT/followers", "following_url": "https://api.github.com/users/CaoYiqingT/following{/other_user}", "gists_url": "https://api.github.com/users/CaoYiqingT/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/CaoYiqingT", "id": 45160643, "login": "CaoYiqingT", "node_id": "MDQ6VXNlcjQ1MTYwNjQz", "organizations_url": "https://api.github.com/users/CaoYiqingT/orgs", "received_events_url": "https://api.github.com/users/CaoYiqingT/received_events", "repos_url": "https://api.github.com/users/CaoYiqingT/repos", "site_admin": false, "starred_url": "https://api.github.com/users/CaoYiqingT/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/CaoYiqingT/subscriptions", "type": "User", "url": "https://api.github.com/users/CaoYiqingT", "user_view_type": "public" }
[ { "color": "d73a4a", "default": true, "description": "Something isn't working", "id": 1935892857, "name": "bug", "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug" } ]
closed
false
null
[]
null
[ "The `Trainer` support `IterableDataset`, not just datasets." ]
2022-04-27T07:50:13Z
2023-07-25T15:07:57Z
2023-07-25T15:07:57Z
NONE
null
null
null
null
### System Info ```shell I am using the Transformers Trainer when I run into this issue. The Trainer expects a torch.utils.data.Dataset as input, which loads the whole dataset into memory at once. Therefore, when the dataset is too large to load, there is nothing I can do except use an IterableDataset, which loads samples separately and results in low efficiency. I wonder if there are any tricks like sharding in the Hugging Face Trainer. Looking forward to your reply. ``` ### Who can help? Trainer: @sgugger ### Information - [ ] The official example scripts - [ ] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction None ### Expected behavior ```shell I wonder if there are any tricks like fairseq's sharding of very large datasets (https://fairseq.readthedocs.io/en/latest/getting_started.html). Thanks a lot! ```
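The comment above notes that the `Trainer` also supports `IterableDataset`, not just map-style datasets. A hedged sketch of obtaining one from `datasets` without materializing the whole corpus in memory; the file name and format below are placeholders:

```python
from datasets import load_dataset

# streaming=True returns an IterableDataset: examples are read lazily from the
# source files, so the full dataset is never loaded into memory at once.
streamed = load_dataset(
    "json",
    data_files={"train": "very_large_corpus.jsonl"},  # hypothetical file
    split="train",
    streaming=True,
)

for i, example in enumerate(streamed):
    print(example)  # process one example at a time
    if i == 2:
        break
```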
{ "avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4", "events_url": "https://api.github.com/users/mariosasko/events{/privacy}", "followers_url": "https://api.github.com/users/mariosasko/followers", "following_url": "https://api.github.com/users/mariosasko/following{/other_user}", "gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/mariosasko", "id": 47462742, "login": "mariosasko", "node_id": "MDQ6VXNlcjQ3NDYyNzQy", "organizations_url": "https://api.github.com/users/mariosasko/orgs", "received_events_url": "https://api.github.com/users/mariosasko/received_events", "repos_url": "https://api.github.com/users/mariosasko/repos", "site_admin": false, "starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions", "type": "User", "url": "https://api.github.com/users/mariosasko", "user_view_type": "public" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/4235/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/4235/timeline
null
completed
{ "completed": 0, "percent_completed": 0, "total": 0 }
{ "blocked_by": 0, "blocking": 0, "total_blocked_by": 0, "total_blocking": 0 }
false
https://api.github.com/repos/huggingface/datasets/issues/4230
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4230/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4230/comments
https://api.github.com/repos/huggingface/datasets/issues/4230/events
https://github.com/huggingface/datasets/issues/4230
1,216,643,661
I_kwDODunzps5IhIJN
4,230
Why the `conll2003` dataset on huggingface only contains the `en` subset? Where is the German data?
{ "avatar_url": "https://avatars.githubusercontent.com/u/37113676?v=4", "events_url": "https://api.github.com/users/beyondguo/events{/privacy}", "followers_url": "https://api.github.com/users/beyondguo/followers", "following_url": "https://api.github.com/users/beyondguo/following{/other_user}", "gists_url": "https://api.github.com/users/beyondguo/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/beyondguo", "id": 37113676, "login": "beyondguo", "node_id": "MDQ6VXNlcjM3MTEzNjc2", "organizations_url": "https://api.github.com/users/beyondguo/orgs", "received_events_url": "https://api.github.com/users/beyondguo/received_events", "repos_url": "https://api.github.com/users/beyondguo/repos", "site_admin": false, "starred_url": "https://api.github.com/users/beyondguo/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/beyondguo/subscriptions", "type": "User", "url": "https://api.github.com/users/beyondguo", "user_view_type": "public" }
[ { "color": "a2eeef", "default": true, "description": "New feature or request", "id": 1935892871, "name": "enhancement", "node_id": "MDU6TGFiZWwxOTM1ODkyODcx", "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement" } ]
closed
false
null
[]
null
[ "Thanks for reporting @beyondguo.\r\n\r\nIndeed, we generate this dataset from this raw data file URL: https://data.deepai.org/conll2003.zip\r\nAnd that URL only contains the English version.", "The German data requires payment\r\n\r\nThe [original task page](https://www.clips.uantwerpen.be/conll2003/ner/) states...
2022-04-27T00:53:52Z
2023-07-25T15:10:15Z
2023-07-25T15:10:15Z
NONE
null
null
null
null
![image](https://user-images.githubusercontent.com/37113676/165416606-96b5db18-b16c-4b6b-928c-de8620fd943e.png) But on huggingface datasets: ![image](https://user-images.githubusercontent.com/37113676/165416649-8fd77980-ca0d-43f0-935e-f398ba8323a4.png) Where is the German data?
{ "avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4", "events_url": "https://api.github.com/users/mariosasko/events{/privacy}", "followers_url": "https://api.github.com/users/mariosasko/followers", "following_url": "https://api.github.com/users/mariosasko/following{/other_user}", "gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/mariosasko", "id": 47462742, "login": "mariosasko", "node_id": "MDQ6VXNlcjQ3NDYyNzQy", "organizations_url": "https://api.github.com/users/mariosasko/orgs", "received_events_url": "https://api.github.com/users/mariosasko/received_events", "repos_url": "https://api.github.com/users/mariosasko/repos", "site_admin": false, "starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions", "type": "User", "url": "https://api.github.com/users/mariosasko", "user_view_type": "public" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/4230/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/4230/timeline
null
completed
{ "completed": 0, "percent_completed": 0, "total": 0 }
{ "blocked_by": 0, "blocking": 0, "total_blocked_by": 0, "total_blocking": 0 }
false
https://api.github.com/repos/huggingface/datasets/issues/4221
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4221/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4221/comments
https://api.github.com/repos/huggingface/datasets/issues/4221/events
https://github.com/huggingface/datasets/issues/4221
1,215,911,182
I_kwDODunzps5IeVUO
4,221
Dictionary Feature
{ "avatar_url": "https://avatars.githubusercontent.com/u/2944532?v=4", "events_url": "https://api.github.com/users/jordiae/events{/privacy}", "followers_url": "https://api.github.com/users/jordiae/followers", "following_url": "https://api.github.com/users/jordiae/following{/other_user}", "gists_url": "https://api.github.com/users/jordiae/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/jordiae", "id": 2944532, "login": "jordiae", "node_id": "MDQ6VXNlcjI5NDQ1MzI=", "organizations_url": "https://api.github.com/users/jordiae/orgs", "received_events_url": "https://api.github.com/users/jordiae/received_events", "repos_url": "https://api.github.com/users/jordiae/repos", "site_admin": false, "starred_url": "https://api.github.com/users/jordiae/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/jordiae/subscriptions", "type": "User", "url": "https://api.github.com/users/jordiae", "user_view_type": "public" }
[ { "color": "d876e3", "default": true, "description": "Further information is requested", "id": 1935892912, "name": "question", "node_id": "MDU6TGFiZWwxOTM1ODkyOTEy", "url": "https://api.github.com/repos/huggingface/datasets/labels/question" } ]
closed
false
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova", "user_view_type": "public" }
[ { "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/o...
null
[ "Hi @jordiae,\r\n\r\nInstead of the `Sequence` feature, you can use just a regular list: put the dict between `[` and `]`:\r\n```python\r\n\"list_of_dict_feature\": [\r\n {\r\n \"key1_in_dict\": datasets.Value(\"string\"),\r\n \"key2_in_dict\": datasets.Value(\"int32\"),\r\n ...\r\n }\r\n...
2022-04-26T12:50:18Z
2022-04-29T14:52:19Z
2022-04-28T17:04:58Z
NONE
null
null
null
null
Hi, I'm trying to create the loading script for a dataset in which one feature is a list of dictionaries, which, as far as I know, doesn't fit well with the value types and structures supported by Value and Sequence. Is there any suggested workaround, or am I missing something? Thank you in advance.
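The first comment on this issue (quoted above) shows that a list of dictionaries can be declared by wrapping the dict in plain Python brackets instead of `Sequence`. A small self-contained sketch of that pattern, with illustrative feature names:

```python
import datasets

# Wrapping the dict in a plain Python list declares a "list of dicts" feature;
# Sequence over a dict would instead be stored as a dict of lists.
features = datasets.Features(
    {
        "id": datasets.Value("int32"),
        "annotations": [
            {
                "text": datasets.Value("string"),
                "score": datasets.Value("float32"),
            }
        ],
    }
)

ds = datasets.Dataset.from_dict(
    {"id": [0], "annotations": [[{"text": "hello", "score": 0.9}]]},
    features=features,
)
print(ds.features)
```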
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova", "user_view_type": "public" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/4221/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/4221/timeline
null
completed
{ "completed": 0, "percent_completed": 0, "total": 0 }
{ "blocked_by": 0, "blocking": 0, "total_blocked_by": 0, "total_blocking": 0 }
false
https://api.github.com/repos/huggingface/datasets/issues/4217
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4217/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4217/comments
https://api.github.com/repos/huggingface/datasets/issues/4217/events
https://github.com/huggingface/datasets/issues/4217
1,214,688,141
I_kwDODunzps5IZquN
4,217
Big_Patent dataset broken
{ "avatar_url": "https://avatars.githubusercontent.com/u/54189843?v=4", "events_url": "https://api.github.com/users/Matthew-Larsen/events{/privacy}", "followers_url": "https://api.github.com/users/Matthew-Larsen/followers", "following_url": "https://api.github.com/users/Matthew-Larsen/following{/other_user}", "gists_url": "https://api.github.com/users/Matthew-Larsen/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/Matthew-Larsen", "id": 54189843, "login": "Matthew-Larsen", "node_id": "MDQ6VXNlcjU0MTg5ODQz", "organizations_url": "https://api.github.com/users/Matthew-Larsen/orgs", "received_events_url": "https://api.github.com/users/Matthew-Larsen/received_events", "repos_url": "https://api.github.com/users/Matthew-Larsen/repos", "site_admin": false, "starred_url": "https://api.github.com/users/Matthew-Larsen/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Matthew-Larsen/subscriptions", "type": "User", "url": "https://api.github.com/users/Matthew-Larsen", "user_view_type": "public" }
[ { "color": "8B51EF", "default": false, "description": "", "id": 4069435429, "name": "hosted-on-google-drive", "node_id": "LA_kwDODunzps7yjqgl", "url": "https://api.github.com/repos/huggingface/datasets/labels/hosted-on-google-drive" } ]
closed
false
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova", "user_view_type": "public" }
[ { "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/o...
null
[ "Thanks for reporting. The issue seems not to be directly related to the dataset viewer or the `datasets` library, but instead to it being hosted on Google Drive.\r\n\r\nSee related issues: https://github.com/huggingface/datasets/issues?q=is%3Aissue+is%3Aopen+drive.google.com\r\n\r\nTo quote [@lhoestq](https://gith...
2022-04-25T15:31:45Z
2022-05-26T06:29:43Z
2022-05-02T18:21:15Z
NONE
null
null
null
null
## Dataset viewer issue for '*big_patent*' **Link:** *[link to the dataset viewer page](https://huggingface.co/datasets/big_patent/viewer/all/train)* *Unable to view it because the viewer reports FileNotFound; the dataset also cannot be downloaded through the Python API.* Am I the one who added this dataset? No
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova", "user_view_type": "public" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/4217/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/4217/timeline
null
completed
{ "completed": 0, "percent_completed": 0, "total": 0 }
{ "blocked_by": 0, "blocking": 0, "total_blocked_by": 0, "total_blocking": 0 }
false
https://api.github.com/repos/huggingface/datasets/issues/4211
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4211/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4211/comments
https://api.github.com/repos/huggingface/datasets/issues/4211/events
https://github.com/huggingface/datasets/issues/4211
1,214,361,837
I_kwDODunzps5IYbDt
4,211
DatasetDict containing Datasets with different features when pushed to hub gets remapped features
{ "avatar_url": "https://avatars.githubusercontent.com/u/61748653?v=4", "events_url": "https://api.github.com/users/pietrolesci/events{/privacy}", "followers_url": "https://api.github.com/users/pietrolesci/followers", "following_url": "https://api.github.com/users/pietrolesci/following{/other_user}", "gists_url": "https://api.github.com/users/pietrolesci/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/pietrolesci", "id": 61748653, "login": "pietrolesci", "node_id": "MDQ6VXNlcjYxNzQ4NjUz", "organizations_url": "https://api.github.com/users/pietrolesci/orgs", "received_events_url": "https://api.github.com/users/pietrolesci/received_events", "repos_url": "https://api.github.com/users/pietrolesci/repos", "site_admin": false, "starred_url": "https://api.github.com/users/pietrolesci/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/pietrolesci/subscriptions", "type": "User", "url": "https://api.github.com/users/pietrolesci", "user_view_type": "public" }
[ { "color": "d73a4a", "default": true, "description": "Something isn't working", "id": 1935892857, "name": "bug", "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug" } ]
closed
false
{ "avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4", "events_url": "https://api.github.com/users/mariosasko/events{/privacy}", "followers_url": "https://api.github.com/users/mariosasko/followers", "following_url": "https://api.github.com/users/mariosasko/following{/other_user}", "gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/mariosasko", "id": 47462742, "login": "mariosasko", "node_id": "MDQ6VXNlcjQ3NDYyNzQy", "organizations_url": "https://api.github.com/users/mariosasko/orgs", "received_events_url": "https://api.github.com/users/mariosasko/received_events", "repos_url": "https://api.github.com/users/mariosasko/repos", "site_admin": false, "starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions", "type": "User", "url": "https://api.github.com/users/mariosasko", "user_view_type": "public" }
[ { "avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4", "events_url": "https://api.github.com/users/mariosasko/events{/privacy}", "followers_url": "https://api.github.com/users/mariosasko/followers", "following_url": "https://api.github.com/users/mariosasko/following{/other_user}", ...
null
[ "Hi @pietrolesci, thanks for reporting.\r\n\r\nPlease note that this is a design purpose: a `DatasetDict` has the same features for all its datasets. Normally, a `DatasetDict` is composed of several sub-datasets each corresponding to a different **split**.\r\n\r\nTo handle sub-datasets with different features, we u...
2022-04-25T11:22:54Z
2023-04-06T19:25:50Z
2022-05-20T15:15:30Z
NONE
null
null
null
null
Hi there, I am trying to load a dataset to the Hub. This dataset is a `DatasetDict` composed of various splits. Some splits have a different `Feature` mapping. Locally, the DatasetDict preserves the individual features but if I `push_to_hub` and then `load_dataset`, the features are all the same. Dataset and code to reproduce available [here](https://huggingface.co/datasets/pietrolesci/robust_nli). In short: I have 3 feature mapping ```python Tri_features = Features( { "idx": Value(dtype="int64"), "premise": Value(dtype="string"), "hypothesis": Value(dtype="string"), "label": ClassLabel(num_classes=3, names=["entailment", "neutral", "contradiction"]), } ) Ent_features = Features( { "idx": Value(dtype="int64"), "premise": Value(dtype="string"), "hypothesis": Value(dtype="string"), "label": ClassLabel(num_classes=2, names=["non-entailment", "entailment"]), } ) Con_features = Features( { "idx": Value(dtype="int64"), "premise": Value(dtype="string"), "hypothesis": Value(dtype="string"), "label": ClassLabel(num_classes=2, names=["non-contradiction", "contradiction"]), } ) ``` Then I create different datasets ```python dataset_splits = {} for split in df["split"].unique(): print(split) df_split = df.loc[df["split"] == split].copy() if split in Tri_dataset: df_split["label"] = df_split["label"].map({"entailment": 0, "neutral": 1, "contradiction": 2}) ds = Dataset.from_pandas(df_split, features=Tri_features) elif split in Ent_bin_dataset: df_split["label"] = df_split["label"].map({"non-entailment": 0, "entailment": 1}) ds = Dataset.from_pandas(df_split, features=Ent_features) elif split in Con_bin_dataset: df_split["label"] = df_split["label"].map({"non-contradiction": 0, "contradiction": 1}) ds = Dataset.from_pandas(df_split, features=Con_features) else: print("ERROR:", split) dataset_splits[split] = ds datasets = DatasetDict(dataset_splits) ``` I then push to hub ```python datasets.push_to_hub("pietrolesci/robust_nli", token="<token>") ``` Finally, I load it from the hub ```python datasets_loaded_from_hub = load_dataset("pietrolesci/robust_nli") ``` And I get that ```python datasets["LI_TS"].features != datasets_loaded_from_hub["LI_TS"].features ``` since ```python "label": ClassLabel(num_classes=2, names=["non-contradiction", "contradiction"]) ``` gets remapped to ```python "label": ClassLabel(num_classes=3, names=["entailment", "neutral", "contradiction"]) ```
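The maintainers' reply above explains that, by design, a `DatasetDict` shares a single feature schema across its splits, so the per-split `ClassLabel` definitions get merged when the dataset is pushed to the Hub. As a hedged workaround sketch (not an official recommendation), the intended two-class mapping can be re-applied after loading with `cast_column`, since only the label metadata was remapped while the stored integer values are unchanged:

```python
from datasets import ClassLabel, load_dataset

datasets_loaded_from_hub = load_dataset("pietrolesci/robust_nli")

# Restore the binary label definition the LI_TS split was originally built with.
datasets_loaded_from_hub["LI_TS"] = datasets_loaded_from_hub["LI_TS"].cast_column(
    "label",
    ClassLabel(num_classes=2, names=["non-contradiction", "contradiction"]),
)
print(datasets_loaded_from_hub["LI_TS"].features["label"])
```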
{ "avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4", "events_url": "https://api.github.com/users/mariosasko/events{/privacy}", "followers_url": "https://api.github.com/users/mariosasko/followers", "following_url": "https://api.github.com/users/mariosasko/following{/other_user}", "gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/mariosasko", "id": 47462742, "login": "mariosasko", "node_id": "MDQ6VXNlcjQ3NDYyNzQy", "organizations_url": "https://api.github.com/users/mariosasko/orgs", "received_events_url": "https://api.github.com/users/mariosasko/received_events", "repos_url": "https://api.github.com/users/mariosasko/repos", "site_admin": false, "starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions", "type": "User", "url": "https://api.github.com/users/mariosasko", "user_view_type": "public" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/4211/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/4211/timeline
null
completed
{ "completed": 0, "percent_completed": 0, "total": 0 }
{ "blocked_by": 0, "blocking": 0, "total_blocked_by": 0, "total_blocking": 0 }
false
https://api.github.com/repos/huggingface/datasets/issues/4210
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4210/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4210/comments
https://api.github.com/repos/huggingface/datasets/issues/4210/events
https://github.com/huggingface/datasets/issues/4210
1,214,089,130
I_kwDODunzps5IXYeq
4,210
TypeError: Cannot cast array data from dtype('O') to dtype('int64') according to the rule 'safe'
{ "avatar_url": "https://avatars.githubusercontent.com/u/163333?v=4", "events_url": "https://api.github.com/users/loretoparisi/events{/privacy}", "followers_url": "https://api.github.com/users/loretoparisi/followers", "following_url": "https://api.github.com/users/loretoparisi/following{/other_user}", "gists_url": "https://api.github.com/users/loretoparisi/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/loretoparisi", "id": 163333, "login": "loretoparisi", "node_id": "MDQ6VXNlcjE2MzMzMw==", "organizations_url": "https://api.github.com/users/loretoparisi/orgs", "received_events_url": "https://api.github.com/users/loretoparisi/received_events", "repos_url": "https://api.github.com/users/loretoparisi/repos", "site_admin": false, "starred_url": "https://api.github.com/users/loretoparisi/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/loretoparisi/subscriptions", "type": "User", "url": "https://api.github.com/users/loretoparisi", "user_view_type": "public" }
[ { "color": "d73a4a", "default": true, "description": "Something isn't working", "id": 1935892857, "name": "bug", "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug" } ]
closed
false
null
[]
null
[ "Hi! Casting class labels from strings is currently not supported in the CSV loader, but you can get the same result with an additional map as follows:\r\n```python\r\nfrom datasets import load_dataset,Features,Value,ClassLabel\r\nclass_names = [\"cmn\",\"deu\",\"rus\",\"fra\",\"eng\",\"jpn\",\"spa\",\"ita\",\"kor\...
2022-04-25T07:28:42Z
2022-05-31T12:16:31Z
2022-05-31T12:16:31Z
NONE
null
null
null
null
### System Info ```shell - `transformers` version: 4.18.0 - Platform: Linux-5.4.144+-x86_64-with-Ubuntu-18.04-bionic - Python version: 3.7.13 - Huggingface_hub version: 0.5.1 - PyTorch version (GPU?): 1.10.0+cu111 (True) - Tensorflow version (GPU?): 2.8.0 (True) - Flax version (CPU?/GPU?/TPU?): not installed (NA) - Jax version: not installed - JaxLib version: not installed - Using GPU in script?: <fill in> - Using distributed or parallel set-up in script?: <fill in> ``` ### Who can help? @LysandreJik ### Information - [ ] The official example scripts - [X] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [X] My own task or dataset (give details below) ### Reproduction ```python from datasets import load_dataset,Features,Value,ClassLabel class_names = ["cmn","deu","rus","fra","eng","jpn","spa","ita","kor","vie","nld","epo","por","tur","heb","hun","ell","ind","ara","arz","fin","bul","yue","swe","ukr","bel","que","ces","swh","nno","wuu","nob","zsm","est","kat","pol","lat","urd","sqi","isl","fry","afr","ron","fao","san","bre","tat","yid","uig","uzb","srp","qya","dan","pes","slk","eus","cycl","acm","tgl","lvs","kaz","hye","hin","lit","ben","cat","bos","hrv","tha","orv","cha","mon","lzh","scn","gle","mkd","slv","frm","glg","vol","ain","jbo","tok","ina","nds","mal","tlh","roh","ltz","oss","ido","gla","mlt","sco","ast","jav","oci","ile","ota","xal","tel","sjn","nov","khm","tpi","ang","aze","tgk","tuk","chv","hsb","dsb","bod","sme","cym","mri","ksh","kmr","ewe","kab","ber","tpw","udm","lld","pms","lad","grn","mlg","xho","pnb","grc","hat","lao","npi","cor","nah","avk","mar","guj","pan","kir","myv","prg","sux","crs","ckt","bak","zlm","hil","cbk","chr","nav","lkt","enm","arq","lin","abk","pcd","rom","gsw","tam","zul","awa","wln","amh","bar","hbo","mhr","bho","mrj","ckb","osx","pfl","mgm","sna","mah","hau","kan","nog","sin","glv","dng","kal","liv","vro","apc","jdt","fur","che","haw","yor","crh","pdc","ppl","kin","shs","mnw","tet","sah","kum","ngt","nya","pus","hif","mya","moh","wol","tir","ton","lzz","oar","lug","brx","non","mww","hak","nlv","ngu","bua","aym","vec","ibo","tkl","bam","kha","ceb","lou","fuc","smo","gag","lfn","arg","umb","tyv","kjh","oji","cyo","urh","kzj","pam","srd","lmo","swg","mdf","gil","snd","tso","sot","zza","tsn","pau","som","egl","ady","asm","ori","dtp","cho","max","kam","niu","sag","ilo","kaa","fuv","nch","hoc","iba","gbm","sun","war","mvv","pap","ary","kxi","csb","pag","cos","rif","kek","krc","aii","ban","ssw","tvl","mfe","tah","bvy","bcl","hnj","nau","nst","afb","quc","min","tmw","mad","bjn","mai","cjy","got","hsn","gan","tzl","dws","ldn","afh","sgs","krl","vep","rue","tly","mic","ext","izh","sma","jam","cmo","mwl","kpv","koi","bis","ike","run","evn","ryu","mnc","aoz","otk","kas","aln","akl","yua","shy","fkv","gos","fij","thv","zgh","gcf","cay","xmf","tig","div","lij","rap","hrx","cpi","tts","gaa","tmr","iii","ltg","bzt","syc","emx","gom","chg","osp","stq","frr","fro","nys","toi","new","phn","jpa","rel","drt","chn","pli","laa","bal","hdn","hax","mik","ajp","xqa","pal","crk","mni","lut","ayl","ood","sdh","ofs","nus","kiu","diq","qxq","alt","bfz","klj","mus","srn","guc","lim","zea","shi","mnr","bom","sat","szl"] features = Features({ 'label': ClassLabel(names=class_names), 'text': Value('string')}) num_labels = features['label'].num_classes data_files = { "train": "train.csv", "test": "test.csv" } sentences = load_dataset("loretoparisi/tatoeba-sentences", data_files=data_files, delimiter='\t', 
column_names=['label', 'text'], features = features ``` ERROR: ``` ClassLabel(num_classes=403, names=['cmn', 'deu', 'rus', 'fra', 'eng', 'jpn', 'spa', 'ita', 'kor', 'vie', 'nld', 'epo', 'por', 'tur', 'heb', 'hun', 'ell', 'ind', 'ara', 'arz', 'fin', 'bul', 'yue', 'swe', 'ukr', 'bel', 'que', 'ces', 'swh', 'nno', 'wuu', 'nob', 'zsm', 'est', 'kat', 'pol', 'lat', 'urd', 'sqi', 'isl', 'fry', 'afr', 'ron', 'fao', 'san', 'bre', 'tat', 'yid', 'uig', 'uzb', 'srp', 'qya', 'dan', 'pes', 'slk', 'eus', 'cycl', 'acm', 'tgl', 'lvs', 'kaz', 'hye', 'hin', 'lit', 'ben', 'cat', 'bos', 'hrv', 'tha', 'orv', 'cha', 'mon', 'lzh', 'scn', 'gle', 'mkd', 'slv', 'frm', 'glg', 'vol', 'ain', 'jbo', 'tok', 'ina', 'nds', 'mal', 'tlh', 'roh', 'ltz', 'oss', 'ido', 'gla', 'mlt', 'sco', 'ast', 'jav', 'oci', 'ile', 'ota', 'xal', 'tel', 'sjn', 'nov', 'khm', 'tpi', 'ang', 'aze', 'tgk', 'tuk', 'chv', 'hsb', 'dsb', 'bod', 'sme', 'cym', 'mri', 'ksh', 'kmr', 'ewe', 'kab', 'ber', 'tpw', 'udm', 'lld', 'pms', 'lad', 'grn', 'mlg', 'xho', 'pnb', 'grc', 'hat', 'lao', 'npi', 'cor', 'nah', 'avk', 'mar', 'guj', 'pan', 'kir', 'myv', 'prg', 'sux', 'crs', 'ckt', 'bak', 'zlm', 'hil', 'cbk', 'chr', 'nav', 'lkt', 'enm', 'arq', 'lin', 'abk', 'pcd', 'rom', 'gsw', 'tam', 'zul', 'awa', 'wln', 'amh', 'bar', 'hbo', 'mhr', 'bho', 'mrj', 'ckb', 'osx', 'pfl', 'mgm', 'sna', 'mah', 'hau', 'kan', 'nog', 'sin', 'glv', 'dng', 'kal', 'liv', 'vro', 'apc', 'jdt', 'fur', 'che', 'haw', 'yor', 'crh', 'pdc', 'ppl', 'kin', 'shs', 'mnw', 'tet', 'sah', 'kum', 'ngt', 'nya', 'pus', 'hif', 'mya', 'moh', 'wol', 'tir', 'ton', 'lzz', 'oar', 'lug', 'brx', 'non', 'mww', 'hak', 'nlv', 'ngu', 'bua', 'aym', 'vec', 'ibo', 'tkl', 'bam', 'kha', 'ceb', 'lou', 'fuc', 'smo', 'gag', 'lfn', 'arg', 'umb', 'tyv', 'kjh', 'oji', 'cyo', 'urh', 'kzj', 'pam', 'srd', 'lmo', 'swg', 'mdf', 'gil', 'snd', 'tso', 'sot', 'zza', 'tsn', 'pau', 'som', 'egl', 'ady', 'asm', 'ori', 'dtp', 'cho', 'max', 'kam', 'niu', 'sag', 'ilo', 'kaa', 'fuv', 'nch', 'hoc', 'iba', 'gbm', 'sun', 'war', 'mvv', 'pap', 'ary', 'kxi', 'csb', 'pag', 'cos', 'rif', 'kek', 'krc', 'aii', 'ban', 'ssw', 'tvl', 'mfe', 'tah', 'bvy', 'bcl', 'hnj', 'nau', 'nst', 'afb', 'quc', 'min', 'tmw', 'mad', 'bjn', 'mai', 'cjy', 'got', 'hsn', 'gan', 'tzl', 'dws', 'ldn', 'afh', 'sgs', 'krl', 'vep', 'rue', 'tly', 'mic', 'ext', 'izh', 'sma', 'jam', 'cmo', 'mwl', 'kpv', 'koi', 'bis', 'ike', 'run', 'evn', 'ryu', 'mnc', 'aoz', 'otk', 'kas', 'aln', 'akl', 'yua', 'shy', 'fkv', 'gos', 'fij', 'thv', 'zgh', 'gcf', 'cay', 'xmf', 'tig', 'div', 'lij', 'rap', 'hrx', 'cpi', 'tts', 'gaa', 'tmr', 'iii', 'ltg', 'bzt', 'syc', 'emx', 'gom', 'chg', 'osp', 'stq', 'frr', 'fro', 'nys', 'toi', 'new', 'phn', 'jpa', 'rel', 'drt', 'chn', 'pli', 'laa', 'bal', 'hdn', 'hax', 'mik', 'ajp', 'xqa', 'pal', 'crk', 'mni', 'lut', 'ayl', 'ood', 'sdh', 'ofs', 'nus', 'kiu', 'diq', 'qxq', 'alt', 'bfz', 'klj', 'mus', 'srn', 'guc', 'lim', 'zea', 'shi', 'mnr', 'bom', 'sat', 'szl'], id=None) Value(dtype='string', id=None) Using custom data configuration loretoparisi--tatoeba-sentences-7b2c5e991f398f39 Downloading and preparing dataset csv/loretoparisi--tatoeba-sentences to /root/.cache/huggingface/datasets/csv/loretoparisi--tatoeba-sentences-7b2c5e991f398f39/0.0.0/433e0ccc46f9880962cc2b12065189766fbb2bee57a221866138fb9203c83519... 
Downloading data files: 100% 2/2 [00:18<00:00, 8.06s/it] Downloading data: 100% 391M/391M [00:13<00:00, 35.3MB/s] Downloading data: 100% 92.4M/92.4M [00:02<00:00, 36.5MB/s] Failed to read file '/root/.cache/huggingface/datasets/downloads/933132df9905194ea9faeb30cabca8c49318795612f6495fcb941a290191dd5d' with error <class 'ValueError'>: invalid literal for int() with base 10: 'cmn' --------------------------------------------------------------------------- TypeError Traceback (most recent call last) /usr/local/lib/python3.7/dist-packages/pandas/_libs/parsers.pyx in pandas._libs.parsers.TextReader._convert_tokens() TypeError: Cannot cast array data from dtype('O') to dtype('int64') according to the rule 'safe' During handling of the above exception, another exception occurred: ValueError Traceback (most recent call last) 15 frames /usr/local/lib/python3.7/dist-packages/pandas/_libs/parsers.pyx in pandas._libs.parsers.TextReader._convert_tokens() ValueError: invalid literal for int() with base 10: 'cmn' ``` while loading without `features` it loads without errors ``` sentences = load_dataset("loretoparisi/tatoeba-sentences", data_files=data_files, delimiter='\t', column_names=['label', 'text'] ) ``` but the `label` col seems to be wrong (without the `ClassLabel` object): ``` sentences['train'].features {'label': Value(dtype='string', id=None), 'text': Value(dtype='string', id=None)} ``` The dataset was https://huggingface.co/datasets/loretoparisi/tatoeba-sentences Dataset format is: ``` ces Nechci vědět, co je tam uvnitř. ces Kdo o tom chce slyšet? deu Tom sagte, er fühle sich nicht wohl. ber Mel-iyi-d anida-t tura ? hun Gondom lesz rá rögtön. ber Mel-iyi-d anida-tt tura ? deu Ich will dich nicht reden hören. ``` ### Expected behavior ```shell correctly load train and test files. ```
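The first comment on this issue explains that casting class labels from strings is not supported by the CSV loader directly, and suggests loading plain strings first and converting them with an additional `map`. A shortened, hedged sketch of that suggestion (the 400+ language codes are truncated here for readability):

```python
from datasets import ClassLabel, Features, Value, load_dataset

class_names = ["cmn", "deu", "rus", "fra", "eng"]  # truncated list, for illustration only
features = Features({"label": ClassLabel(names=class_names), "text": Value("string")})

# 1) Load the CSV with plain string labels (no ClassLabel yet)...
sentences = load_dataset(
    "loretoparisi/tatoeba-sentences",
    data_files={"train": "train.csv", "test": "test.csv"},
    delimiter="\t",
    column_names=["label", "text"],
)
# 2) ...then convert label strings to class ids and attach the ClassLabel feature.
sentences = sentences.map(
    lambda ex: {"label": features["label"].str2int(ex["label"])},
    features=features,
)
```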
{ "avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4", "events_url": "https://api.github.com/users/mariosasko/events{/privacy}", "followers_url": "https://api.github.com/users/mariosasko/followers", "following_url": "https://api.github.com/users/mariosasko/following{/other_user}", "gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/mariosasko", "id": 47462742, "login": "mariosasko", "node_id": "MDQ6VXNlcjQ3NDYyNzQy", "organizations_url": "https://api.github.com/users/mariosasko/orgs", "received_events_url": "https://api.github.com/users/mariosasko/received_events", "repos_url": "https://api.github.com/users/mariosasko/repos", "site_admin": false, "starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions", "type": "User", "url": "https://api.github.com/users/mariosasko", "user_view_type": "public" }
{ "+1": 1, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 1, "url": "https://api.github.com/repos/huggingface/datasets/issues/4210/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/4210/timeline
null
completed
{ "completed": 0, "percent_completed": 0, "total": 0 }
{ "blocked_by": 0, "blocking": 0, "total_blocked_by": 0, "total_blocking": 0 }
false
https://api.github.com/repos/huggingface/datasets/issues/4199
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4199/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4199/comments
https://api.github.com/repos/huggingface/datasets/issues/4199/events
https://github.com/huggingface/datasets/issues/4199
1,211,953,308
I_kwDODunzps5IPPCc
4,199
Cache miss during reload for datasets using image fetch utilities through map
{ "avatar_url": "https://avatars.githubusercontent.com/u/3616806?v=4", "events_url": "https://api.github.com/users/apsdehal/events{/privacy}", "followers_url": "https://api.github.com/users/apsdehal/followers", "following_url": "https://api.github.com/users/apsdehal/following{/other_user}", "gists_url": "https://api.github.com/users/apsdehal/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/apsdehal", "id": 3616806, "login": "apsdehal", "node_id": "MDQ6VXNlcjM2MTY4MDY=", "organizations_url": "https://api.github.com/users/apsdehal/orgs", "received_events_url": "https://api.github.com/users/apsdehal/received_events", "repos_url": "https://api.github.com/users/apsdehal/repos", "site_admin": false, "starred_url": "https://api.github.com/users/apsdehal/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/apsdehal/subscriptions", "type": "User", "url": "https://api.github.com/users/apsdehal", "user_view_type": "public" }
[ { "color": "d73a4a", "default": true, "description": "Something isn't working", "id": 1935892857, "name": "bug", "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug" } ]
closed
false
{ "avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4", "events_url": "https://api.github.com/users/mariosasko/events{/privacy}", "followers_url": "https://api.github.com/users/mariosasko/followers", "following_url": "https://api.github.com/users/mariosasko/following{/other_user}", "gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/mariosasko", "id": 47462742, "login": "mariosasko", "node_id": "MDQ6VXNlcjQ3NDYyNzQy", "organizations_url": "https://api.github.com/users/mariosasko/orgs", "received_events_url": "https://api.github.com/users/mariosasko/received_events", "repos_url": "https://api.github.com/users/mariosasko/repos", "site_admin": false, "starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions", "type": "User", "url": "https://api.github.com/users/mariosasko", "user_view_type": "public" }
[ { "avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4", "events_url": "https://api.github.com/users/mariosasko/events{/privacy}", "followers_url": "https://api.github.com/users/mariosasko/followers", "following_url": "https://api.github.com/users/mariosasko/following{/other_user}", ...
null
[ "Hi ! Maybe one of the objects in the function is not deterministic across sessions ? You can read more about it and how to investigate here: https://huggingface.co/docs/datasets/about_cache", "Hi @apsdehal! Can you verify that replacing\r\n```python\r\ndef fetch_single_image(image_url, timeout=None, retries=0):\...
2022-04-22T07:47:08Z
2022-04-26T17:00:32Z
2022-04-26T13:38:26Z
CONTRIBUTOR
null
null
null
null
## Describe the bug It looks like datasets produced by a `.map` operation miss the cache when you reload the script, and are always rebuilt from scratch. Within the same interpreter session they are able to find the cache and reload it. But when you exit the interpreter and reload it, the downloading starts from scratch. ## Steps to reproduce the bug Using the example provided in the `red_caps` dataset. ```python from concurrent.futures import ThreadPoolExecutor from functools import partial import io import os import re import urllib import PIL.Image import datasets from datasets import load_dataset from datasets.utils.file_utils import get_datasets_user_agent def fetch_single_image(image_url, timeout=None, retries=0): for _ in range(retries + 1): try: request = urllib.request.Request( image_url, data=None, headers={"user-agent": get_datasets_user_agent()}, ) with urllib.request.urlopen(request, timeout=timeout) as req: image = PIL.Image.open(io.BytesIO(req.read())) break except Exception: image = None return image def fetch_images(batch, num_threads, timeout=None, retries=0): fetch_single_image_with_args = partial(fetch_single_image, timeout=timeout, retries=retries) with ThreadPoolExecutor(max_workers=num_threads) as executor: batch["image"] = list(executor.map(lambda image_urls: [fetch_single_image_with_args(image_url) for image_url in image_urls], batch["image_url"])) return batch def process_image_urls(batch): processed_batch_image_urls = [] for image_url in batch["image_url"]: processed_example_image_urls = [] image_url_splits = re.findall(r"http\S+", image_url) for image_url_split in image_url_splits: if "imgur" in image_url_split and "," in image_url_split: for image_url_part in image_url_split.split(","): if not image_url_part: continue image_url_part = image_url_part.strip() root, ext = os.path.splitext(image_url_part) if not root.startswith("http"): root = "http://i.imgur.com/" + root root = root.split("#")[0] if not ext: ext = ".jpg" ext = re.split(r"[?%]", ext)[0] image_url_part = root + ext processed_example_image_urls.append(image_url_part) else: processed_example_image_urls.append(image_url_split) processed_batch_image_urls.append(processed_example_image_urls) batch["image_url"] = processed_batch_image_urls return batch dset = load_dataset("red_caps", "jellyfish") dset = dset.map(process_image_urls, batched=True, num_proc=4) features = dset["train"].features.copy() features["image"] = datasets.Sequence(datasets.Image()) num_threads = 5 dset = dset.map(fetch_images, batched=True, batch_size=50, features=features, fn_kwargs={"num_threads": num_threads}) ``` Run this in an interpreter or as a script twice and see that the cache is missed the second time. ## Expected results At reload there should not be any cache miss. ## Actual results Every time the script is run, the cache is missed and the dataset is built from scratch. ## Environment info - `datasets` version: 2.1.1.dev0 - Platform: Linux-4.19.0-20-cloud-amd64-x86_64-with-glibc2.10 - Python version: 3.8.13 - PyArrow version: 7.0.0 - Pandas version: 1.4.1
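The first comment on this issue suggests that one of the mapped functions may not hash deterministically across sessions, which invalidates the `.map` cache, and points to the caching docs for how to investigate. A hedged debugging sketch using the fingerprinting helper referenced there; it assumes the `fetch_single_image` and `fetch_images` functions from the snippet above are in scope:

```python
from datasets.fingerprint import Hasher

# If either hash changes between two Python sessions, datasets treats the
# transform as new and recomputes the .map() result instead of reusing the cache.
print(Hasher.hash(fetch_single_image))
print(Hasher.hash(fetch_images))
```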
{ "avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4", "events_url": "https://api.github.com/users/mariosasko/events{/privacy}", "followers_url": "https://api.github.com/users/mariosasko/followers", "following_url": "https://api.github.com/users/mariosasko/following{/other_user}", "gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/mariosasko", "id": 47462742, "login": "mariosasko", "node_id": "MDQ6VXNlcjQ3NDYyNzQy", "organizations_url": "https://api.github.com/users/mariosasko/orgs", "received_events_url": "https://api.github.com/users/mariosasko/received_events", "repos_url": "https://api.github.com/users/mariosasko/repos", "site_admin": false, "starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions", "type": "User", "url": "https://api.github.com/users/mariosasko", "user_view_type": "public" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/4199/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/4199/timeline
null
completed
{ "completed": 0, "percent_completed": 0, "total": 0 }
{ "blocked_by": 0, "blocking": 0, "total_blocked_by": 0, "total_blocking": 0 }
false
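Editor's note on the caching report above, with a small hedged sketch that is not part of the original issue: `datasets` decides whether it can reuse a cached `.map()` result by hashing the function that was passed in, so one way to investigate a cache miss across sessions is to hash the function yourself in two separate interpreter runs and compare the output. The `fetch_images_stub` below is a placeholder standing in for the issue's real `fetch_images`; `datasets.fingerprint.Hasher` is the helper the library itself uses for fingerprinting.

```python
# Hedged sketch: check whether a map function hashes deterministically.
# If this value changes between two interpreter sessions, .map() cannot find
# its cache and re-runs from scratch, as described in the issue above.
from datasets.fingerprint import Hasher

def fetch_images_stub(batch, num_threads, timeout=None, retries=0):
    # Placeholder for the real fetch_images from the report.
    return batch

print(Hasher.hash(fetch_images_stub))
```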
https://api.github.com/repos/huggingface/datasets/issues/4198
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4198/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4198/comments
https://api.github.com/repos/huggingface/datasets/issues/4198/events
https://github.com/huggingface/datasets/issues/4198
1,211,456,559
I_kwDODunzps5INVwv
4,198
There is no dataset
{ "avatar_url": "https://avatars.githubusercontent.com/u/1625647?v=4", "events_url": "https://api.github.com/users/wilfoderek/events{/privacy}", "followers_url": "https://api.github.com/users/wilfoderek/followers", "following_url": "https://api.github.com/users/wilfoderek/following{/other_user}", "gists_url": "https://api.github.com/users/wilfoderek/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/wilfoderek", "id": 1625647, "login": "wilfoderek", "node_id": "MDQ6VXNlcjE2MjU2NDc=", "organizations_url": "https://api.github.com/users/wilfoderek/orgs", "received_events_url": "https://api.github.com/users/wilfoderek/received_events", "repos_url": "https://api.github.com/users/wilfoderek/repos", "site_admin": false, "starred_url": "https://api.github.com/users/wilfoderek/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/wilfoderek/subscriptions", "type": "User", "url": "https://api.github.com/users/wilfoderek", "user_view_type": "public" }
[]
closed
false
null
[]
null
[]
2022-04-21T19:19:26Z
2022-05-03T11:29:05Z
2022-04-22T06:12:25Z
NONE
null
null
null
null
## Dataset viewer issue for '*name of the dataset*' **Link:** *link to the dataset viewer page* *short description of the issue* Am I the one who added this dataset ? Yes-No
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova", "user_view_type": "public" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/4198/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/4198/timeline
null
completed
{ "completed": 0, "percent_completed": 0, "total": 0 }
{ "blocked_by": 0, "blocking": 0, "total_blocked_by": 0, "total_blocking": 0 }
false
https://api.github.com/repos/huggingface/datasets/issues/4196
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4196/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4196/comments
https://api.github.com/repos/huggingface/datasets/issues/4196/events
https://github.com/huggingface/datasets/issues/4196
1,211,271,261
I_kwDODunzps5IMohd
4,196
Embed image and audio files in `save_to_disk`
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq", "user_view_type": "public" }
[]
closed
false
null
[]
null
[]
2022-04-21T16:25:18Z
2022-12-14T18:22:59Z
2022-12-14T18:22:59Z
MEMBER
null
null
null
null
Following https://github.com/huggingface/datasets/pull/4184, a dataset saved using `save_to_disk` currently doesn't actually contain the bytes of the image or audio files. Instead it stores the paths to your local files. Adding an `embed_external_files` parameter to `save_to_disk` and setting it to True by default would be something of a breaking change, since some users will get bigger Arrow files when updating the lib, but the advantages are nice: - the resulting dataset is self-contained, in case you want to delete your cache, for example, or share it with someone else - users also upload these Arrow files to cloud storage via the `fs` parameter, and in this case they would expect to upload a self-contained dataset - consistency with `push_to_hub` This can be implemented at the same time as sharding for `save_to_disk` for efficiency, reusing the helpers from `push_to_hub` to embed the external files. cc @mariosasko
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq", "user_view_type": "public" }
{ "+1": 6, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 6, "url": "https://api.github.com/repos/huggingface/datasets/issues/4196/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/4196/timeline
null
completed
{ "completed": 0, "percent_completed": 0, "total": 0 }
{ "blocked_by": 0, "blocking": 0, "total_blocked_by": 0, "total_blocking": 0 }
false
https://api.github.com/repos/huggingface/datasets/issues/4192
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4192/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4192/comments
https://api.github.com/repos/huggingface/datasets/issues/4192/events
https://github.com/huggingface/datasets/issues/4192
1,210,692,554
I_kwDODunzps5IKbPK
4,192
load_dataset can't load local dataset,Unable to find ...
{ "avatar_url": "https://avatars.githubusercontent.com/u/33253979?v=4", "events_url": "https://api.github.com/users/ahf876828330/events{/privacy}", "followers_url": "https://api.github.com/users/ahf876828330/followers", "following_url": "https://api.github.com/users/ahf876828330/following{/other_user}", "gists_url": "https://api.github.com/users/ahf876828330/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/ahf876828330", "id": 33253979, "login": "ahf876828330", "node_id": "MDQ6VXNlcjMzMjUzOTc5", "organizations_url": "https://api.github.com/users/ahf876828330/orgs", "received_events_url": "https://api.github.com/users/ahf876828330/received_events", "repos_url": "https://api.github.com/users/ahf876828330/repos", "site_admin": false, "starred_url": "https://api.github.com/users/ahf876828330/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ahf876828330/subscriptions", "type": "User", "url": "https://api.github.com/users/ahf876828330", "user_view_type": "public" }
[ { "color": "d73a4a", "default": true, "description": "Something isn't working", "id": 1935892857, "name": "bug", "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug" } ]
closed
false
null
[]
null
[ "Hi! :)\r\n\r\nI believe that should work unless `dataset_infos.json` isn't actually a dataset. For Hugging Face datasets, there is usually a file named `dataset_infos.json` which contains metadata about the dataset (eg. the dataset citation, license, description, etc). Can you double-check that `dataset_infos.json...
2022-04-21T08:28:58Z
2022-04-25T16:51:57Z
2022-04-22T07:39:53Z
NONE
null
null
null
null
Traceback (most recent call last): File "/home/gs603/ahf/pretrained/model.py", line 48, in <module> dataset = load_dataset("json",data_files="dataset/dataset_infos.json") File "/home/gs603/miniconda3/envs/coderepair/lib/python3.7/site-packages/datasets/load.py", line 1675, in load_dataset **config_kwargs, File "/home/gs603/miniconda3/envs/coderepair/lib/python3.7/site-packages/datasets/load.py", line 1496, in load_dataset_builder data_files=data_files, File "/home/gs603/miniconda3/envs/coderepair/lib/python3.7/site-packages/datasets/load.py", line 1155, in dataset_module_factory download_mode=download_mode, File "/home/gs603/miniconda3/envs/coderepair/lib/python3.7/site-packages/datasets/load.py", line 800, in get_module data_files = DataFilesDict.from_local_or_remote(patterns, use_auth_token=self.downnload_config.use_auth_token) File "/home/gs603/miniconda3/envs/coderepair/lib/python3.7/site-packages/datasets/data_files.py", line 582, in from_local_or_remote if not isinstance(patterns_for_key, DataFilesList) File "/home/gs603/miniconda3/envs/coderepair/lib/python3.7/site-packages/datasets/data_files.py", line 544, in from_local_or_remote data_files = resolve_patterns_locally_or_by_urls(base_path, patterns, allowed_extensions) File "/home/gs603/miniconda3/envs/coderepair/lib/python3.7/site-packages/datasets/data_files.py", line 194, in resolve_patterns_locally_or_by_urls for path in _resolve_single_pattern_locally(base_path, pattern, allowed_extensions): File "/home/gs603/miniconda3/envs/coderepair/lib/python3.7/site-packages/datasets/data_files.py", line 144, in _resolve_single_pattern_locally raise FileNotFoundError(error_msg) FileNotFoundError: Unable to find '/home/gs603/ahf/pretrained/dataset/dataset_infos.json' at /home/gs603/ahf/pretrained ![image](https://user-images.githubusercontent.com/33253979/164413285-84ea65ac-9126-408f-9cd2-ce4751a5dd73.png) ![image](https://user-images.githubusercontent.com/33253979/164413338-4735142f-408b-41d9-ab87-8484de2be54f.png) the code is in the model.py,why I can't use the load_dataset function to load my local dataset?
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova", "user_view_type": "public" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/4192/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/4192/timeline
null
completed
{ "completed": 0, "percent_completed": 0, "total": 0 }
{ "blocked_by": 0, "blocking": 0, "total_blocked_by": 0, "total_blocking": 0 }
false
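A hedged illustration related to the report above (and to the reply pointing out that `dataset_infos.json` is metadata rather than data): the generic `json` builder expects `data_files` to point at the actual data file, resolved relative to the current working directory or given as an absolute path. The file name below is a placeholder, not taken from the issue.

```python
# Hedged sketch: loading a local JSON Lines file with the generic "json" builder.
# "data/train.jsonl" is a placeholder path and must exist on disk.
from datasets import load_dataset

ds = load_dataset("json", data_files={"train": "data/train.jsonl"})
print(ds["train"][0])
```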
https://api.github.com/repos/huggingface/datasets/issues/4191
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4191/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4191/comments
https://api.github.com/repos/huggingface/datasets/issues/4191/events
https://github.com/huggingface/datasets/issues/4191
1,210,028,090
I_kwDODunzps5IH5A6
4,191
feat: create an `Array3D` column from a list of arrays of dimension 2
{ "avatar_url": "https://avatars.githubusercontent.com/u/55560583?v=4", "events_url": "https://api.github.com/users/SaulLu/events{/privacy}", "followers_url": "https://api.github.com/users/SaulLu/followers", "following_url": "https://api.github.com/users/SaulLu/following{/other_user}", "gists_url": "https://api.github.com/users/SaulLu/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/SaulLu", "id": 55560583, "login": "SaulLu", "node_id": "MDQ6VXNlcjU1NTYwNTgz", "organizations_url": "https://api.github.com/users/SaulLu/orgs", "received_events_url": "https://api.github.com/users/SaulLu/received_events", "repos_url": "https://api.github.com/users/SaulLu/repos", "site_admin": false, "starred_url": "https://api.github.com/users/SaulLu/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/SaulLu/subscriptions", "type": "User", "url": "https://api.github.com/users/SaulLu", "user_view_type": "public" }
[ { "color": "a2eeef", "default": true, "description": "New feature or request", "id": 1935892871, "name": "enhancement", "node_id": "MDU6TGFiZWwxOTM1ODkyODcx", "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement" } ]
closed
false
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova", "user_view_type": "public" }
[ { "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/o...
null
[ "Hi @SaulLu, thanks for your proposal.\r\n\r\nJust I got a bit confused about the dimensions...\r\n- For the 2D case, you mention it is possible to create an `Array2D` from a list of arrays of dimension 1\r\n- However, you give an example of creating an `Array2D` from arrays of dimension 2:\r\n - the values of `da...
2022-04-20T18:04:32Z
2022-05-12T15:08:40Z
2022-05-12T15:08:40Z
CONTRIBUTOR
null
null
null
null
**Is your feature request related to a problem? Please describe.** It is possible to create an `Array2D` column from a list of arrays of dimension 1. Similarly, I think it might be nice to be able to create a `Array3D` column from a list of lists of arrays of dimension 1. To illustrate my proposal, let's take the following toy dataset t: ```python import numpy as np from datasets import Dataset, features data_map = { 1: np.array([[0.2, 0,4],[0.19, 0,3]]), 2: np.array([[0.1, 0,4],[0.19, 0,3]]), } def create_toy_ds(): my_dict = {"id":[1, 2]} return Dataset.from_dict(my_dict) ds = create_toy_ds() ``` The following 2D processing works without any errors raised: ```python def prepare_dataset_2D(batch): batch["pixel_values"] = [data_map[index] for index in batch["id"]] return batch ds_2D = ds.map( prepare_dataset_2D, batched=True, remove_columns=ds.column_names, features=features.Features({"pixel_values": features.Array2D(shape=(2, 3), dtype="float32")}) ) ``` The following 3D processing doesn't work: ```python def prepare_dataset_3D(batch): batch["pixel_values"] = [[data_map[index]] for index in batch["id"]] return batch ds_3D = ds.map( prepare_dataset_3D, batched=True, remove_columns=ds.column_names, features=features.Features({"pixel_values": features.Array3D(shape=(1, 2, 3, dtype="float32")}) ) ``` The error raised is: ``` --------------------------------------------------------------------------- ArrowInvalid Traceback (most recent call last) [<ipython-input-6-676547e4cd41>](https://localhost:8080/#) in <module>() 3 batched=True, 4 remove_columns=ds.column_names, ----> 5 features=features.Features({"pixel_values": features.Array3D(shape=(1, 2, 3), dtype="float32")}) 6 ) 12 frames [/usr/local/lib/python3.7/dist-packages/datasets/arrow_dataset.py](https://localhost:8080/#) in map(self, function, with_indices, with_rank, input_columns, batched, batch_size, drop_last_batch, remove_columns, keep_in_memory, load_from_cache_file, cache_file_name, writer_batch_size, features, disable_nullable, fn_kwargs, num_proc, suffix_template, new_fingerprint, desc) 1971 new_fingerprint=new_fingerprint, 1972 disable_tqdm=disable_tqdm, -> 1973 desc=desc, 1974 ) 1975 else: [/usr/local/lib/python3.7/dist-packages/datasets/arrow_dataset.py](https://localhost:8080/#) in wrapper(*args, **kwargs) 518 self: "Dataset" = kwargs.pop("self") 519 # apply actual function --> 520 out: Union["Dataset", "DatasetDict"] = func(self, *args, **kwargs) 521 datasets: List["Dataset"] = list(out.values()) if isinstance(out, dict) else [out] 522 for dataset in datasets: [/usr/local/lib/python3.7/dist-packages/datasets/arrow_dataset.py](https://localhost:8080/#) in wrapper(*args, **kwargs) 485 } 486 # apply actual function --> 487 out: Union["Dataset", "DatasetDict"] = func(self, *args, **kwargs) 488 datasets: List["Dataset"] = list(out.values()) if isinstance(out, dict) else [out] 489 # re-apply format to the output [/usr/local/lib/python3.7/dist-packages/datasets/fingerprint.py](https://localhost:8080/#) in wrapper(*args, **kwargs) 456 # Call actual function 457 --> 458 out = func(self, *args, **kwargs) 459 460 # Update fingerprint of in-place transforms + update in-place history of transforms [/usr/local/lib/python3.7/dist-packages/datasets/arrow_dataset.py](https://localhost:8080/#) in _map_single(self, function, with_indices, with_rank, input_columns, batched, batch_size, drop_last_batch, remove_columns, keep_in_memory, load_from_cache_file, cache_file_name, writer_batch_size, features, disable_nullable, fn_kwargs, new_fingerprint, 
rank, offset, disable_tqdm, desc, cache_only) 2354 writer.write_table(batch) 2355 else: -> 2356 writer.write_batch(batch) 2357 if update_data and writer is not None: 2358 writer.finalize() # close_stream=bool(buf_writer is None)) # We only close if we are writing in a file [/usr/local/lib/python3.7/dist-packages/datasets/arrow_writer.py](https://localhost:8080/#) in write_batch(self, batch_examples, writer_batch_size) 505 col_try_type = try_features[col] if try_features is not None and col in try_features else None 506 typed_sequence = OptimizedTypedSequence(batch_examples[col], type=col_type, try_type=col_try_type, col=col) --> 507 arrays.append(pa.array(typed_sequence)) 508 inferred_features[col] = typed_sequence.get_inferred_type() 509 schema = inferred_features.arrow_schema if self.pa_writer is None else self.schema /usr/local/lib/python3.7/dist-packages/pyarrow/array.pxi in pyarrow.lib.array() /usr/local/lib/python3.7/dist-packages/pyarrow/array.pxi in pyarrow.lib._handle_arrow_array_protocol() [/usr/local/lib/python3.7/dist-packages/datasets/arrow_writer.py](https://localhost:8080/#) in __arrow_array__(self, type) 175 storage = list_of_np_array_to_pyarrow_listarray(data, type=pa_type.value_type) 176 else: --> 177 storage = pa.array(data, pa_type.storage_dtype) 178 return pa.ExtensionArray.from_storage(pa_type, storage) 179 /usr/local/lib/python3.7/dist-packages/pyarrow/array.pxi in pyarrow.lib.array() /usr/local/lib/python3.7/dist-packages/pyarrow/array.pxi in pyarrow.lib._sequence_to_array() /usr/local/lib/python3.7/dist-packages/pyarrow/error.pxi in pyarrow.lib.pyarrow_internal_check_status() /usr/local/lib/python3.7/dist-packages/pyarrow/error.pxi in pyarrow.lib.check_status() ArrowInvalid: Can only convert 1-dimensional array values ``` **Describe the solution you'd like** No error in the second scenario and an identical result to the following snippets. **Describe alternatives you've considered** There are other alternatives that work such as: ```python def prepare_dataset_3D_bis(batch): batch["pixel_values"] = [[data_map[index].tolist()] for index in batch["id"]] return batch ds_3D_bis = ds.map( prepare_dataset_3D_bis, batched=True, remove_columns=ds.column_names, features=features.Features({"pixel_values": features.Array3D(shape=(1, 2, 3), dtype="float32")}) ) ``` or ```python def prepare_dataset_3D_ter(batch): batch["pixel_values"] = [data_map[index][np.newaxis, :, :] for index in batch["id"]] return batch ds_3D_ter = ds.map( prepare_dataset_3D_ter, batched=True, remove_columns=ds.column_names, features=features.Features({"pixel_values": features.Array3D(shape=(1, 2, 3), dtype="float32")}) ) ``` But both solutions require the user to be aware that `data_map[index]` is an `np.array` type. cc @lhoestq as we discuss this offline :smile:
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova", "user_view_type": "public" }
{ "+1": 1, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 1, "url": "https://api.github.com/repos/huggingface/datasets/issues/4191/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/4191/timeline
null
completed
{ "completed": 0, "percent_completed": 0, "total": 0 }
{ "blocked_by": 0, "blocking": 0, "total_blocked_by": 0, "total_blocking": 0 }
false
https://api.github.com/repos/huggingface/datasets/issues/4185
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4185/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4185/comments
https://api.github.com/repos/huggingface/datasets/issues/4185/events
https://github.com/huggingface/datasets/issues/4185
1,209,429,743
I_kwDODunzps5IFm7v
4,185
Librispeech documentation, clarification on format
{ "avatar_url": "https://avatars.githubusercontent.com/u/59132?v=4", "events_url": "https://api.github.com/users/albertz/events{/privacy}", "followers_url": "https://api.github.com/users/albertz/followers", "following_url": "https://api.github.com/users/albertz/following{/other_user}", "gists_url": "https://api.github.com/users/albertz/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertz", "id": 59132, "login": "albertz", "node_id": "MDQ6VXNlcjU5MTMy", "organizations_url": "https://api.github.com/users/albertz/orgs", "received_events_url": "https://api.github.com/users/albertz/received_events", "repos_url": "https://api.github.com/users/albertz/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertz/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertz/subscriptions", "type": "User", "url": "https://api.github.com/users/albertz", "user_view_type": "public" }
[]
open
false
null
[]
null
[ "(@patrickvonplaten )", "Also cc @lhoestq here", "The documentation in the code is definitely outdated - thanks for letting me know, I'll remove it in https://github.com/huggingface/datasets/pull/4184 .\r\n\r\nYou're exactly right `audio` `array` already decodes the audio file to the correct waveform. This is d...
2022-04-20T09:35:55Z
2022-04-21T11:00:53Z
null
NONE
null
null
null
null
https://github.com/huggingface/datasets/blob/cd3ce34ab1604118351e1978d26402de57188901/datasets/librispeech_asr/librispeech_asr.py#L53 > Note that in order to limit the required storage for preparing this dataset, the audio > is stored in the .flac format and is not converted to a float32 array. To convert, the audio > file to a float32 array, please make use of the `.map()` function as follows: > > ```python > import soundfile as sf > def map_to_array(batch): > speech_array, _ = sf.read(batch["file"]) > batch["speech"] = speech_array > return batch > dataset = dataset.map(map_to_array, remove_columns=["file"]) > ``` Is this still true? In my case, `ds["train.100"]` returns: ``` Dataset({ features: ['file', 'audio', 'text', 'speaker_id', 'chapter_id', 'id'], num_rows: 28539 }) ``` and taking the first instance yields: ``` {'file': '374-180298-0000.flac', 'audio': {'path': '374-180298-0000.flac', 'array': array([ 7.01904297e-04, 7.32421875e-04, 7.32421875e-04, ..., -2.74658203e-04, -1.83105469e-04, -3.05175781e-05]), 'sampling_rate': 16000}, 'text': 'CHAPTER SIXTEEN I MIGHT HAVE TOLD YOU OF THE BEGINNING OF THIS LIAISON IN A FEW LINES BUT I WANTED YOU TO SEE EVERY STEP BY WHICH WE CAME I TO AGREE TO WHATEVER MARGUERITE WISHED', 'speaker_id': 374, 'chapter_id': 180298, 'id': '374-180298-0000'} ``` The `audio` `array` seems to be already decoded. So such convert/decode code as mentioned in the doc is wrong? But I wonder, is it actually stored as flac on disk, and the decoding is done on-the-fly? Or was it decoded already during the preparation and is stored as raw samples on disk? Note that I also used `datasets.load_dataset("librispeech_asr", "clean").save_to_disk(...)` and then `datasets.load_from_disk(...)` in this example. Does this change anything on how it is stored on disk? A small related question: Actually I would prefer to even store it as mp3 or ogg on disk. Is this easy to convert?
null
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/4185/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/4185/timeline
null
null
{ "completed": 0, "percent_completed": 0, "total": 0 }
{ "blocked_by": 0, "blocking": 0, "total_blocked_by": 0, "total_blocking": 0 }
false
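A hedged sketch of the on-the-fly decoding discussed in the issue above, assuming the standard `datasets.Audio` feature: the compressed files stay on disk, an example's `array` is only produced when that example is accessed, and `cast_column` can change the sampling rate lazily without re-encoding anything.

```python
# Hedged sketch: lazy audio decoding and on-the-fly resampling.
from datasets import load_dataset, Audio

ds = load_dataset("librispeech_asr", "clean", split="validation")

sample = ds[0]["audio"]         # decoding happens here, for this one file only
print(sample["sampling_rate"])  # 16000

ds = ds.cast_column("audio", Audio(sampling_rate=8_000))
print(ds[0]["audio"]["sampling_rate"])  # resampled to 8000 on access
```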
https://api.github.com/repos/huggingface/datasets/issues/4182
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4182/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4182/comments
https://api.github.com/repos/huggingface/datasets/issues/4182/events
https://github.com/huggingface/datasets/issues/4182
1,208,285,235
I_kwDODunzps5IBPgz
4,182
Zenodo.org download is not responding
{ "avatar_url": "https://avatars.githubusercontent.com/u/32985207?v=4", "events_url": "https://api.github.com/users/dkajtoch/events{/privacy}", "followers_url": "https://api.github.com/users/dkajtoch/followers", "following_url": "https://api.github.com/users/dkajtoch/following{/other_user}", "gists_url": "https://api.github.com/users/dkajtoch/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/dkajtoch", "id": 32985207, "login": "dkajtoch", "node_id": "MDQ6VXNlcjMyOTg1MjA3", "organizations_url": "https://api.github.com/users/dkajtoch/orgs", "received_events_url": "https://api.github.com/users/dkajtoch/received_events", "repos_url": "https://api.github.com/users/dkajtoch/repos", "site_admin": false, "starred_url": "https://api.github.com/users/dkajtoch/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/dkajtoch/subscriptions", "type": "User", "url": "https://api.github.com/users/dkajtoch", "user_view_type": "public" }
[ { "color": "d73a4a", "default": true, "description": "Something isn't working", "id": 1935892857, "name": "bug", "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug" } ]
closed
false
null
[]
null
[ "[Off topic but related: Is the uptime of S3 provably better than Zenodo's?]", "Hi @dkajtoch, please note that at HuggingFace we are not hosting this dataset: we are just using a script to download their data file and create a dataset from it.\r\n\r\nIt was the dataset owners decision to host their data at Zenodo...
2022-04-19T12:26:57Z
2022-04-20T07:11:05Z
2022-04-20T07:11:05Z
CONTRIBUTOR
null
null
null
null
## Describe the bug The source download URL from zenodo.org does not respond. `_DOWNLOAD_URL = "https://zenodo.org/record/2787612/files/SICK.zip?download=1"` Other datasets also use zenodo.org to store data and they cannot be downloaded either. It would be better to use a more reliable way to store the original data, such as an S3 bucket. ## Steps to reproduce the bug ```python load_dataset("sick") ``` ## Expected results The dataset should be downloaded. ## Actual results ConnectionError: Couldn't reach https://zenodo.org/record/2787612/files/SICK.zip?download=1 (ReadTimeout(ReadTimeoutError("HTTPSConnectionPool(host='zenodo.org', port=443): Read timed out. (read timeout=100)"))) ## Environment info - `datasets` version: 2.1.0 - Platform: Darwin-21.4.0-x86_64-i386-64bit - Python version: 3.7.11 - PyArrow version: 7.0.0 - Pandas version: 1.3.5
{ "avatar_url": "https://avatars.githubusercontent.com/u/32985207?v=4", "events_url": "https://api.github.com/users/dkajtoch/events{/privacy}", "followers_url": "https://api.github.com/users/dkajtoch/followers", "following_url": "https://api.github.com/users/dkajtoch/following{/other_user}", "gists_url": "https://api.github.com/users/dkajtoch/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/dkajtoch", "id": 32985207, "login": "dkajtoch", "node_id": "MDQ6VXNlcjMyOTg1MjA3", "organizations_url": "https://api.github.com/users/dkajtoch/orgs", "received_events_url": "https://api.github.com/users/dkajtoch/received_events", "repos_url": "https://api.github.com/users/dkajtoch/repos", "site_admin": false, "starred_url": "https://api.github.com/users/dkajtoch/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/dkajtoch/subscriptions", "type": "User", "url": "https://api.github.com/users/dkajtoch", "user_view_type": "public" }
{ "+1": 2, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 2, "url": "https://api.github.com/repos/huggingface/datasets/issues/4182/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/4182/timeline
null
completed
{ "completed": 0, "percent_completed": 0, "total": 0 }
{ "blocked_by": 0, "blocking": 0, "total_blocked_by": 0, "total_blocking": 0 }
false
https://api.github.com/repos/huggingface/datasets/issues/4181
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4181/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4181/comments
https://api.github.com/repos/huggingface/datasets/issues/4181/events
https://github.com/huggingface/datasets/issues/4181
1,208,194,805
I_kwDODunzps5IA5b1
4,181
Support streaming FLEURS dataset
{ "avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4", "events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}", "followers_url": "https://api.github.com/users/patrickvonplaten/followers", "following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}", "gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/patrickvonplaten", "id": 23423619, "login": "patrickvonplaten", "node_id": "MDQ6VXNlcjIzNDIzNjE5", "organizations_url": "https://api.github.com/users/patrickvonplaten/orgs", "received_events_url": "https://api.github.com/users/patrickvonplaten/received_events", "repos_url": "https://api.github.com/users/patrickvonplaten/repos", "site_admin": false, "starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions", "type": "User", "url": "https://api.github.com/users/patrickvonplaten", "user_view_type": "public" }
[ { "color": "2edb81", "default": false, "description": "A bug in a dataset script provided in the library", "id": 2067388877, "name": "dataset bug", "node_id": "MDU6TGFiZWwyMDY3Mzg4ODc3", "url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20bug" } ]
closed
false
null
[]
null
[ "Yes, you just have to use `dl_manager.iter_archive` instead of `dl_manager.download_and_extract`.\r\n\r\nThat's because `download_and_extract` doesn't support TAR archives in streaming mode.", "Tried to make it streamable, but I don't think it's really possible. @lhoestq @polinaeterna maybe you guys can check: \...
2022-04-19T11:09:56Z
2022-07-25T11:44:02Z
2022-07-25T11:44:02Z
CONTRIBUTOR
null
null
null
null
## Dataset viewer issue for '*name of the dataset*' https://huggingface.co/datasets/google/fleurs ``` Status code: 400 Exception: NotImplementedError Message: Extraction protocol for TAR archives like 'https://storage.googleapis.com/xtreme_translations/FLEURS/af_za.tar.gz' is not implemented in streaming mode. Please use `dl_manager.iter_archive` instead. ``` Am I the one who added this dataset ? Yes Can I fix this somehow in the script? @lhoestq @severo
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova", "user_view_type": "public" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/4181/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/4181/timeline
null
completed
{ "completed": 0, "percent_completed": 0, "total": 0 }
{ "blocked_by": 0, "blocking": 0, "total_blocked_by": 0, "total_blocking": 0 }
false
https://api.github.com/repos/huggingface/datasets/issues/4180
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4180/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4180/comments
https://api.github.com/repos/huggingface/datasets/issues/4180/events
https://github.com/huggingface/datasets/issues/4180
1,208,042,320
I_kwDODunzps5IAUNQ
4,180
Add some iteration method on a dataset column (specific for inference)
{ "avatar_url": "https://avatars.githubusercontent.com/u/204321?v=4", "events_url": "https://api.github.com/users/Narsil/events{/privacy}", "followers_url": "https://api.github.com/users/Narsil/followers", "following_url": "https://api.github.com/users/Narsil/following{/other_user}", "gists_url": "https://api.github.com/users/Narsil/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/Narsil", "id": 204321, "login": "Narsil", "node_id": "MDQ6VXNlcjIwNDMyMQ==", "organizations_url": "https://api.github.com/users/Narsil/orgs", "received_events_url": "https://api.github.com/users/Narsil/received_events", "repos_url": "https://api.github.com/users/Narsil/repos", "site_admin": false, "starred_url": "https://api.github.com/users/Narsil/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Narsil/subscriptions", "type": "User", "url": "https://api.github.com/users/Narsil", "user_view_type": "public" }
[ { "color": "a2eeef", "default": true, "description": "New feature or request", "id": 1935892871, "name": "enhancement", "node_id": "MDU6TGFiZWwxOTM1ODkyODcx", "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement" } ]
closed
false
null
[]
null
[ "Thanks for the suggestion ! I agree it would be nice to have something directly in `datasets` to do something as simple as that\r\n\r\ncc @albertvillanova @mariosasko @polinaeterna What do you think if we have something similar to pandas `Series` that wouldn't bring everything in memory when doing `dataset[\"audio...
2022-04-19T09:15:45Z
2025-06-17T13:08:50Z
2025-06-17T13:08:50Z
CONTRIBUTOR
null
null
null
null
**Is your feature request related to a problem? Please describe.** A clear and concise description of what the problem is. Currently, `dataset["audio"]` will load EVERY element in the dataset in RAM, which can be quite big for an audio dataset. Having an iterator (or sequence) type of object, would make inference with `transformers` 's `pipeline` easier to use and not so memory hungry. **Describe the solution you'd like** A clear and concise description of what you want to happen. For a non breaking change: ```python for audio in dataset.iterate("audio"): # {"array": np.array(...), "sampling_rate":...} ``` For a breaking change solution (not necessary), changing the type of `dataset["audio"]` to a sequence type so that ```python pipe = pipeline(model="...") for out in pipe(dataset["audio"]): # {"text":....} ``` could work **Describe alternatives you've considered** A clear and concise description of any alternative solutions or features you've considered. ```python def iterate(dataset, key): for item in dataset: yield dataset[key] for out in pipeline(iterate(dataset, "audio")): # {"array": ...} ``` This works but requires the helper function which feels slightly clunky. **Additional context** Add any other context about the feature request here. The context is actually to showcase better integration between `pipeline` and `datasets` in the Quicktour demo: https://github.com/huggingface/transformers/pull/16723/files @lhoestq
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq", "user_view_type": "public" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/4180/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/4180/timeline
null
completed
{ "completed": 0, "percent_completed": 0, "total": 0 }
{ "blocked_by": 0, "blocking": 0, "total_blocked_by": 0, "total_blocking": 0 }
false
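A hedged sketch of one pattern that addresses the request above without loading the whole column into RAM: `transformers` provides a `KeyDataset` wrapper that yields a single column lazily, one example at a time, so it can be fed straight into a pipeline. The dataset and model names below are only examples, not prescribed by the issue.

```python
# Hedged sketch: stream one column into a pipeline without materializing it.
from datasets import load_dataset
from transformers import pipeline
from transformers.pipelines.pt_utils import KeyDataset

ds = load_dataset("librispeech_asr", "clean", split="validation")
asr = pipeline("automatic-speech-recognition", model="facebook/wav2vec2-base-960h")

# KeyDataset yields ds[i]["audio"] lazily, decoding each example on demand.
for out in asr(KeyDataset(ds, "audio")):
    print(out["text"])
```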
https://api.github.com/repos/huggingface/datasets/issues/4179
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4179/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4179/comments
https://api.github.com/repos/huggingface/datasets/issues/4179/events
https://github.com/huggingface/datasets/issues/4179
1,208,001,118
I_kwDODunzps5IAKJe
4,179
Dataset librispeech_asr fails to load
{ "avatar_url": "https://avatars.githubusercontent.com/u/59132?v=4", "events_url": "https://api.github.com/users/albertz/events{/privacy}", "followers_url": "https://api.github.com/users/albertz/followers", "following_url": "https://api.github.com/users/albertz/following{/other_user}", "gists_url": "https://api.github.com/users/albertz/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertz", "id": 59132, "login": "albertz", "node_id": "MDQ6VXNlcjU5MTMy", "organizations_url": "https://api.github.com/users/albertz/orgs", "received_events_url": "https://api.github.com/users/albertz/received_events", "repos_url": "https://api.github.com/users/albertz/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertz/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertz/subscriptions", "type": "User", "url": "https://api.github.com/users/albertz", "user_view_type": "public" }
[ { "color": "d73a4a", "default": true, "description": "Something isn't working", "id": 1935892857, "name": "bug", "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug" } ]
closed
false
null
[]
null
[ "@patrickvonplaten Hi! I saw that you prepared this? :)", "Another thing, but maybe this should be a separate issue: As I see from the code, it would try to use up to 16 simultaneous downloads? This is problematic for Librispeech or anything on OpenSLR. On [the homepage](https://www.openslr.org/), it says:\r\n\r\...
2022-04-19T08:45:48Z
2022-07-27T16:10:00Z
2022-07-27T16:10:00Z
NONE
null
null
null
null
## Describe the bug The dataset librispeech_asr (standard Librispeech) fails to load. ## Steps to reproduce the bug ```python datasets.load_dataset("librispeech_asr") ``` ## Expected results It should download and prepare the whole dataset (all subsets). In [the doc](https://huggingface.co/datasets/librispeech_asr), it says it has two configurations (clean and other). However, the dataset doc says that not specifying `split` should just load the whole dataset, which is what I want. Also, in case of this specific dataset, this is also the standard what the community uses. When you look at any publications with results on Librispeech, they always use the whole train dataset for training. ## Actual results ``` ... File "/home/az/.cache/huggingface/modules/datasets_modules/datasets/librispeech_asr/1f4602f6b5fed8d3ab3e3382783173f2e12d9877e98775e34d7780881175096c/librispeech_asr.py", line 119, in LibrispeechASR._split_generators line: archive_path = dl_manager.download(_DL_URLS[self.config.name]) locals: archive_path = <not found> dl_manager = <local> <datasets.utils.download_manager.DownloadManager object at 0x7fc07b426160> dl_manager.download = <local> <bound method DownloadManager.download of <datasets.utils.download_manager.DownloadManager object at 0x7fc07b426160>> _DL_URLS = <global> {'clean': {'dev': 'http://www.openslr.org/resources/12/dev-clean.tar.gz', 'test': 'http://www.openslr.org/resources/12/test-clean.tar.gz', 'train.100': 'http://www.openslr.org/resources/12/train-clean-100.tar.gz', 'train.360': 'http://www.openslr.org/resources/12/train-clean-360.tar.gz'}, 'other'... self = <local> <datasets_modules.datasets.librispeech_asr.1f4602f6b5fed8d3ab3e3382783173f2e12d9877e98775e34d7780881175096c.librispeech_asr.LibrispeechASR object at 0x7fc12a633310> self.config = <local> BuilderConfig(name='default', version=0.0.0, data_dir='/home/az/i6/setups/2022-03-20--sis/work/i6_core/datasets/huggingface/DownloadAndPrepareHuggingFaceDatasetJob.TV6Nwm6dFReF/output/data_dir', data_files=None, description=None) self.config.name = <local> 'default', len = 7 KeyError: 'default' ``` ## Environment info <!-- You can run the command `datasets-cli env` and copy-and-paste its output below. --> - `datasets` version: 2.1.0 - Platform: Linux-5.4.0-107-generic-x86_64-with-glibc2.31 - Python version: 3.9.9 - PyArrow version: 6.0.1 - Pandas version: 1.4.2
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq", "user_view_type": "public" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/4179/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/4179/timeline
null
completed
{ "completed": 0, "percent_completed": 0, "total": 0 }
{ "blocked_by": 0, "blocking": 0, "total_blocked_by": 0, "total_blocking": 0 }
false
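A short, hedged sketch tied to the traceback above: `librispeech_asr` defines only the `clean` and `other` configurations, so a configuration name has to be passed explicitly; one way to train on the full corpus is to load the individual splits and concatenate them. The split names here assume the usual `train.100` / `train.360` / `train.500` naming and are not all visible in the truncated traceback.

```python
# Hedged sketch: librispeech_asr has no "default" config, so name one explicitly.
from datasets import load_dataset, concatenate_datasets

clean_100 = load_dataset("librispeech_asr", "clean", split="train.100")
clean_360 = load_dataset("librispeech_asr", "clean", split="train.360")
other_500 = load_dataset("librispeech_asr", "other", split="train.500")

full_train = concatenate_datasets([clean_100, clean_360, other_500])
```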