Dataset schema (per-field type and length/value statistics):

| Field | Type | Min | Max |
|---|---|---|---|
| url | stringlengths | 61 | 61 |
| repository_url | stringclasses | 1 value | |
| labels_url | stringlengths | 75 | 75 |
| comments_url | stringlengths | 70 | 70 |
| events_url | stringlengths | 68 | 68 |
| html_url | stringlengths | 49 | 51 |
| id | int64 | 758M | 1.95B |
| node_id | stringlengths | 18 | 32 |
| number | int64 | 1.2k | 6.31k |
| title | stringlengths | 1 | 290 |
| user | dict | | |
| labels | listlengths | 0 | 3 |
| state | stringclasses | 2 values | |
| locked | bool | 1 class | |
| assignee | dict | | |
| assignees | listlengths | 0 | 4 |
| milestone | dict | | |
| comments | listlengths | 0 | 30 |
| created_at | timestamp[ns, tz=UTC] | | |
| updated_at | timestamp[ns, tz=UTC] | | |
| closed_at | timestamp[ns, tz=UTC] | | |
| author_association | stringclasses | 3 values | |
| active_lock_reason | float64 | | |
| draft | float64 | 0 | 1 |
| pull_request | dict | | |
| body | stringlengths | 0 | 36.2k |
| reactions | dict | | |
| timeline_url | stringlengths | 70 | 70 |
| performed_via_github_app | float64 | | |
| state_reason | stringclasses | 3 values | |
| is_pull_request | bool | 2 classes | |
https://api.github.com/repos/huggingface/datasets/issues/6236
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6236/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6236/comments
https://api.github.com/repos/huggingface/datasets/issues/6236/events
https://github.com/huggingface/datasets/issues/6236
1,893,648,480
I_kwDODunzps5w3shg
6,236
Support buffer shuffle for to_tf_dataset
{ "avatar_url": "https://avatars.githubusercontent.com/u/7635551?v=4", "events_url": "https://api.github.com/users/EthanRock/events{/privacy}", "followers_url": "https://api.github.com/users/EthanRock/followers", "following_url": "https://api.github.com/users/EthanRock/following{/other_user}", "gists_url": "https://api.github.com/users/EthanRock/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/EthanRock", "id": 7635551, "login": "EthanRock", "node_id": "MDQ6VXNlcjc2MzU1NTE=", "organizations_url": "https://api.github.com/users/EthanRock/orgs", "received_events_url": "https://api.github.com/users/EthanRock/received_events", "repos_url": "https://api.github.com/users/EthanRock/repos", "site_admin": false, "starred_url": "https://api.github.com/users/EthanRock/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/EthanRock/subscriptions", "type": "User", "url": "https://api.github.com/users/EthanRock" }
[ { "color": "a2eeef", "default": true, "description": "New feature or request", "id": 1935892871, "name": "enhancement", "node_id": "MDU6TGFiZWwxOTM1ODkyODcx", "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement" } ]
open
false
null
[]
null
[ "cc @Rocketknight1 ", "Hey! You can implement this yourself, just:\r\n\r\n1) Create the dataset with `to_tf_dataset()` with `shuffle=False`\r\n2) Add an `unbatch()` at the end (or use batch_size=1)\r\n3) Add a `shuffle()` to the resulting dataset with your desired buffer size\r\n4) Add a `batch()` at the end agai...
2023-09-13T03:19:44Z
2023-09-18T01:11:21Z
null
NONE
null
null
null
### Feature request I'm using to_tf_dataset to convert a large dataset to a tf.data.Dataset and Keras fit to train the model. Currently, to_tf_dataset only supports a full-size shuffle, which can be very slow on large datasets. tf.data.Dataset supports buffer shuffling natively: shuffle( buffer_size, seed=None, reshuffle_each_iteration=None, name=None ) ### Motivation I'm very frustrated to find that loading with shuffling is very slow on large datasets. It seems impossible to shuffle a big dataset before training with Keras. ### Your contribution NA
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/6236/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/6236/timeline
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/6275
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6275/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6275/comments
https://api.github.com/repos/huggingface/datasets/issues/6275/events
https://github.com/huggingface/datasets/issues/6275
1,921,354,680
I_kwDODunzps5yhYu4
6,275
Would like to Contribute a dataset
{ "avatar_url": "https://avatars.githubusercontent.com/u/97907750?v=4", "events_url": "https://api.github.com/users/vikas70607/events{/privacy}", "followers_url": "https://api.github.com/users/vikas70607/followers", "following_url": "https://api.github.com/users/vikas70607/following{/other_user}", "gists_url": "https://api.github.com/users/vikas70607/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/vikas70607", "id": 97907750, "login": "vikas70607", "node_id": "U_kgDOBdX0Jg", "organizations_url": "https://api.github.com/users/vikas70607/orgs", "received_events_url": "https://api.github.com/users/vikas70607/received_events", "repos_url": "https://api.github.com/users/vikas70607/repos", "site_admin": false, "starred_url": "https://api.github.com/users/vikas70607/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/vikas70607/subscriptions", "type": "User", "url": "https://api.github.com/users/vikas70607" }
[]
closed
false
null
[]
null
[ "Hi! The process of contributing a dataset is explained here: https://huggingface.co/docs/datasets/upload_dataset. Also, check https://huggingface.co/docs/datasets/image_dataset for a more detailed explanation of how to share an image dataset." ]
2023-10-02T07:00:21Z
2023-10-10T16:27:54Z
2023-10-10T16:27:54Z
NONE
null
null
null
I have a dataset of 2500 images that can be used for color-blind machine-learning algorithms. Since there was no such dataset available online, I made this dataset myself and would now like to contribute it to the community.
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/6275/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/6275/timeline
null
completed
false
https://api.github.com/repos/huggingface/datasets/issues/6251
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6251/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6251/comments
https://api.github.com/repos/huggingface/datasets/issues/6251/events
https://github.com/huggingface/datasets/pull/6251
1,904,418,426
PR_kwDODunzps5awQsy
6,251
Support streaming datasets with pyarrow.parquet.read_table
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" }
[]
closed
false
null
[]
null
[ "_The documentation is not available anymore as the PR was closed or merged._", "This function reads an entire Arrow table in one go, which is not ideal memory-wise, so I don't think we should encourage using this function, considering we want to keep RAM usage as low as possible in the streaming mode. \r\n\r\n(N...
2023-09-20T08:07:02Z
2023-09-27T06:37:03Z
2023-09-27T06:26:24Z
MEMBER
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/6251.diff", "html_url": "https://github.com/huggingface/datasets/pull/6251", "merged_at": "2023-09-27T06:26:24Z", "patch_url": "https://github.com/huggingface/datasets/pull/6251.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/6251" }
Support streaming datasets with `pyarrow.parquet.read_table`. See: https://huggingface.co/datasets/uonlp/CulturaX/discussions/2 CC: @AndreaFrancis
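For context, a sketch of the pattern this PR enables inside a dataset script; the generator shape is illustrative, and only the `pq.read_table` call on a streamed file is the point:

```python
import pyarrow.parquet as pq

def _generate_tables(files):
    # In streaming mode, `files` come from the streaming download manager;
    # after this PR, read_table accepts them directly.
    for idx, file in enumerate(files):
        yield idx, pq.read_table(file)
```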
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/6251/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/6251/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/2983
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2983/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2983/comments
https://api.github.com/repos/huggingface/datasets/issues/2983/events
https://github.com/huggingface/datasets/pull/2983
1,010,263,058
PR_kwDODunzps4saw_v
2,983
added SwissJudgmentPrediction dataset
{ "avatar_url": "https://avatars.githubusercontent.com/u/3775944?v=4", "events_url": "https://api.github.com/users/JoelNiklaus/events{/privacy}", "followers_url": "https://api.github.com/users/JoelNiklaus/followers", "following_url": "https://api.github.com/users/JoelNiklaus/following{/other_user}", "gists_url": "https://api.github.com/users/JoelNiklaus/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/JoelNiklaus", "id": 3775944, "login": "JoelNiklaus", "node_id": "MDQ6VXNlcjM3NzU5NDQ=", "organizations_url": "https://api.github.com/users/JoelNiklaus/orgs", "received_events_url": "https://api.github.com/users/JoelNiklaus/received_events", "repos_url": "https://api.github.com/users/JoelNiklaus/repos", "site_admin": false, "starred_url": "https://api.github.com/users/JoelNiklaus/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/JoelNiklaus/subscriptions", "type": "User", "url": "https://api.github.com/users/JoelNiklaus" }
[]
closed
false
null
[]
null
[]
2021-09-28T22:17:56Z
2021-10-01T16:03:05Z
2021-10-01T16:03:05Z
CONTRIBUTOR
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/2983.diff", "html_url": "https://github.com/huggingface/datasets/pull/2983", "merged_at": "2021-10-01T16:03:05Z", "patch_url": "https://github.com/huggingface/datasets/pull/2983.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/2983" }
null
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/2983/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/2983/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/3625
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3625/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3625/comments
https://api.github.com/repos/huggingface/datasets/issues/3625/events
https://github.com/huggingface/datasets/issues/3625
1,113,017,522
I_kwDODunzps5CV0yy
3,625
Add a metadata field for when source data was produced
{ "avatar_url": "https://avatars.githubusercontent.com/u/8995957?v=4", "events_url": "https://api.github.com/users/davanstrien/events{/privacy}", "followers_url": "https://api.github.com/users/davanstrien/followers", "following_url": "https://api.github.com/users/davanstrien/following{/other_user}", "gists_url": "https://api.github.com/users/davanstrien/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/davanstrien", "id": 8995957, "login": "davanstrien", "node_id": "MDQ6VXNlcjg5OTU5NTc=", "organizations_url": "https://api.github.com/users/davanstrien/orgs", "received_events_url": "https://api.github.com/users/davanstrien/received_events", "repos_url": "https://api.github.com/users/davanstrien/repos", "site_admin": false, "starred_url": "https://api.github.com/users/davanstrien/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/davanstrien/subscriptions", "type": "User", "url": "https://api.github.com/users/davanstrien" }
[ { "color": "a2eeef", "default": true, "description": "New feature or request", "id": 1935892871, "name": "enhancement", "node_id": "MDU6TGFiZWwxOTM1ODkyODcx", "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement" } ]
open
false
null
[]
null
[ "A question to the datasets maintainers: is there a policy about how the set of allowed metadata fields is maintained and expanded?\r\n\r\nMetadata are very important, but defining the standard is always a struggle between allowing exhaustivity without being too complex. Archivists have Dublin Core, open data has h...
2022-01-24T18:52:39Z
2022-06-28T13:54:49Z
null
MEMBER
null
null
null
**Is your feature request related to a problem? Please describe.** The current problem is that information about when source data was produced is not easily visible. Though there are a variety of metadata fields available in the dataset viewer, time period information is not included. This feature request suggests making metadata relating to the time that the underlying *source* data was produced more prominent and outlines why this specific information is of particular importance, both in domain-specific historic research and more broadly. **Describe the solution you'd like** There are a variety of metadata fields exposed in the dataset viewer (license, task categories, etc.) These fields make this metadata more prominent both for human users and as potentially machine-actionable information (for example, through the API). I would propose to add a metadata field that says when some underlying data was produced. For example, a dataset would be labelled as being produced between `1800-1900`. **Describe alternatives you've considered** This information is sometimes available in the Datacard or a paper describing the dataset. However, it's often not that easy to identify or extract this information, particularly if you want to use this field as a filter to identify relevant datasets. **Additional context** I believe this feature is relevant for a number of reasons: - Increasingly, there is an interest in using historical data for training language models (for example, https://huggingface.co/dbmdz/bert-base-historic-dutch-cased), and datasets to support this task (for example, https://huggingface.co/datasets/bnl_newspapers). For these datasets, indicating the time periods covered is particularly relevant. - More broadly, time is likely a common source of domain drift. Datasets of movie reviews from the 90s may not work well for recent movie reviews. As the documentation and long-term management of ML data become more of a priority, quickly understanding the time when the underlying text (or other data types) is arguably more important. - time-series data: datasets are adding more support for time series data. Again, the periods covered might be particularly relevant here. **open questions** - I think some of my points above apply not only to the underlying data but also to annotations. As a result, there could also be an argument for encoding this information somewhere. However, I would argue (but could be persuaded otherwise) that this is probably less important for filtering. This type of context is already addressed in the datasheets template and often requires more narrative to discuss. - what level of granularity would make sense for this? e.g. assigning a decade, century or year? - how to encode this information? What formatting makes sense - what specific time to encode; a data range? (mean, modal, min, max value?) This is a slightly amorphous feature request - I would be happy to discuss further/try and propose a more concrete solution if this seems like something that could be worth considering. I realise this might also touch on other parts of the 🤗 hubs ecosystem.
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/3625/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/3625/timeline
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/4741
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4741/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4741/comments
https://api.github.com/repos/huggingface/datasets/issues/4741/events
https://github.com/huggingface/datasets/pull/4741
1,316,621,272
PR_kwDODunzps48B2fl
4,741
Fix to dict conversion of `DatasetInfo`/`Features`
{ "avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4", "events_url": "https://api.github.com/users/mariosasko/events{/privacy}", "followers_url": "https://api.github.com/users/mariosasko/followers", "following_url": "https://api.github.com/users/mariosasko/following{/other_user}", "gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/mariosasko", "id": 47462742, "login": "mariosasko", "node_id": "MDQ6VXNlcjQ3NDYyNzQy", "organizations_url": "https://api.github.com/users/mariosasko/orgs", "received_events_url": "https://api.github.com/users/mariosasko/received_events", "repos_url": "https://api.github.com/users/mariosasko/repos", "site_admin": false, "starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions", "type": "User", "url": "https://api.github.com/users/mariosasko" }
[]
closed
false
null
[]
null
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
2022-07-25T10:41:27Z
2022-07-25T12:50:36Z
2022-07-25T12:37:53Z
CONTRIBUTOR
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/4741.diff", "html_url": "https://github.com/huggingface/datasets/pull/4741", "merged_at": "2022-07-25T12:37:53Z", "patch_url": "https://github.com/huggingface/datasets/pull/4741.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/4741" }
Fix #4681
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/4741/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/4741/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/5313
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5313/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5313/comments
https://api.github.com/repos/huggingface/datasets/issues/5313/events
https://github.com/huggingface/datasets/pull/5313
1,468,484,136
PR_kwDODunzps5D6Qfb
5,313
Fix description of streaming in the docs
{ "avatar_url": "https://avatars.githubusercontent.com/u/16348744?v=4", "events_url": "https://api.github.com/users/polinaeterna/events{/privacy}", "followers_url": "https://api.github.com/users/polinaeterna/followers", "following_url": "https://api.github.com/users/polinaeterna/following{/other_user}", "gists_url": "https://api.github.com/users/polinaeterna/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/polinaeterna", "id": 16348744, "login": "polinaeterna", "node_id": "MDQ6VXNlcjE2MzQ4NzQ0", "organizations_url": "https://api.github.com/users/polinaeterna/orgs", "received_events_url": "https://api.github.com/users/polinaeterna/received_events", "repos_url": "https://api.github.com/users/polinaeterna/repos", "site_admin": false, "starred_url": "https://api.github.com/users/polinaeterna/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/polinaeterna/subscriptions", "type": "User", "url": "https://api.github.com/users/polinaeterna" }
[]
closed
false
null
[]
null
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
2022-11-29T18:00:28Z
2022-12-01T14:55:30Z
2022-12-01T14:00:34Z
CONTRIBUTOR
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/5313.diff", "html_url": "https://github.com/huggingface/datasets/pull/5313", "merged_at": "2022-12-01T14:00:34Z", "patch_url": "https://github.com/huggingface/datasets/pull/5313.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/5313" }
We say that "the data is being downloaded progressively" which is not true, it's just streamed, so I fixed it. Probably I missed some other places where it is written? Also changed docstrings for `StreamingDownloadManager`'s `download` and `extract` to reflect the same, as these docstrings are displayed in the documentation cc @lhoestq
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/5313/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/5313/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/4047
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4047/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4047/comments
https://api.github.com/repos/huggingface/datasets/issues/4047/events
https://github.com/huggingface/datasets/issues/4047
1,183,789,237
I_kwDODunzps5GjzC1
4,047
Dataset.unique(column: str) -> ArrowNotImplementedError
{ "avatar_url": "https://avatars.githubusercontent.com/u/1461936?v=4", "events_url": "https://api.github.com/users/orkenstein/events{/privacy}", "followers_url": "https://api.github.com/users/orkenstein/followers", "following_url": "https://api.github.com/users/orkenstein/following{/other_user}", "gists_url": "https://api.github.com/users/orkenstein/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/orkenstein", "id": 1461936, "login": "orkenstein", "node_id": "MDQ6VXNlcjE0NjE5MzY=", "organizations_url": "https://api.github.com/users/orkenstein/orgs", "received_events_url": "https://api.github.com/users/orkenstein/received_events", "repos_url": "https://api.github.com/users/orkenstein/repos", "site_admin": false, "starred_url": "https://api.github.com/users/orkenstein/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/orkenstein/subscriptions", "type": "User", "url": "https://api.github.com/users/orkenstein" }
[ { "color": "d73a4a", "default": true, "description": "Something isn't working", "id": 1935892857, "name": "bug", "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug" } ]
closed
false
null
[]
null
[ "Hi @orkenstein, thanks for reporting.\r\n\r\nPlease note that for this case, our `datasets` library uses under the hood the Apache Arrow `unique` function: https://arrow.apache.org/docs/python/generated/pyarrow.compute.unique.html#pyarrow.compute.unique\r\n\r\nAnd currently the Apache Arrow `unique` function is on...
2022-03-28T17:59:32Z
2022-04-01T18:24:57Z
2022-04-01T18:24:57Z
NONE
null
null
null
## Describe the bug I'm trying to use `unique()` function, but it fails ## Steps to reproduce the bug 1. Get dataset 2. Call `unique` 3. Error # Sample code to reproduce the bug ```python !pip show datasets from datasets import load_dataset dataset = load_dataset('wikiann', 'en') dataset['train'].column_names dataset['train'].unique(dataset['train'].column_names[0]) ``` ## Expected results It would be nice to actually see unique items ## Actual results Error: ```python --------------------------------------------------------------------------- ArrowNotImplementedError Traceback (most recent call last) [<ipython-input-10-5e0de07ed42c>](https://s0qyv2vjaji-496ff2e9c6d22116-0-colab.googleusercontent.com/outputframe.html?vrz=colab-20220324-060046-RC00_436956229#) in <module>() 6 7 dataset['train'].column_names ----> 8 dataset['train'].unique(dataset['train'].column_names[0]) 5 frames /usr/local/lib/python3.7/dist-packages/pyarrow/error.pxi in pyarrow.lib.check_status() ArrowNotImplementedError: Function unique has no kernel matching input types (array[list<item: string>]) ``` ## Environment info <!-- You can run the command `datasets-cli env` and copy-and-paste its output below. --> - `datasets` version: 2.0.0 - Platform: Google Collab - Python version: 3.7.13 - PyArrow version: 6.0.1
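The first column of wikiann ("tokens") is a list of strings, and as the comment above notes, Arrow's `unique` kernel has no implementation for list types. A sketch of a workaround that flattens the list column before deduplicating:

```python
import pyarrow.compute as pc
from datasets import load_dataset

dataset = load_dataset("wikiann", "en", split="train")

# The backing Arrow column has type list<string>; flatten it to a plain
# string array first, then apply the unique kernel that does exist
tokens = dataset.data.column("tokens")
unique_tokens = pc.unique(pc.list_flatten(tokens))
```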
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/4047/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/4047/timeline
null
completed
false
https://api.github.com/repos/huggingface/datasets/issues/5289
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5289/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5289/comments
https://api.github.com/repos/huggingface/datasets/issues/5289/events
https://github.com/huggingface/datasets/pull/5289
1,462,543,139
PR_kwDODunzps5Dmrk9
5,289
Added support for JXL images.
{ "avatar_url": "https://avatars.githubusercontent.com/u/445208?v=4", "events_url": "https://api.github.com/users/alexjc/events{/privacy}", "followers_url": "https://api.github.com/users/alexjc/followers", "following_url": "https://api.github.com/users/alexjc/following{/other_user}", "gists_url": "https://api.github.com/users/alexjc/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/alexjc", "id": 445208, "login": "alexjc", "node_id": "MDQ6VXNlcjQ0NTIwOA==", "organizations_url": "https://api.github.com/users/alexjc/orgs", "received_events_url": "https://api.github.com/users/alexjc/received_events", "repos_url": "https://api.github.com/users/alexjc/repos", "site_admin": false, "starred_url": "https://api.github.com/users/alexjc/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/alexjc/subscriptions", "type": "User", "url": "https://api.github.com/users/alexjc" }
[]
open
false
null
[]
null
[ "I'm fine with the addition of jxl in the list of known image extensions, this way users that have the plugin can work with their JXL datasets. WDYT @mariosasko ?", "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_5289). All of your documentation changes will be reflected on ...
2022-11-23T23:16:33Z
2022-11-29T18:49:46Z
null
NONE
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/5289.diff", "html_url": "https://github.com/huggingface/datasets/pull/5289", "merged_at": null, "patch_url": "https://github.com/huggingface/datasets/pull/5289.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/5289" }
JPEG-XL is the most advanced of the next-generation of image codecs, supporting both lossless and lossy files — with better compression and quality than PNG and JPG respectively. It has reduced the disk sizes and bandwidth required for many of the datasets I use. Pillow does not yet support JXL, but there's a plugin as a separate Python library that does (`pip install jxlpy`), and I've tested that this change works as expected when the plugin is imported. Dataset used for testing, you must `git pull` as loading it from Python won't work until `datasets-server` is also changed to support JXL files: https://huggingface.co/datasets/texturedesign/td01_natural-ground-textures The case where the plugin is not imported first raises an error: ``` PIL.UnidentifiedImageError: cannot identify image file 'td01/train/set01/01_145523.jxl' ``` In order to enable support for JXL even before pillow supports this, should this exception be handled with a better error message? I'd expect/hope JXL support to follow in one of the pillow quarterly releases in the next 6-9 months.
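For reference, a sketch of what "works when the plugin is imported" looks like; the import path is an assumption based on jxlpy's plugin module, not a tested recipe:

```python
from jxlpy import JXLImagePlugin  # noqa: F401  (assumed to register the JXL codec with Pillow)
from datasets import load_dataset

# Once Pillow can identify .jxl files, the image loader decodes them as usual
ds = load_dataset("imagefolder", data_dir="td01/train")
```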
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/5289/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/5289/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/5154
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5154/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5154/comments
https://api.github.com/repos/huggingface/datasets/issues/5154/events
https://github.com/huggingface/datasets/pull/5154
1,421,161,992
PR_kwDODunzps5BbpQZ
5,154
Test latest fsspec in CI
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq" }
[]
closed
false
null
[]
null
[ "_The documentation is not available anymore as the PR was closed or merged._", "actually the latest fsspec is already installed " ]
2022-10-24T17:18:13Z
2023-09-24T10:06:06Z
2022-10-25T09:30:45Z
MEMBER
null
1
{ "diff_url": "https://github.com/huggingface/datasets/pull/5154.diff", "html_url": "https://github.com/huggingface/datasets/pull/5154", "merged_at": null, "patch_url": "https://github.com/huggingface/datasets/pull/5154.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/5154" }
Following the discussion in https://discuss.huggingface.co/t/attributeerror-module-fsspec-has-no-attribute-asyn/19255 I think we need to test the latest fsspec in the CI
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/5154/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/5154/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/3614
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3614/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3614/comments
https://api.github.com/repos/huggingface/datasets/issues/3614/events
https://github.com/huggingface/datasets/pull/3614
1,110,736,657
PR_kwDODunzps4xZdCe
3,614
Minor fixes
{ "avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4", "events_url": "https://api.github.com/users/mariosasko/events{/privacy}", "followers_url": "https://api.github.com/users/mariosasko/followers", "following_url": "https://api.github.com/users/mariosasko/following{/other_user}", "gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/mariosasko", "id": 47462742, "login": "mariosasko", "node_id": "MDQ6VXNlcjQ3NDYyNzQy", "organizations_url": "https://api.github.com/users/mariosasko/orgs", "received_events_url": "https://api.github.com/users/mariosasko/received_events", "repos_url": "https://api.github.com/users/mariosasko/repos", "site_admin": false, "starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions", "type": "User", "url": "https://api.github.com/users/mariosasko" }
[]
closed
false
null
[]
null
[]
2022-01-21T17:48:44Z
2022-01-24T12:45:49Z
2022-01-24T12:45:49Z
CONTRIBUTOR
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/3614.diff", "html_url": "https://github.com/huggingface/datasets/pull/3614", "merged_at": "2022-01-24T12:45:49Z", "patch_url": "https://github.com/huggingface/datasets/pull/3614.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/3614" }
This PR: * adds "desc" to the `ignore_kwargs` list in `Dataset.filter` * fixes the default value of `id` in `DatasetDict.prepare_for_task`
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/3614/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/3614/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/5127
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5127/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5127/comments
https://api.github.com/repos/huggingface/datasets/issues/5127/events
https://github.com/huggingface/datasets/pull/5127
1,411,897,544
PR_kwDODunzps5A8m-Q
5,127
[WIP] WebDataset export
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq" }
[]
open
false
null
[]
null
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_5127). All of your documentation changes will be reflected on that endpoint.", "Should we close this PR?" ]
2022-10-17T16:50:22Z
2023-09-24T10:11:36Z
null
MEMBER
null
1
{ "diff_url": "https://github.com/huggingface/datasets/pull/5127.diff", "html_url": "https://github.com/huggingface/datasets/pull/5127", "merged_at": null, "patch_url": "https://github.com/huggingface/datasets/pull/5127.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/5127" }
I added a first draft of the `IterableDataset.to_wds` method. You can use it to save a dataset loaded in streaming mode as a WebDataset locally. The API can be further improved to allow exporting to a cloud storage like the HF Hub. I also included sharding with a default max shard size of 500MB (uncompressed), and it is single-processed for now. Choosing the number of shards is not implemented yet - though if we know the size of the `IterableDataset` this is probably doable. For example ```python >>> from datasets import load_dataset >>> ds = load_dataset("rotten_tomatoes", split="train", streaming=True) >>> ds.to_wds("output_dir", compress=True) >>> import webdataset as wds >>> ds = wds.WebDataset("output_dir/rotten_tomatoes-train-000000.tar.gz").decode() >>> next(iter(ds)) {'__key__': '0', '__url__': 'output_dir/rotten_tomatoes-train-000000.tar.gz', 'label.cls': 1, 'text.txt': 'the rock is destined to be the 21st century\'s new ..., jean-claud van damme or steven segal .'} ``` ### Implementation details The WebDataset format is made of TAR archives containing a series of files per example. For example one pair of `image.jpg` and `label.cls` for image classification. WebDataset automatically decodes serialized data based on the extension of the files, and outputs a dictionary. For example `{"image.png": np.array(...), "label.cls": 0}` if you choose the numpy decoding. To use the automatic decoding, I store each field of each example as a file with its corresponding extension (jpg, json, cls, etc.) While this is useful to end up with a dictionary with one key per column and appropriate decoding, it can create huge TAR archives if the dataset is made of small samples of text - probably because of the useless TAR metadata for each file. This also makes loading super slow: iterating over SQuAD takes 50 sec vs 7 sec using `datasets` in streaming mode. I haven't taken a look at alternatives for text datasets made of small samples, but for image datasets this can already be used to run some benchmarks.
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/5127/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/5127/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/1565
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1565/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1565/comments
https://api.github.com/repos/huggingface/datasets/issues/1565/events
https://github.com/huggingface/datasets/pull/1565
766,333,940
MDExOlB1bGxSZXF1ZXN0NTM5Mzg2MzEx
1,565
Create README.md
{ "avatar_url": "https://avatars.githubusercontent.com/u/43467008?v=4", "events_url": "https://api.github.com/users/ManuelFay/events{/privacy}", "followers_url": "https://api.github.com/users/ManuelFay/followers", "following_url": "https://api.github.com/users/ManuelFay/following{/other_user}", "gists_url": "https://api.github.com/users/ManuelFay/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/ManuelFay", "id": 43467008, "login": "ManuelFay", "node_id": "MDQ6VXNlcjQzNDY3MDA4", "organizations_url": "https://api.github.com/users/ManuelFay/orgs", "received_events_url": "https://api.github.com/users/ManuelFay/received_events", "repos_url": "https://api.github.com/users/ManuelFay/repos", "site_admin": false, "starred_url": "https://api.github.com/users/ManuelFay/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ManuelFay/subscriptions", "type": "User", "url": "https://api.github.com/users/ManuelFay" }
[]
closed
false
null
[]
null
[ "@ManuelFay thanks you so much for adding a dataset card, this is such a cool contribution!\r\n\r\nThis looks like it uses an old template for the card we've moved things around a bit and we have an app you should be using to get the tags and the structure of the Data Fields paragraph :) Would you mind moving your ...
2020-12-14T11:40:23Z
2021-03-25T14:01:49Z
2021-03-25T14:01:49Z
CONTRIBUTOR
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/1565.diff", "html_url": "https://github.com/huggingface/datasets/pull/1565", "merged_at": "2021-03-25T14:01:49Z", "patch_url": "https://github.com/huggingface/datasets/pull/1565.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/1565" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/1565/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/1565/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/5810
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5810/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5810/comments
https://api.github.com/repos/huggingface/datasets/issues/5810/events
https://github.com/huggingface/datasets/pull/5810
1,689,917,822
PR_kwDODunzps5PdJHI
5,810
Add `fn_kwargs` to `map` and `filter` of `IterableDataset` and `IterableDatasetDict`
{ "avatar_url": "https://avatars.githubusercontent.com/u/3927621?v=4", "events_url": "https://api.github.com/users/yuukicammy/events{/privacy}", "followers_url": "https://api.github.com/users/yuukicammy/followers", "following_url": "https://api.github.com/users/yuukicammy/following{/other_user}", "gists_url": "https://api.github.com/users/yuukicammy/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/yuukicammy", "id": 3927621, "login": "yuukicammy", "node_id": "MDQ6VXNlcjM5Mjc2MjE=", "organizations_url": "https://api.github.com/users/yuukicammy/orgs", "received_events_url": "https://api.github.com/users/yuukicammy/received_events", "repos_url": "https://api.github.com/users/yuukicammy/repos", "site_admin": false, "starred_url": "https://api.github.com/users/yuukicammy/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/yuukicammy/subscriptions", "type": "User", "url": "https://api.github.com/users/yuukicammy" }
[]
closed
false
null
[]
null
[ "_The documentation is not available anymore as the PR was closed or merged._", "Sorry, the local test passed because it was inadvertently testing the main branch. I am currently fixing where the test failed.", "- I have fixed the bug and addressed the above two points.\r\n- I have tested locally and confirmed ...
2023-04-30T13:23:01Z
2023-05-22T08:12:39Z
2023-05-22T08:05:31Z
CONTRIBUTOR
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/5810.diff", "html_url": "https://github.com/huggingface/datasets/pull/5810", "merged_at": "2023-05-22T08:05:31Z", "patch_url": "https://github.com/huggingface/datasets/pull/5810.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/5810" }
# Overview I've added an argument `fn_kwargs` to the map and filter methods of the `IterableDataset` and `IterableDatasetDict` classes. # Details Currently, the map and filter methods of some classes related to `IterableDataset` do not allow specifying the arguments passed to the function. This pull request adds `fn_kwargs` to pass arguments to the mapping function. This allows users to preprocess data more flexibly. Added `fn_kwargs` to the following classes and methods (a description of the argument is also added). 1. class `FilteredExamplesIterable` 2. method `filter` of class `IterableDataset` 3. method `map` of class `IterableDatasetDict` 4. method `filter` of class `IterableDatasetDict` # Example of changes Here's an example of how to use the new functionality: ```python from datasets import IterableDatasetDict def preprocess_function(example, a=None, b=None): # do something return example dataset = IterableDatasetDict(...) dataset = dataset.map(preprocess_function, fn_kwargs={"a": 1, "b": 2}) ``` # Related Issues This pull request is related to the following issue: https://github.com/huggingface/datasets/issues/3444 . # Testing I have added unit tests to test the new functionality. In test_iterable_dataset.py - Added `test_filtered_examples_iterable_with_fn_kwargs` for [1](#details). - Added `test_iterable_dataset_filter` for [2](#details). - Added `test_iterable_dataset_map_with_fn_kwargs`. This is not a newly added feature, but was added because it was not tested. In test_dataset_dict.py - Added `_create_dummy_iterable_dataset` for [3](#details) and [4](#details). - Added `_create_dummy_iterable_dataset_dict` for [3](#details) and [4](#details). - Added `test_iterable_map` for [3](#details). - Added `test_iterable_filter` for [4](#details). Note that there is no test for `IterableDatasetDict` on the current main branch. I thought about writing tests for `IterableDatasetDict` in a new file, but I decided to add them to the test file for `DatasetDict` (test_dataset_dict.py). # Checklist - [x] Format the code. - [x] Added tests. - [x] Passed tests locally.
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/5810/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/5810/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/1716
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1716/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1716/comments
https://api.github.com/repos/huggingface/datasets/issues/1716/events
https://github.com/huggingface/datasets/pull/1716
782,819,006
MDExOlB1bGxSZXF1ZXN0NTUyMjgzNzE5
1,716
Add Hatexplain Dataset
{ "avatar_url": "https://avatars.githubusercontent.com/u/48222101?v=4", "events_url": "https://api.github.com/users/kushal2000/events{/privacy}", "followers_url": "https://api.github.com/users/kushal2000/followers", "following_url": "https://api.github.com/users/kushal2000/following{/other_user}", "gists_url": "https://api.github.com/users/kushal2000/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/kushal2000", "id": 48222101, "login": "kushal2000", "node_id": "MDQ6VXNlcjQ4MjIyMTAx", "organizations_url": "https://api.github.com/users/kushal2000/orgs", "received_events_url": "https://api.github.com/users/kushal2000/received_events", "repos_url": "https://api.github.com/users/kushal2000/repos", "site_admin": false, "starred_url": "https://api.github.com/users/kushal2000/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/kushal2000/subscriptions", "type": "User", "url": "https://api.github.com/users/kushal2000" }
[]
closed
false
null
[]
null
[]
2021-01-10T13:30:01Z
2021-01-18T14:21:42Z
2021-01-18T14:21:42Z
CONTRIBUTOR
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/1716.diff", "html_url": "https://github.com/huggingface/datasets/pull/1716", "merged_at": "2021-01-18T14:21:42Z", "patch_url": "https://github.com/huggingface/datasets/pull/1716.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/1716" }
Adding Hatexplain - the first benchmark hate speech dataset covering multiple aspects of the issue
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/1716/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/1716/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/1541
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1541/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1541/comments
https://api.github.com/repos/huggingface/datasets/issues/1541/events
https://github.com/huggingface/datasets/issues/1541
765,430,586
MDU6SXNzdWU3NjU0MzA1ODY=
1,541
connection issue while downloading data
{ "avatar_url": "https://avatars.githubusercontent.com/u/73364383?v=4", "events_url": "https://api.github.com/users/rabeehkarimimahabadi/events{/privacy}", "followers_url": "https://api.github.com/users/rabeehkarimimahabadi/followers", "following_url": "https://api.github.com/users/rabeehkarimimahabadi/following{/other_user}", "gists_url": "https://api.github.com/users/rabeehkarimimahabadi/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/rabeehkarimimahabadi", "id": 73364383, "login": "rabeehkarimimahabadi", "node_id": "MDQ6VXNlcjczMzY0Mzgz", "organizations_url": "https://api.github.com/users/rabeehkarimimahabadi/orgs", "received_events_url": "https://api.github.com/users/rabeehkarimimahabadi/received_events", "repos_url": "https://api.github.com/users/rabeehkarimimahabadi/repos", "site_admin": false, "starred_url": "https://api.github.com/users/rabeehkarimimahabadi/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/rabeehkarimimahabadi/subscriptions", "type": "User", "url": "https://api.github.com/users/rabeehkarimimahabadi" }
[]
closed
false
null
[]
null
[ "could you tell me how I can avoid download, by pre-downloading the data first, put them in a folder so the code does not try to redownload? could you tell me the path to put the downloaded data, and how to do it? thanks\r\n@lhoestq ", "Does your instance have an internet connection ?\r\n\r\nIf you don't have an ...
2020-12-13T14:27:00Z
2022-10-05T12:33:29Z
2022-10-05T12:33:29Z
NONE
null
null
null
Hi I am running my codes on google cloud, and I am getting this error resulting in the failure of the codes when trying to download the data, could you assist me to solve this? also as a temporary solution, could you tell me how I can increase the number of retries and timeout to at least let the models run for now. thanks ``` Traceback (most recent call last): File "finetune_t5_trainer.py", line 361, in <module> main() File "finetune_t5_trainer.py", line 269, in main add_prefix=False if training_args.train_adapters else True) File "/workdir/seq2seq/data/tasks.py", line 70, in get_dataset dataset = self.load_dataset(split=split) File "/workdir/seq2seq/data/tasks.py", line 306, in load_dataset return datasets.load_dataset('glue', 'cola', split=split) File "/usr/local/lib/python3.6/dist-packages/datasets/load.py", line 589, in load_dataset path, script_version=script_version, download_config=download_config, download_mode=download_mode, dataset=True File "/usr/local/lib/python3.6/dist-packages/datasets/load.py", line 263, in prepare_module head_hf_s3(path, filename=name, dataset=dataset) File "/usr/local/lib/python3.6/dist-packages/datasets/utils/file_utils.py", line 200, in head_hf_s3 return http_head(hf_bucket_url(identifier=identifier, filename=filename, use_cdn=use_cdn, dataset=dataset)) File "/usr/local/lib/python3.6/dist-packages/datasets/utils/file_utils.py", line 403, in http_head url, proxies=proxies, headers=headers, cookies=cookies, allow_redirects=allow_redirects, timeout=timeout File "/usr/local/lib/python3.6/dist-packages/requests/api.py", line 104, in head return request('head', url, **kwargs) File "/usr/local/lib/python3.6/dist-packages/requests/api.py", line 61, in request return session.request(method=method, url=url, **kwargs) File "/usr/local/lib/python3.6/dist-packages/requests/sessions.py", line 542, in request resp = self.send(prep, **send_kwargs) File "/usr/local/lib/python3.6/dist-packages/requests/sessions.py", line 655, in send r = adapter.send(request, **kwargs) File "/usr/local/lib/python3.6/dist-packages/requests/adapters.py", line 504, in send raise ConnectTimeout(e, request=request) requests.exceptions.ConnectTimeout: HTTPSConnectionPool(host='s3.amazonaws.com', port=443): Max retries exceeded with url: /datasets.huggingface.co/datasets/datasets/glue/glue.py (Caused by ConnectTimeoutError(<urllib3.connection.HTTPSConnection object at 0x7f47db511e80>, 'Connection to s3.amazonaws.com timed out. (connect timeout=10)')) ```
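On the pre-downloading question raised in the comments, a sketch of the two-step workaround, assuming a `datasets` version recent enough to support offline mode:

```python
# Step 1: on a machine with internet access, populate the local cache
from datasets import load_dataset

load_dataset("glue", "cola")  # stored under ~/.cache/huggingface/datasets

# Step 2: copy the cache to the offline machine and disable network lookups:
#   export HF_DATASETS_OFFLINE=1
load_dataset("glue", "cola")  # now resolved from the cache, no retries/timeouts
```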
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/1541/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/1541/timeline
null
completed
false
https://api.github.com/repos/huggingface/datasets/issues/4897
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4897/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4897/comments
https://api.github.com/repos/huggingface/datasets/issues/4897/events
https://github.com/huggingface/datasets/issues/4897
1,351,784,727
I_kwDODunzps5QkpkX
4,897
datasets generate large arrow file
{ "avatar_url": "https://avatars.githubusercontent.com/u/18533904?v=4", "events_url": "https://api.github.com/users/osayes/events{/privacy}", "followers_url": "https://api.github.com/users/osayes/followers", "following_url": "https://api.github.com/users/osayes/following{/other_user}", "gists_url": "https://api.github.com/users/osayes/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/osayes", "id": 18533904, "login": "osayes", "node_id": "MDQ6VXNlcjE4NTMzOTA0", "organizations_url": "https://api.github.com/users/osayes/orgs", "received_events_url": "https://api.github.com/users/osayes/received_events", "repos_url": "https://api.github.com/users/osayes/repos", "site_admin": false, "starred_url": "https://api.github.com/users/osayes/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/osayes/subscriptions", "type": "User", "url": "https://api.github.com/users/osayes" }
[ { "color": "d73a4a", "default": true, "description": "Something isn't working", "id": 1935892857, "name": "bug", "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug" } ]
closed
false
null
[]
null
[ "Hi ! The cache files are the results of all the transforms you applied to the dataset using `map` for example.\r\nDid you run a transform that could potentially blow up the size of the dataset ?", "@lhoestq,\r\nI don't remember, but I can't imagine what kind of transform may generate data that grow over 200 time...
2022-08-26T05:51:16Z
2022-09-18T05:07:52Z
2022-09-18T05:07:52Z
NONE
null
null
null
While checking large files on disk, I found this large cache file in the cifar10 data directory: ![image](https://user-images.githubusercontent.com/18533904/186830449-ba96cdeb-0fe8-4543-994d-2abe7145933f.png) As we know, the cifar10 dataset is ~130MB, but the cache file is almost 30GB, so there may be some problem here.
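As the maintainer's comment explains, those .arrow files are the cached results of transforms such as `map`; a sketch of inspecting and removing them:

```python
from datasets import load_dataset

ds = load_dataset("cifar10", split="train")

print(ds.cache_files)               # arrow files backing this dataset on disk
removed = ds.cleanup_cache_files()  # deletes cache files except the one in use
print(f"removed {removed} cache file(s)")
```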
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/4897/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/4897/timeline
null
completed
false
https://api.github.com/repos/huggingface/datasets/issues/4308
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4308/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4308/comments
https://api.github.com/repos/huggingface/datasets/issues/4308/events
https://github.com/huggingface/datasets/pull/4308
1,231,217,783
PR_kwDODunzps43lHdP
4,308
Remove unused multiprocessing args from test CLI
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" }
[]
closed
false
null
[]
null
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
2022-05-10T14:02:15Z
2022-05-11T12:58:25Z
2022-05-11T12:50:43Z
MEMBER
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/4308.diff", "html_url": "https://github.com/huggingface/datasets/pull/4308", "merged_at": "2022-05-11T12:50:42Z", "patch_url": "https://github.com/huggingface/datasets/pull/4308.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/4308" }
Multiprocessing is not used in the test CLI.
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/4308/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/4308/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/2526
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2526/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2526/comments
https://api.github.com/repos/huggingface/datasets/issues/2526/events
https://github.com/huggingface/datasets/issues/2526
925,929,228
MDU6SXNzdWU5MjU5MjkyMjg=
2,526
Add COCO datasets
{ "avatar_url": "https://avatars.githubusercontent.com/u/48327001?v=4", "events_url": "https://api.github.com/users/NielsRogge/events{/privacy}", "followers_url": "https://api.github.com/users/NielsRogge/followers", "following_url": "https://api.github.com/users/NielsRogge/following{/other_user}", "gists_url": "https://api.github.com/users/NielsRogge/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/NielsRogge", "id": 48327001, "login": "NielsRogge", "node_id": "MDQ6VXNlcjQ4MzI3MDAx", "organizations_url": "https://api.github.com/users/NielsRogge/orgs", "received_events_url": "https://api.github.com/users/NielsRogge/received_events", "repos_url": "https://api.github.com/users/NielsRogge/repos", "site_admin": false, "starred_url": "https://api.github.com/users/NielsRogge/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/NielsRogge/subscriptions", "type": "User", "url": "https://api.github.com/users/NielsRogge" }
[ { "color": "e99695", "default": false, "description": "Requesting to add a new dataset", "id": 2067376369, "name": "dataset request", "node_id": "MDU6TGFiZWwyMDY3Mzc2MzY5", "url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20request" }, { "color": "bfdadc",...
open
false
{ "avatar_url": "https://avatars.githubusercontent.com/u/53175384?v=4", "events_url": "https://api.github.com/users/merveenoyan/events{/privacy}", "followers_url": "https://api.github.com/users/merveenoyan/followers", "following_url": "https://api.github.com/users/merveenoyan/following{/other_user}", "gists_url": "https://api.github.com/users/merveenoyan/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/merveenoyan", "id": 53175384, "login": "merveenoyan", "node_id": "MDQ6VXNlcjUzMTc1Mzg0", "organizations_url": "https://api.github.com/users/merveenoyan/orgs", "received_events_url": "https://api.github.com/users/merveenoyan/received_events", "repos_url": "https://api.github.com/users/merveenoyan/repos", "site_admin": false, "starred_url": "https://api.github.com/users/merveenoyan/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/merveenoyan/subscriptions", "type": "User", "url": "https://api.github.com/users/merveenoyan" }
[ { "avatar_url": "https://avatars.githubusercontent.com/u/53175384?v=4", "events_url": "https://api.github.com/users/merveenoyan/events{/privacy}", "followers_url": "https://api.github.com/users/merveenoyan/followers", "following_url": "https://api.github.com/users/merveenoyan/following{/other_user}"...
null
[ "I'm currently adding it, the entire dataset is quite big around 30 GB so I add splits separately. You can take a look here https://huggingface.co/datasets/merve/coco", "I talked to @lhoestq and it's best if I download this dataset through TensorFlow datasets instead, so I'll be implementing that one really soon....
2021-06-21T07:48:32Z
2023-06-22T14:12:18Z
null
CONTRIBUTOR
null
null
null
## Adding a Dataset - **Name:** COCO - **Description:** COCO is a large-scale object detection, segmentation, and captioning dataset. - **Paper + website:** https://cocodataset.org/#home - **Data:** https://cocodataset.org/#download - **Motivation:** It would be great to have COCO available in HuggingFace datasets, as we are moving beyond just text. COCO includes multi-modalities (images + text), as well as a huge amount of images annotated with objects, segmentation masks, keypoints etc., on which models like DETR (which I recently added to HuggingFace Transformers) are trained. Currently, one needs to download everything from the website and place it in a local folder, but it would be much easier if we can directly access it through the datasets API. Instructions to add a new dataset can be found [here](https://github.com/huggingface/datasets/blob/master/ADD_NEW_DATASET.md).
{ "+1": 3, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 3, "url": "https://api.github.com/repos/huggingface/datasets/issues/2526/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/2526/timeline
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/3920
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3920/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3920/comments
https://api.github.com/repos/huggingface/datasets/issues/3920/events
https://github.com/huggingface/datasets/issues/3920
1,169,532,807
I_kwDODunzps5FtaeH
3,920
'datasets.features' is not a package
{ "avatar_url": "https://avatars.githubusercontent.com/u/68355048?v=4", "events_url": "https://api.github.com/users/Arij-Aladel/events{/privacy}", "followers_url": "https://api.github.com/users/Arij-Aladel/followers", "following_url": "https://api.github.com/users/Arij-Aladel/following{/other_user}", "gists_url": "https://api.github.com/users/Arij-Aladel/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/Arij-Aladel", "id": 68355048, "login": "Arij-Aladel", "node_id": "MDQ6VXNlcjY4MzU1MDQ4", "organizations_url": "https://api.github.com/users/Arij-Aladel/orgs", "received_events_url": "https://api.github.com/users/Arij-Aladel/received_events", "repos_url": "https://api.github.com/users/Arij-Aladel/repos", "site_admin": false, "starred_url": "https://api.github.com/users/Arij-Aladel/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Arij-Aladel/subscriptions", "type": "User", "url": "https://api.github.com/users/Arij-Aladel" }
[]
closed
false
null
[]
null
[ "Hi @Arij-Aladel,\r\n\r\nYou are using a very old version of our library `datasets`: 1.8.0\r\nCurrent version is 2.0.0 (and the previous one was 1.18.4)\r\n\r\nPlease, try to update `datasets` library and check if the problem persists:\r\n```shell\r\n/env/bin/pip install -U datasets", "The problem I can no I have...
2022-03-15T11:14:23Z
2022-03-16T09:17:12Z
2022-03-16T09:17:12Z
NONE
null
null
null
@albertvillanova python 3.9 os: ubuntu 20.04 In a conda environment, torch was installed with ```/env/bin/pip install torch==1.9.0+cu111 torchvision==0.10.0+cu111 torchaudio==0.9.0 -f https://download.pytorch.org/whl/torch_stable.html``` and the datasets package was installed with ``` /env/bin/pip install datasets==1.8.0 ``` While running the code I get this error ``` [6]<stderr>: File "/home/arij/Memory-transformer-with-hierarchical-attention_MLM/env/lib/python3.9/site-packages/torch/serialization.py", line 875, in find_class [6]<stderr>: return super().find_class(mod_name, name) [6]<stderr>:ModuleNotFoundError: No module named 'datasets.features.features'; 'datasets.features' is not a package ``` Precisely, this error appears when calling torch.load('data_file.pt'): ``` Traceback (most recent call last): File "<stdin>", line 1, in <module> File "/home/arij/Memory-transformer-with-hierarchical-attention_MLM/env/lib/python3.9/site-packages/torch/serialization.py", line 607, in load return _load(opened_zipfile, map_location, pickle_module, **pickle_load_args) File "/home/arij/Memory-transformer-with-hierarchical-attention_MLM/env/lib/python3.9/site-packages/torch/serialization.py", line 882, in _load result = unpickler.load() File "/home/arij/Memory-transformer-with-hierarchical-attention_MLM/env/lib/python3.9/site-packages/torch/serialization.py", line 875, in find_class return super().find_class(mod_name, name) ModuleNotFoundError: No module named 'datasets.features.features'; 'datasets.features' is not a package ``` Why am I getting this error?
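For context, a hedged, illustrative-only shim for when upgrading `datasets` (the fix suggested in the comments) is not an option: the pickle references the module path `datasets.features.features`, which releases where `features` was still a single module do not have, so aliasing that path can let the unpickler resolve it. This assumes the class names match across versions and is a sketch, not a supported API:

```python
import sys

import datasets.features  # in old releases this is a single module, not a package
import torch

# Hypothetical alias: make the unpickler's lookup of the newer module path
# "datasets.features.features" resolve to the old single-module location.
sys.modules["datasets.features.features"] = datasets.features

obj = torch.load("data_file.pt")
```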
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/3920/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/3920/timeline
null
completed
false
https://api.github.com/repos/huggingface/datasets/issues/2276
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2276/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2276/comments
https://api.github.com/repos/huggingface/datasets/issues/2276/events
https://github.com/huggingface/datasets/issues/2276
870,010,511
MDU6SXNzdWU4NzAwMTA1MTE=
2,276
concatenate_datasets loads all the data into memory
{ "avatar_url": "https://avatars.githubusercontent.com/u/7063207?v=4", "events_url": "https://api.github.com/users/chbensch/events{/privacy}", "followers_url": "https://api.github.com/users/chbensch/followers", "following_url": "https://api.github.com/users/chbensch/following{/other_user}", "gists_url": "https://api.github.com/users/chbensch/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/chbensch", "id": 7063207, "login": "chbensch", "node_id": "MDQ6VXNlcjcwNjMyMDc=", "organizations_url": "https://api.github.com/users/chbensch/orgs", "received_events_url": "https://api.github.com/users/chbensch/received_events", "repos_url": "https://api.github.com/users/chbensch/repos", "site_admin": false, "starred_url": "https://api.github.com/users/chbensch/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/chbensch/subscriptions", "type": "User", "url": "https://api.github.com/users/chbensch" }
[ { "color": "d73a4a", "default": true, "description": "Something isn't working", "id": 1935892857, "name": "bug", "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug" } ]
closed
false
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq" }
[ { "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists...
null
[ "Therefore, when I try to concatenate larger datasets (5x 35GB data sets) I also get an out of memory error, since over 90GB of swap space was used at the time of the crash:\r\n\r\n```\r\n---------------------------------------------------------------------------\r\nMemoryError Traceba...
2021-04-28T14:27:21Z
2021-05-03T08:41:55Z
2021-05-03T08:41:55Z
NONE
null
null
null
## Describe the bug When I try to concatenate 2 datasets (10GB each), all of the data is loaded into memory instead of being written directly to disk. Interestingly, this happens when trying to save the new dataset to disk or concatenating it again. ![image](https://user-images.githubusercontent.com/7063207/116420321-2b21b480-a83e-11eb-9006-8f6ca729fb6f.png) ## Steps to reproduce the bug ```python from datasets import concatenate_datasets, load_from_disk test_sampled_pro = load_from_disk("test_sampled_pro") val_sampled_pro = load_from_disk("val_sampled_pro") big_set = concatenate_datasets([test_sampled_pro, val_sampled_pro]) # Loaded to memory big_set.save_to_disk("big_set") # Loaded to memory big_set = concatenate_datasets([big_set, val_sampled_pro]) ``` ## Expected results The data should be loaded into memory in batches and then saved directly to disk. ## Actual results The entire dataset is loaded into memory and then saved to disk. ## Versions ```python - Datasets: 1.6.1 - Python: 3.8.8 (default, Apr 13 2021, 19:58:26) [GCC 7.3.0] - Platform: Linux-5.4.72-microsoft-standard-WSL2-x86_64-with-glibc2.10 ```
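A hedged workaround sketch (not necessarily the library's eventual fix): write the concatenated dataset out shard by shard so that only one shard needs to be materialized at a time. The `num_shards` value is an illustrative choice, not from the original report:

```python
from datasets import concatenate_datasets, load_from_disk

test_sampled_pro = load_from_disk("test_sampled_pro")
val_sampled_pro = load_from_disk("val_sampled_pro")
big_set = concatenate_datasets([test_sampled_pro, val_sampled_pro])

# Saving shard by shard keeps peak memory to roughly one shard's worth.
num_shards = 16  # illustrative
for i in range(num_shards):
    shard = big_set.shard(num_shards=num_shards, index=i)
    shard.save_to_disk(f"big_set/shard_{i}")
```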
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/2276/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/2276/timeline
null
completed
false
https://api.github.com/repos/huggingface/datasets/issues/4369
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4369/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4369/comments
https://api.github.com/repos/huggingface/datasets/issues/4369/events
https://github.com/huggingface/datasets/pull/4369
1,240,245,642
PR_kwDODunzps44CpCe
4,369
Add redirect to dataset script in the repo structure page
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq" }
[]
closed
false
null
[]
null
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
2022-05-18T17:05:33Z
2022-05-19T08:19:01Z
2022-05-19T08:10:51Z
MEMBER
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/4369.diff", "html_url": "https://github.com/huggingface/datasets/pull/4369", "merged_at": "2022-05-19T08:10:51Z", "patch_url": "https://github.com/huggingface/datasets/pull/4369.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/4369" }
Following https://github.com/huggingface/hub-docs/pull/146 I added a redirection to the dataset scripts documentation in the repository structure page.
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/4369/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/4369/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/3881
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3881/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3881/comments
https://api.github.com/repos/huggingface/datasets/issues/3881/events
https://github.com/huggingface/datasets/issues/3881
1,164,452,005
I_kwDODunzps5FaCCl
3,881
How to use Image folder
{ "avatar_url": "https://avatars.githubusercontent.com/u/45640029?v=4", "events_url": "https://api.github.com/users/INF800/events{/privacy}", "followers_url": "https://api.github.com/users/INF800/followers", "following_url": "https://api.github.com/users/INF800/following{/other_user}", "gists_url": "https://api.github.com/users/INF800/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/INF800", "id": 45640029, "login": "INF800", "node_id": "MDQ6VXNlcjQ1NjQwMDI5", "organizations_url": "https://api.github.com/users/INF800/orgs", "received_events_url": "https://api.github.com/users/INF800/received_events", "repos_url": "https://api.github.com/users/INF800/repos", "site_admin": false, "starred_url": "https://api.github.com/users/INF800/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/INF800/subscriptions", "type": "User", "url": "https://api.github.com/users/INF800" }
[ { "color": "d876e3", "default": true, "description": "Further information is requested", "id": 1935892912, "name": "question", "node_id": "MDU6TGFiZWwxOTM1ODkyOTEy", "url": "https://api.github.com/repos/huggingface/datasets/labels/question" } ]
closed
false
null
[]
null
[ "Even this from docs throw same error\r\n```\r\ndataset = load_dataset(\"imagefolder\", data_files=\"https://download.microsoft.com/download/3/E/1/3E1C3F21-ECDB-4869-8368-6DEBA77B919F/kagglecatsanddogs_3367a.zip\", split=\"train\")\r\n\r\n```", "Hi @INF800,\r\n\r\nPlease note that the `imagefolder` feature enhanc...
2022-03-09T21:18:52Z
2022-03-11T08:45:52Z
2022-03-11T08:45:52Z
NONE
null
null
null
Ran this code: ``` load_dataset("imagefolder", data_dir="./my-dataset") ``` The script at `https://raw.githubusercontent.com/huggingface/datasets/master/datasets/imagefolder/imagefolder.py` is missing: ``` --------------------------------------------------------------------------- FileNotFoundError Traceback (most recent call last) /tmp/ipykernel_33/1648737256.py in <module> ----> 1 load_dataset("imagefolder", data_dir="./my-dataset") /opt/conda/lib/python3.7/site-packages/datasets/load.py in load_dataset(path, name, data_dir, data_files, split, cache_dir, features, download_config, download_mode, ignore_verifications, keep_in_memory, save_infos, revision, use_auth_token, task, streaming, script_version, **config_kwargs) 1684 revision=revision, 1685 use_auth_token=use_auth_token, -> 1686 **config_kwargs, 1687 ) 1688 /opt/conda/lib/python3.7/site-packages/datasets/load.py in load_dataset_builder(path, name, data_dir, data_files, cache_dir, features, download_config, download_mode, revision, use_auth_token, script_version, **config_kwargs) 1511 download_config.use_auth_token = use_auth_token 1512 dataset_module = dataset_module_factory( -> 1513 path, revision=revision, download_config=download_config, download_mode=download_mode, data_files=data_files 1514 ) 1515 /opt/conda/lib/python3.7/site-packages/datasets/load.py in dataset_module_factory(path, revision, download_config, download_mode, force_local_path, dynamic_modules_path, data_files, **download_kwargs) 1200 f"Couldn't find a dataset script at {relative_to_absolute_path(combined_path)} or any data file in the same directory. " 1201 f"Couldn't find '{path}' on the Hugging Face Hub either: {type(e1).__name__}: {e1}" -> 1202 ) from None 1203 raise e1 from None 1204 else: FileNotFoundError: Couldn't find a dataset script at /kaggle/working/imagefolder/imagefolder.py or any data file in the same directory. Couldn't find 'imagefolder' on the Hugging Face Hub either: FileNotFoundError: Couldn't find file at https://raw.githubusercontent.com/huggingface/datasets/master/datasets/imagefolder/imagefolder.py ```
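A hedged sketch following the maintainers' reply: the `imagefolder` builder was not part of the installed release at the time, so the prerequisite is a `datasets` version that ships it (for example, installing from source, shown here as a comment):

```python
# Prerequisite (run once in a shell, not Python):
#   pip install git+https://github.com/huggingface/datasets
from datasets import load_dataset

# Same call as in the report; it works once a version with `imagefolder` is installed.
ds = load_dataset("imagefolder", data_dir="./my-dataset")
```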
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/3881/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/3881/timeline
null
completed
false
https://api.github.com/repos/huggingface/datasets/issues/2078
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2078/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2078/comments
https://api.github.com/repos/huggingface/datasets/issues/2078/events
https://github.com/huggingface/datasets/issues/2078
834,694,819
MDU6SXNzdWU4MzQ2OTQ4MTk=
2,078
MemoryError when computing WER metric
{ "avatar_url": "https://avatars.githubusercontent.com/u/5707233?v=4", "events_url": "https://api.github.com/users/diego-fustes/events{/privacy}", "followers_url": "https://api.github.com/users/diego-fustes/followers", "following_url": "https://api.github.com/users/diego-fustes/following{/other_user}", "gists_url": "https://api.github.com/users/diego-fustes/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/diego-fustes", "id": 5707233, "login": "diego-fustes", "node_id": "MDQ6VXNlcjU3MDcyMzM=", "organizations_url": "https://api.github.com/users/diego-fustes/orgs", "received_events_url": "https://api.github.com/users/diego-fustes/received_events", "repos_url": "https://api.github.com/users/diego-fustes/repos", "site_admin": false, "starred_url": "https://api.github.com/users/diego-fustes/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/diego-fustes/subscriptions", "type": "User", "url": "https://api.github.com/users/diego-fustes" }
[ { "color": "25b21e", "default": false, "description": "A bug in a metric script", "id": 2067393914, "name": "metric bug", "node_id": "MDU6TGFiZWwyMDY3MzkzOTE0", "url": "https://api.github.com/repos/huggingface/datasets/labels/metric%20bug" } ]
closed
false
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" }
[ { "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/o...
null
[ "Hi ! Thanks for reporting.\r\nWe're indeed using `jiwer` to compute the WER.\r\n\r\nMaybe instead of calling `jiwer.wer` once for all the preditions/references we can compute the WER iteratively to avoid memory issues ? I'm not too familial with `jiwer` but this must be possible.\r\n\r\nCurrently the code to compu...
2021-03-18T11:30:05Z
2021-05-01T08:31:49Z
2021-04-06T07:20:43Z
NONE
null
null
null
Hi, I'm trying to follow the ASR example to try Wav2Vec. This is the code that I use for WER calculation: ``` wer = load_metric("wer") print(wer.compute(predictions=result["predicted"], references=result["target"])) ``` However, I receive the following exception: `Traceback (most recent call last): File "/home/diego/IpGlobal/wav2vec/test_wav2vec.py", line 51, in <module> print(wer.compute(predictions=result["predicted"], references=result["target"])) File "/home/diego/miniconda3/envs/wav2vec3.6/lib/python3.6/site-packages/datasets/metric.py", line 403, in compute output = self._compute(predictions=predictions, references=references, **kwargs) File "/home/diego/.cache/huggingface/modules/datasets_modules/metrics/wer/73b2d32b723b7fb8f204d785c00980ae4d937f12a65466f8fdf78706e2951281/wer.py", line 94, in _compute return wer(references, predictions) File "/home/diego/miniconda3/envs/wav2vec3.6/lib/python3.6/site-packages/jiwer/measures.py", line 81, in wer truth, hypothesis, truth_transform, hypothesis_transform, **kwargs File "/home/diego/miniconda3/envs/wav2vec3.6/lib/python3.6/site-packages/jiwer/measures.py", line 192, in compute_measures H, S, D, I = _get_operation_counts(truth, hypothesis) File "/home/diego/miniconda3/envs/wav2vec3.6/lib/python3.6/site-packages/jiwer/measures.py", line 273, in _get_operation_counts editops = Levenshtein.editops(source_string, destination_string) MemoryError` My system has more than 10GB of available RAM. Looking at the code, I think it could be related to the way jiwer does the calculation, as it concatenates all the sentences into a single string before calling the Levenshtein editops function.
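A hedged sketch of computing WER incrementally, along the lines suggested in the comments, instead of letting jiwer join every sentence into one giant string. It assumes a jiwer version that exposes `compute_measures` (as in the traceback above) with `hits`/`substitutions`/`deletions`/`insertions` counts:

```python
import jiwer

def chunked_wer(references, predictions, chunk_size=1000):
    """Accumulate edit counts chunk by chunk to keep memory bounded."""
    errors, words = 0, 0
    for start in range(0, len(references), chunk_size):
        m = jiwer.compute_measures(
            references[start : start + chunk_size],
            predictions[start : start + chunk_size],
        )
        errors += m["substitutions"] + m["deletions"] + m["insertions"]
        words += m["substitutions"] + m["deletions"] + m["hits"]
    return errors / words
```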
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/2078/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/2078/timeline
null
completed
false
https://api.github.com/repos/huggingface/datasets/issues/6088
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6088/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6088/comments
https://api.github.com/repos/huggingface/datasets/issues/6088/events
https://github.com/huggingface/datasets/issues/6088
1,825,665,235
I_kwDODunzps5s0XDT
6,088
Loading local data files initiates web requests
{ "avatar_url": "https://avatars.githubusercontent.com/u/23375707?v=4", "events_url": "https://api.github.com/users/lytning98/events{/privacy}", "followers_url": "https://api.github.com/users/lytning98/followers", "following_url": "https://api.github.com/users/lytning98/following{/other_user}", "gists_url": "https://api.github.com/users/lytning98/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lytning98", "id": 23375707, "login": "lytning98", "node_id": "MDQ6VXNlcjIzMzc1NzA3", "organizations_url": "https://api.github.com/users/lytning98/orgs", "received_events_url": "https://api.github.com/users/lytning98/received_events", "repos_url": "https://api.github.com/users/lytning98/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lytning98/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lytning98/subscriptions", "type": "User", "url": "https://api.github.com/users/lytning98" }
[]
closed
false
null
[]
null
[]
2023-07-28T04:06:26Z
2023-07-28T05:02:22Z
2023-07-28T05:02:22Z
NONE
null
null
null
As documented in the [official docs](https://huggingface.co/docs/datasets/v2.14.0/en/package_reference/loading_methods#datasets.load_dataset.example-2), I tried to load datasets from local files with ```python # Load a JSON file from datasets import load_dataset ds = load_dataset('json', data_files='path/to/local/my_dataset.json') ``` But this failed with a web request because I'm executing the script on a machine without Internet access. The stacktrace shows ``` in PackagedDatasetModuleFactory.__init__(self, name, data_dir, data_files, download_config, download_mode) 940 self.download_config = download_config 941 self.download_mode = download_mode --> 942 increase_load_count(name, resource_type="dataset") ``` I've read from the source code that this can be fixed by setting an environment variable to run in offline mode. I'm just wondering: is it expected behaviour that even loading a LOCAL JSON file requires Internet access by default? And what's the point of calling `increase_load_count` on some server when loading just LOCAL data files?
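A hedged sketch of the offline-mode fix mentioned above: `HF_DATASETS_OFFLINE` is the documented environment variable, and it must be set before `datasets` is imported for it to take effect:

```python
import os

# Set offline mode before the import so the config picks it up.
os.environ["HF_DATASETS_OFFLINE"] = "1"

from datasets import load_dataset

ds = load_dataset("json", data_files="path/to/local/my_dataset.json")
```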
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/6088/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/6088/timeline
null
completed
false
https://api.github.com/repos/huggingface/datasets/issues/6164
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6164/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6164/comments
https://api.github.com/repos/huggingface/datasets/issues/6164/events
https://github.com/huggingface/datasets/pull/6164
1,859,560,007
PR_kwDODunzps5YZZAJ
6,164
Fix: Missing a MetadataConfigs init when the repo has a `datasets_info.json` but no README
{ "avatar_url": "https://avatars.githubusercontent.com/u/22726840?v=4", "events_url": "https://api.github.com/users/clefourrier/events{/privacy}", "followers_url": "https://api.github.com/users/clefourrier/followers", "following_url": "https://api.github.com/users/clefourrier/following{/other_user}", "gists_url": "https://api.github.com/users/clefourrier/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/clefourrier", "id": 22726840, "login": "clefourrier", "node_id": "MDQ6VXNlcjIyNzI2ODQw", "organizations_url": "https://api.github.com/users/clefourrier/orgs", "received_events_url": "https://api.github.com/users/clefourrier/received_events", "repos_url": "https://api.github.com/users/clefourrier/repos", "site_admin": false, "starred_url": "https://api.github.com/users/clefourrier/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/clefourrier/subscriptions", "type": "User", "url": "https://api.github.com/users/clefourrier" }
[]
closed
false
null
[]
null
[ "_The documentation is not available anymore as the PR was closed or merged._", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | rea...
2023-08-21T14:57:54Z
2023-08-21T16:27:05Z
2023-08-21T16:18:26Z
CONTRIBUTOR
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/6164.diff", "html_url": "https://github.com/huggingface/datasets/pull/6164", "merged_at": "2023-08-21T16:18:26Z", "patch_url": "https://github.com/huggingface/datasets/pull/6164.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/6164" }
When I try to push to an arrow repo (I can provide the link on Slack), it uploads the files but fails to update the metadata, with ``` File "app.py", line 123, in add_new_eval eval_results[level].push_to_hub(my_repo, token=TOKEN, split=SPLIT) File "blabla_my_env_path/lib/python3.10/site-packages/datasets/arrow_dataset.py", line 5501, in push_to_hub if not metadata_configs: UnboundLocalError: local variable 'metadata_configs' referenced before assignment ``` This PR fixes it.
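A generic, illustrative-only sketch of the fix pattern applied here (names are placeholders, not the actual library code): bind the variable up front so reading it later is safe on every path:

```python
def build_configs(readme_exists: bool):
    # The up-front init is the essence of the fix: without it, the read below
    # raised UnboundLocalError whenever the repo had no README.
    metadata_configs = {}
    if readme_exists:
        metadata_configs = {"default": {}}  # stand-in for metadata parsed from the README
    if not metadata_configs:
        metadata_configs = {"default": {"data_files": "*.arrow"}}  # illustrative fallback
    return metadata_configs
```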
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/6164/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/6164/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/4819
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4819/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4819/comments
https://api.github.com/repos/huggingface/datasets/issues/4819/events
https://github.com/huggingface/datasets/pull/4819
1,335,064,449
PR_kwDODunzps48-xc6
4,819
Add missing language tags to resources
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" }
[]
closed
false
null
[]
null
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
2022-08-10T19:06:42Z
2022-08-10T19:45:49Z
2022-08-10T19:32:15Z
MEMBER
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/4819.diff", "html_url": "https://github.com/huggingface/datasets/pull/4819", "merged_at": "2022-08-10T19:32:15Z", "patch_url": "https://github.com/huggingface/datasets/pull/4819.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/4819" }
Add missing language tags to resources, required by existing datasets on GitHub.
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/4819/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/4819/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/5220
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5220/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5220/comments
https://api.github.com/repos/huggingface/datasets/issues/5220/events
https://github.com/huggingface/datasets/issues/5220
1,441,664,377
I_kwDODunzps5V7g15
5,220
Implicit type conversion of lists in to_pandas
{ "avatar_url": "https://avatars.githubusercontent.com/u/48946947?v=4", "events_url": "https://api.github.com/users/sanderland/events{/privacy}", "followers_url": "https://api.github.com/users/sanderland/followers", "following_url": "https://api.github.com/users/sanderland/following{/other_user}", "gists_url": "https://api.github.com/users/sanderland/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/sanderland", "id": 48946947, "login": "sanderland", "node_id": "MDQ6VXNlcjQ4OTQ2OTQ3", "organizations_url": "https://api.github.com/users/sanderland/orgs", "received_events_url": "https://api.github.com/users/sanderland/received_events", "repos_url": "https://api.github.com/users/sanderland/repos", "site_admin": false, "starred_url": "https://api.github.com/users/sanderland/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sanderland/subscriptions", "type": "User", "url": "https://api.github.com/users/sanderland" }
[]
closed
false
null
[]
null
[ "I think this behavior comes from PyArrow:\r\n```python\r\nimport pyarrow as pa\r\nt = pa.table({\"a\": [[0]]})\r\nt.to_pandas().a.values[0]\r\n# array([0])\r\n```\r\n\r\nI believe this has to do with zero-copy: you can get a pandas DataFrame without copying the buffers from arrow, and therefore end up with numpy a...
2022-11-09T08:40:18Z
2022-11-10T16:12:26Z
2022-11-10T16:12:26Z
CONTRIBUTOR
null
null
null
### Describe the bug ``` ds = Dataset.from_list([{'a':[1,2,3]}]) ds.to_pandas().a.values[0] ``` This results in `array([1, 2, 3])` -- a rather unexpected type conversion that breaks downstream tools expecting lists. ### Steps to reproduce the bug See snippet ### Expected behavior Keep the original type ### Environment info datasets 2.6.1 python 3.8.10
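A hedged workaround sketch, given the zero-copy explanation in the comments: map the numpy cells back to Python lists after `to_pandas()`:

```python
from datasets import Dataset

ds = Dataset.from_list([{"a": [1, 2, 3]}])
df = ds.to_pandas()

# Arrow's zero-copy path yields np.ndarray cells; convert them back to lists.
df["a"] = df["a"].apply(list)
assert df.a.values[0] == [1, 2, 3]
```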
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/5220/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/5220/timeline
null
completed
false
https://api.github.com/repos/huggingface/datasets/issues/2429
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2429/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2429/comments
https://api.github.com/repos/huggingface/datasets/issues/2429/events
https://github.com/huggingface/datasets/pull/2429
907,321,665
MDExOlB1bGxSZXF1ZXN0NjU4MTg2ODc0
2,429
Rename QuestionAnswering template to QuestionAnsweringExtractive
{ "avatar_url": "https://avatars.githubusercontent.com/u/26859204?v=4", "events_url": "https://api.github.com/users/lewtun/events{/privacy}", "followers_url": "https://api.github.com/users/lewtun/followers", "following_url": "https://api.github.com/users/lewtun/following{/other_user}", "gists_url": "https://api.github.com/users/lewtun/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lewtun", "id": 26859204, "login": "lewtun", "node_id": "MDQ6VXNlcjI2ODU5MjA0", "organizations_url": "https://api.github.com/users/lewtun/orgs", "received_events_url": "https://api.github.com/users/lewtun/received_events", "repos_url": "https://api.github.com/users/lewtun/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lewtun/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lewtun/subscriptions", "type": "User", "url": "https://api.github.com/users/lewtun" }
[]
closed
false
null
[]
null
[ "> I like having \"extractive\" in the name to make things explicit. However this creates an inconsistency with transformers.\r\n> \r\n> See\r\n> https://huggingface.co/transformers/task_summary.html#extractive-question-answering\r\n> \r\n> But this is minor IMO and I'm ok with this renaming\r\n\r\nyes i chose this...
2021-05-31T10:04:42Z
2021-05-31T15:57:26Z
2021-05-31T15:57:24Z
MEMBER
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/2429.diff", "html_url": "https://github.com/huggingface/datasets/pull/2429", "merged_at": "2021-05-31T15:57:24Z", "patch_url": "https://github.com/huggingface/datasets/pull/2429.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/2429" }
Following the discussion with @thomwolf in #2255, this PR renames the QA template to distinguish extractive vs abstractive QA. The abstractive template will be added in a future PR.
{ "+1": 1, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 1, "url": "https://api.github.com/repos/huggingface/datasets/issues/2429/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/2429/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/6221
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6221/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6221/comments
https://api.github.com/repos/huggingface/datasets/issues/6221/events
https://github.com/huggingface/datasets/issues/6221
1,884,324,631
I_kwDODunzps5wUIMX
6,221
Support saving datasets with custom formatting
{ "avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4", "events_url": "https://api.github.com/users/mariosasko/events{/privacy}", "followers_url": "https://api.github.com/users/mariosasko/followers", "following_url": "https://api.github.com/users/mariosasko/following{/other_user}", "gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/mariosasko", "id": 47462742, "login": "mariosasko", "node_id": "MDQ6VXNlcjQ3NDYyNzQy", "organizations_url": "https://api.github.com/users/mariosasko/orgs", "received_events_url": "https://api.github.com/users/mariosasko/received_events", "repos_url": "https://api.github.com/users/mariosasko/repos", "site_admin": false, "starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions", "type": "User", "url": "https://api.github.com/users/mariosasko" }
[]
open
false
null
[]
null
[ "Not a fan of pickling this sort of stuff either.\r\nNote that users can also share the code in their dataset documentation." ]
2023-09-06T16:03:32Z
2023-09-06T18:32:07Z
null
CONTRIBUTOR
null
null
null
Requested in https://discuss.huggingface.co/t/using-set-transform-on-a-dataset-leads-to-an-exception/53036. I am not sure if supporting this is the best idea for the following reasons: >For this to work, we would have to pickle a custom transform, which means the transform and the objects it references need to be serializable. Also, deserializing these bytes would make `load_from_disk` unsafe, so I'm not sure this is a good idea. @lhoestq WDYT?
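A hedged workaround sketch given the serialization concerns above: don't persist the transform at all, and simply re-attach it after `load_from_disk`. The `lowercase_text` function and path are illustrative:

```python
from datasets import load_from_disk

def lowercase_text(batch):
    # Example on-the-fly transform; anything set via set_transform is not saved.
    batch["text"] = [t.lower() for t in batch["text"]]
    return batch

ds = load_from_disk("path/to/saved_dataset")
ds.set_transform(lowercase_text)  # re-attach after loading
```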
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/6221/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/6221/timeline
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/4999
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4999/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4999/comments
https://api.github.com/repos/huggingface/datasets/issues/4999/events
https://github.com/huggingface/datasets/pull/4999
1,379,610,030
PR_kwDODunzps4_SQxL
4,999
Add EmptyDatasetError
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq" }
[]
closed
false
null
[]
null
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
2022-09-20T15:28:05Z
2022-09-21T12:23:43Z
2022-09-21T12:21:24Z
MEMBER
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/4999.diff", "html_url": "https://github.com/huggingface/datasets/pull/4999", "merged_at": "2022-09-21T12:21:24Z", "patch_url": "https://github.com/huggingface/datasets/pull/4999.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/4999" }
examples: from the hub: ```python Traceback (most recent call last): File "playground/ttest.py", line 3, in <module> print(load_dataset("lhoestq/empty")) File "/Users/quentinlhoest/Desktop/hf/nlp/src/datasets/load.py", line 1686, in load_dataset **config_kwargs, File "/Users/quentinlhoest/Desktop/hf/nlp/src/datasets/load.py", line 1458, in load_dataset_builder data_files=data_files, File "/Users/quentinlhoest/Desktop/hf/nlp/src/datasets/load.py", line 1171, in dataset_module_factory raise e1 from None File "/Users/quentinlhoest/Desktop/hf/nlp/src/datasets/load.py", line 1162, in dataset_module_factory download_mode=download_mode, File "/Users/quentinlhoest/Desktop/hf/nlp/src/datasets/load.py", line 760, in get_module else get_data_patterns_in_dataset_repository(hfh_dataset_info, self.data_dir) File "/Users/quentinlhoest/Desktop/hf/nlp/src/datasets/data_files.py", line 678, in get_data_patterns_in_dataset_repository ) from None datasets.data_files.EmptyDatasetError: The dataset repository at 'lhoestq/empty' doesn't contain any data file. ``` from local directory: ```python Traceback (most recent call last): File "playground/ttest.py", line 3, in <module> print(load_dataset("playground/empty")) File "/Users/quentinlhoest/Desktop/hf/nlp/src/datasets/load.py", line 1686, in load_dataset **config_kwargs, File "/Users/quentinlhoest/Desktop/hf/nlp/src/datasets/load.py", line 1458, in load_dataset_builder data_files=data_files, File "/Users/quentinlhoest/Desktop/hf/nlp/src/datasets/load.py", line 1107, in dataset_module_factory path, data_dir=data_dir, data_files=data_files, download_mode=download_mode File "/Users/quentinlhoest/Desktop/hf/nlp/src/datasets/load.py", line 625, in get_module else get_data_patterns_locally(base_path) File "/Users/quentinlhoest/Desktop/hf/nlp/src/datasets/data_files.py", line 460, in get_data_patterns_locally raise EmptyDatasetError(f"The directory at {base_path} doesn't contain any data file") from None datasets.data_files.EmptyDatasetError: The directory at playground/empty doesn't contain any data file ``` Close https://github.com/huggingface/datasets/issues/4995
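A minimal usage sketch of the new error class, based on the tracebacks above (which show `EmptyDatasetError` living in `datasets.data_files`):

```python
from datasets import load_dataset
from datasets.data_files import EmptyDatasetError

try:
    ds = load_dataset("lhoestq/empty")
except EmptyDatasetError as err:
    # Raised when the repo or directory contains no data files.
    print(f"No data files found: {err}")
```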
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/4999/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/4999/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/4753
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4753/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4753/comments
https://api.github.com/repos/huggingface/datasets/issues/4753/events
https://github.com/huggingface/datasets/pull/4753
1,319,571,745
PR_kwDODunzps48Ll8G
4,753
Add `language_bcp47` tag
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq" }
[]
closed
false
null
[]
null
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
2022-07-27T13:31:16Z
2022-07-27T14:50:03Z
2022-07-27T14:37:56Z
MEMBER
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/4753.diff", "html_url": "https://github.com/huggingface/datasets/pull/4753", "merged_at": "2022-07-27T14:37:56Z", "patch_url": "https://github.com/huggingface/datasets/pull/4753.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/4753" }
Following (internal) https://github.com/huggingface/moon-landing/pull/3509, we need to move the bcp47 tags to `language_bcp47` and keep the `language` tag for ISO 639 1/2/3 codes. In particular, I made sure that all the tags in `languages` are no longer than 3 characters. I moved the rest to `language_bcp47` and fixed some of them. After this PR is merged, I think we can simplify the language validation in the DatasetMetadata class (and keep it bare-bones just for the tagging app). PS: the CI is failing because of missing content in dataset cards, which is unrelated to this PR
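For illustration, the resulting split of card metadata, shown as a Python dict for brevity (the tag values are examples, not taken from the PR diff):

```python
# ISO 639 codes (at most 3 characters) stay under "language";
# longer BCP-47 tags move to "language_bcp47".
card_metadata = {
    "language": ["en", "fr"],
    "language_bcp47": ["en-US", "fr-CA"],
}
```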
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/4753/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/4753/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/3567
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3567/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3567/comments
https://api.github.com/repos/huggingface/datasets/issues/3567/events
https://github.com/huggingface/datasets/pull/3567
1,100,296,696
PR_kwDODunzps4w2xDl
3,567
Fix push to hub to allow individual split push
{ "avatar_url": "https://avatars.githubusercontent.com/u/24695242?v=4", "events_url": "https://api.github.com/users/thomasw21/events{/privacy}", "followers_url": "https://api.github.com/users/thomasw21/followers", "following_url": "https://api.github.com/users/thomasw21/following{/other_user}", "gists_url": "https://api.github.com/users/thomasw21/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/thomasw21", "id": 24695242, "login": "thomasw21", "node_id": "MDQ6VXNlcjI0Njk1MjQy", "organizations_url": "https://api.github.com/users/thomasw21/orgs", "received_events_url": "https://api.github.com/users/thomasw21/received_events", "repos_url": "https://api.github.com/users/thomasw21/repos", "site_admin": false, "starred_url": "https://api.github.com/users/thomasw21/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/thomasw21/subscriptions", "type": "User", "url": "https://api.github.com/users/thomasw21" }
[]
closed
false
null
[]
null
[ "This has been addressed in https://github.com/huggingface/datasets/pull/4415. Closing." ]
2022-01-12T12:42:58Z
2023-09-24T09:54:19Z
2022-07-27T12:11:11Z
CONTRIBUTOR
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/3567.diff", "html_url": "https://github.com/huggingface/datasets/pull/3567", "merged_at": null, "patch_url": "https://github.com/huggingface/datasets/pull/3567.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/3567" }
# Description of the issue If one pushes a single split to a datasets repo, the dataset files are uploaded and the config is overridden. However, the previous config's splits end up being lost, even though the data files they need are still present. The new flow is the following (sketched below): - query the old config from the repo - update it into a new config (e.g. add/overwrite the new split) - push the new config # Side fixes - `repo_id` in HfFileSystem was wrongly typed. - I've added `indent=2`, as the config becomes much easier to read.
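An illustrative-only sketch of that three-step flow; every helper here is hypothetical and stands in for repo operations, not the `datasets` API:

```python
import json

def fetch_remote_config(repo_id):
    # hypothetical: the real change reads the existing config from the repo
    return {"splits": {"train": "train.parquet"}}

def upload_config(repo_id, config, indent=2):
    # hypothetical: the real change serializes with indent=2 and pushes it back
    print(json.dumps(config, indent=indent))

def push_split(repo_id, split_name, split_file):
    config = fetch_remote_config(repo_id)       # 1. query the old config
    config["splits"][split_name] = split_file   # 2. add/overwrite just this split
    upload_config(repo_id, config)              # 3. push the merged config

push_split("user/repo", "validation", "validation.parquet")
```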
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/3567/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/3567/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/4380
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4380/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4380/comments
https://api.github.com/repos/huggingface/datasets/issues/4380/events
https://github.com/huggingface/datasets/pull/4380
1,243,183,054
PR_kwDODunzps44MUz0
4,380
Pin dill
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" }
[]
closed
false
null
[]
null
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
2022-05-20T13:54:19Z
2022-06-13T10:03:52Z
2022-05-20T16:33:04Z
MEMBER
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/4380.diff", "html_url": "https://github.com/huggingface/datasets/pull/4380", "merged_at": "2022-05-20T16:33:04Z", "patch_url": "https://github.com/huggingface/datasets/pull/4380.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/4380" }
Hotfix #4379. CC: @sgugger
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 1, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 1, "url": "https://api.github.com/repos/huggingface/datasets/issues/4380/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/4380/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/5379
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5379/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5379/comments
https://api.github.com/repos/huggingface/datasets/issues/5379/events
https://github.com/huggingface/datasets/pull/5379
1,504,010,639
PR_kwDODunzps5F1r2k
5,379
feat: depth estimation dataset guide.
{ "avatar_url": "https://avatars.githubusercontent.com/u/22957388?v=4", "events_url": "https://api.github.com/users/sayakpaul/events{/privacy}", "followers_url": "https://api.github.com/users/sayakpaul/followers", "following_url": "https://api.github.com/users/sayakpaul/following{/other_user}", "gists_url": "https://api.github.com/users/sayakpaul/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/sayakpaul", "id": 22957388, "login": "sayakpaul", "node_id": "MDQ6VXNlcjIyOTU3Mzg4", "organizations_url": "https://api.github.com/users/sayakpaul/orgs", "received_events_url": "https://api.github.com/users/sayakpaul/received_events", "repos_url": "https://api.github.com/users/sayakpaul/repos", "site_admin": false, "starred_url": "https://api.github.com/users/sayakpaul/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sayakpaul/subscriptions", "type": "User", "url": "https://api.github.com/users/sayakpaul" }
[]
closed
false
{ "avatar_url": "https://avatars.githubusercontent.com/u/22957388?v=4", "events_url": "https://api.github.com/users/sayakpaul/events{/privacy}", "followers_url": "https://api.github.com/users/sayakpaul/followers", "following_url": "https://api.github.com/users/sayakpaul/following{/other_user}", "gists_url": "https://api.github.com/users/sayakpaul/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/sayakpaul", "id": 22957388, "login": "sayakpaul", "node_id": "MDQ6VXNlcjIyOTU3Mzg4", "organizations_url": "https://api.github.com/users/sayakpaul/orgs", "received_events_url": "https://api.github.com/users/sayakpaul/received_events", "repos_url": "https://api.github.com/users/sayakpaul/repos", "site_admin": false, "starred_url": "https://api.github.com/users/sayakpaul/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sayakpaul/subscriptions", "type": "User", "url": "https://api.github.com/users/sayakpaul" }
[ { "avatar_url": "https://avatars.githubusercontent.com/u/22957388?v=4", "events_url": "https://api.github.com/users/sayakpaul/events{/privacy}", "followers_url": "https://api.github.com/users/sayakpaul/followers", "following_url": "https://api.github.com/users/sayakpaul/following{/other_user}", ...
null
[ "_The documentation is not available anymore as the PR was closed or merged._", "Thanks for the changes, looks good to me!", "@stevhliu I have pushed some quality improvements both in terms of code and content. Would you be able to re-review? ", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==6.0.0...
2022-12-20T05:32:11Z
2023-01-13T12:30:31Z
2023-01-13T12:23:34Z
MEMBER
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/5379.diff", "html_url": "https://github.com/huggingface/datasets/pull/5379", "merged_at": "2023-01-13T12:23:34Z", "patch_url": "https://github.com/huggingface/datasets/pull/5379.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/5379" }
This PR adds a guide for prepping datasets for depth estimation. PR to add documentation images is up here: https://huggingface.co/datasets/huggingface/documentation-images/discussions/22
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 1, "laugh": 0, "rocket": 0, "total_count": 1, "url": "https://api.github.com/repos/huggingface/datasets/issues/5379/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/5379/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/4886
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4886/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4886/comments
https://api.github.com/repos/huggingface/datasets/issues/4886/events
https://github.com/huggingface/datasets/issues/4886
1,349,285,569
I_kwDODunzps5QbHbB
4,886
Loading huggan/CelebA-HQ throws pyarrow.lib.ArrowInvalid
{ "avatar_url": "https://avatars.githubusercontent.com/u/11850255?v=4", "events_url": "https://api.github.com/users/JeanKaddour/events{/privacy}", "followers_url": "https://api.github.com/users/JeanKaddour/followers", "following_url": "https://api.github.com/users/JeanKaddour/following{/other_user}", "gists_url": "https://api.github.com/users/JeanKaddour/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/JeanKaddour", "id": 11850255, "login": "JeanKaddour", "node_id": "MDQ6VXNlcjExODUwMjU1", "organizations_url": "https://api.github.com/users/JeanKaddour/orgs", "received_events_url": "https://api.github.com/users/JeanKaddour/received_events", "repos_url": "https://api.github.com/users/JeanKaddour/repos", "site_admin": false, "starred_url": "https://api.github.com/users/JeanKaddour/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/JeanKaddour/subscriptions", "type": "User", "url": "https://api.github.com/users/JeanKaddour" }
[ { "color": "d73a4a", "default": true, "description": "Something isn't working", "id": 1935892857, "name": "bug", "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug" } ]
open
false
null
[]
null
[ "Hi! IIRC one of the files in this dataset is corrupted due to https://github.com/huggingface/datasets/pull/4081 (fixed now).\r\n\r\n@NielsRogge Could you please re-generate and re-push this dataset (or I can do it if you share the generation script)?", "Could you put something in place to catch these problems? ...
2022-08-24T11:24:21Z
2023-02-02T02:40:53Z
null
NONE
null
null
null
## Describe the bug Loading huggan/CelebA-HQ throws pyarrow.lib.ArrowInvalid ## Steps to reproduce the bug ```python from datasets import load_dataset dataset = load_dataset('huggan/CelebA-HQ') ``` ## Expected results See https://colab.research.google.com/drive/141LJCcM2XyqprPY83nIQ-Zk3BbxWeahq?usp=sharing#scrollTo=N3ml_7f8kzDd ## Actual results ``` File "/home/jean/projects/cold_diffusion/celebA.py", line 4, in <module> dataset = load_dataset('huggan/CelebA-HQ') File "/home/jean/miniconda3/envs/seq/lib/python3.10/site-packages/datasets/load.py", line 1793, in load_dataset builder_instance.download_and_prepare( File "/home/jean/miniconda3/envs/seq/lib/python3.10/site-packages/datasets/builder.py", line 704, in download_and_prepare self._download_and_prepare( File "/home/jean/miniconda3/envs/seq/lib/python3.10/site-packages/datasets/builder.py", line 793, in _download_and_prepare self._prepare_split(split_generator, **prepare_split_kwargs) File "/home/jean/miniconda3/envs/seq/lib/python3.10/site-packages/datasets/builder.py", line 1274, in _prepare_split for key, table in logging.tqdm( File "/home/jean/miniconda3/envs/seq/lib/python3.10/site-packages/tqdm/std.py", line 1195, in __iter__ for obj in iterable: File "/home/jean/miniconda3/envs/seq/lib/python3.10/site-packages/datasets/packaged_modules/parquet/parquet.py", line 67, in _generate_tables parquet_file = pq.ParquetFile(f) File "/home/jean/miniconda3/envs/seq/lib/python3.10/site-packages/pyarrow/parquet/__init__.py", line 286, in __init__ self.reader.open( File "pyarrow/_parquet.pyx", line 1227, in pyarrow._parquet.ParquetReader.open File "pyarrow/error.pxi", line 100, in pyarrow.lib.check_status pyarrow.lib.ArrowInvalid: Parquet magic bytes not found in footer. Either the file is corrupted or this is not a parquet file. ``` ## Environment info - `datasets` version: datasets-2.4.1.dev0 - Platform: Ubuntu 18.04 - Python version: 3.10 - PyArrow version: pyarrow 9.0.0
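A minimal sketch of how one might locate the corrupted shard behind an error like the one above. It assumes the parquet files are already somewhere on disk; the `/path/to/cache` glob is a placeholder, not the actual layout of the `datasets` cache.

```python
import glob

import pyarrow.parquet as pq
from pyarrow.lib import ArrowInvalid

# Placeholder path: point this at wherever the parquet shards were downloaded.
for path in sorted(glob.glob("/path/to/cache/**/*.parquet", recursive=True)):
    try:
        pq.ParquetFile(path)  # opening the file validates the parquet footer
    except ArrowInvalid as err:
        print(f"corrupted shard: {path} ({err})")
```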
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/4886/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/4886/timeline
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/2932
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2932/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2932/comments
https://api.github.com/repos/huggingface/datasets/issues/2932/events
https://github.com/huggingface/datasets/issues/2932
999,317,750
I_kwDODunzps47kGD2
2,932
Conda build fails
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" }
[ { "color": "d73a4a", "default": true, "description": "Something isn't working", "id": 1935892857, "name": "bug", "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug" } ]
closed
false
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq" }
[ { "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists...
null
[ "Why 1.9 ?\r\n\r\nhttps://anaconda.org/HuggingFace/datasets currently says 1.11", "Alright I added 1.12.0 and 1.12.1 and fixed the conda build #2952 " ]
2021-09-17T12:49:22Z
2021-09-21T15:31:10Z
2021-09-21T15:31:10Z
MEMBER
null
null
null
## Describe the bug The current `datasets` version on conda is 1.9 instead of 1.12: the build of the conda package is failing.
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/2932/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/2932/timeline
null
completed
false
https://api.github.com/repos/huggingface/datasets/issues/4424
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4424/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4424/comments
https://api.github.com/repos/huggingface/datasets/issues/4424/events
https://github.com/huggingface/datasets/pull/4424
1,253,542,488
PR_kwDODunzps44uZBD
4,424
Fix DuplicatedKeysError in timit_asr dataset
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" }
[]
closed
false
null
[]
null
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
2022-05-31T08:47:45Z
2022-05-31T13:50:50Z
2022-05-31T13:42:31Z
MEMBER
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/4424.diff", "html_url": "https://github.com/huggingface/datasets/pull/4424", "merged_at": "2022-05-31T13:42:31Z", "patch_url": "https://github.com/huggingface/datasets/pull/4424.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/4424" }
Fix #4422.
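For context, a generic sketch of the bug class this PR addresses, not the actual timit_asr diff: `DuplicatedKeysError` is raised when a loading script yields the same key for two examples, and the usual fix is a key that is unique per example, such as a running index or the file path. The names below are hypothetical.

```python
# Hypothetical illustration: `audio_paths` and the example schema are made up.
def _generate_examples(audio_paths):
    for idx, path in enumerate(sorted(audio_paths)):
        # `idx` (or `path` itself) is unique per example, unlike ids that
        # several utterances may share, so no DuplicatedKeysError is raised.
        yield idx, {"file": path}
```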
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/4424/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/4424/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/4236
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4236/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4236/comments
https://api.github.com/repos/huggingface/datasets/issues/4236/events
https://github.com/huggingface/datasets/pull/4236
1,217,115,691
PR_kwDODunzps423MOc
4,236
Replace data URL in big_patent dataset and support streaming
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" }
[]
closed
false
null
[]
null
[ "_The documentation is not available anymore as the PR was closed or merged._", "I first uploaded the data files to the Hub: I think it is a good option because we have git lfs to track versions and changes. Moreover people will be able to make PRs to propose updates on the data files.\r\n- I would have preferred...
2022-04-27T10:01:13Z
2022-06-10T08:10:55Z
2022-05-02T18:21:15Z
MEMBER
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/4236.diff", "html_url": "https://github.com/huggingface/datasets/pull/4236", "merged_at": "2022-05-02T18:21:15Z", "patch_url": "https://github.com/huggingface/datasets/pull/4236.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/4236" }
This PR replaces the Google Drive URL with our Hub one, now that the data owners have approved hosting their data on the Hub. Moreover, this PR makes the dataset streamable. Fix #4217.
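With the dataset streamable, it can be loaded lazily without materializing everything on disk first. A minimal sketch; the "all" config name is an assumption here (the dataset also exposes per-category configs).

```python
from datasets import load_dataset

# streaming=True iterates over the Hub-hosted files instead of downloading
# the full dataset up front.
ds = load_dataset("big_patent", "all", split="train", streaming=True)
print(next(iter(ds)).keys())
```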
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/4236/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/4236/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/3379
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3379/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3379/comments
https://api.github.com/repos/huggingface/datasets/issues/3379/events
https://github.com/huggingface/datasets/pull/3379
1,071,079,146
PR_kwDODunzps4vYr7K
3,379
iter_archive on zipfiles with better compression type check
{ "avatar_url": "https://avatars.githubusercontent.com/u/56029953?v=4", "events_url": "https://api.github.com/users/Mehdi2402/events{/privacy}", "followers_url": "https://api.github.com/users/Mehdi2402/followers", "following_url": "https://api.github.com/users/Mehdi2402/following{/other_user}", "gists_url": "https://api.github.com/users/Mehdi2402/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/Mehdi2402", "id": 56029953, "login": "Mehdi2402", "node_id": "MDQ6VXNlcjU2MDI5OTUz", "organizations_url": "https://api.github.com/users/Mehdi2402/orgs", "received_events_url": "https://api.github.com/users/Mehdi2402/received_events", "repos_url": "https://api.github.com/users/Mehdi2402/repos", "site_admin": false, "starred_url": "https://api.github.com/users/Mehdi2402/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Mehdi2402/subscriptions", "type": "User", "url": "https://api.github.com/users/Mehdi2402" }
[]
closed
false
null
[]
null
[ "Hello @lhoestq, thank you for your answer.\r\n\r\nI don't use pytest a lot so I think I might need some help on it :) but I tried some tests for `streaming_download_manager.py` only. I don't know how to test `download_manager.py` since we need to use local files.\r\n\r\n# Comments : \r\n* In **download_manager.py*...
2021-12-04T01:04:48Z
2023-01-24T13:00:19Z
2023-01-24T12:53:08Z
CONTRIBUTOR
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/3379.diff", "html_url": "https://github.com/huggingface/datasets/pull/3379", "merged_at": "2023-01-24T12:53:08Z", "patch_url": "https://github.com/huggingface/datasets/pull/3379.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/3379" }
Hello @lhoestq, thank you for your detailed answer on the previous PR! I made this new PR because I misused git on the previous one #3347. Related issue #3272. # Comments : * For the extension check I used the `_get_extraction_protocol` function in **download_manager.py** with a slight change and called it `_get_extraction_protocol_local`: **I removed this part :** ```python elif path.endswith(".tar.gz") or path.endswith(".tgz"): raise NotImplementedError( f"Extraction protocol for TAR archives like '{urlpath}' is not implemented in streaming mode. Please use `dl_manager.iter_archive` instead." ) ``` **And also changed :** ```diff - extension = path.split(".")[-1] + extension = "tar" if path.endswith(".tar.gz") else path.split(".")[-1] ``` The reason for this is that an archive like **.tar.gz** would otherwise be considered a **.gz**, which is handled with **zipfile**, though **tar.gz** can only be opened using **tarfile**. Please tell me if there's anything to change. # Tasks : - [x] download_manager.py - [x] streaming_download_manager.py
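For clarity, a self-contained sketch of the extension check described in the diff above (the real helper lives in the download manager modules and does more than this):

```python
def _get_extension(path: str) -> str:
    # ".tar.gz" must resolve to "tar": a gzip-compressed tarball can only be
    # opened with tarfile, while a bare ".gz" is routed to the gzip handler.
    return "tar" if path.endswith(".tar.gz") else path.split(".")[-1]

assert _get_extension("corpus.tar.gz") == "tar"
assert _get_extension("corpus.gz") == "gz"
```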
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/3379/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/3379/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/2497
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2497/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2497/comments
https://api.github.com/repos/huggingface/datasets/issues/2497/events
https://github.com/huggingface/datasets/pull/2497
920,250,382
MDExOlB1bGxSZXF1ZXN0NjY5NDI3OTU3
2,497
Use default cast for sliced list arrays if pyarrow >= 4
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" }
[]
closed
false
null
[]
{ "closed_at": "2021-07-09T05:50:07Z", "closed_issues": 12, "created_at": "2021-05-31T16:13:06Z", "creator": { "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" }, "description": "Next minor release", "due_on": "2021-07-08T07:00:00Z", "html_url": "https://github.com/huggingface/datasets/milestone/5", "id": 6808903, "labels_url": "https://api.github.com/repos/huggingface/datasets/milestones/5/labels", "node_id": "MDk6TWlsZXN0b25lNjgwODkwMw==", "number": 5, "open_issues": 0, "state": "closed", "title": "1.9", "updated_at": "2021-07-12T14:12:00Z", "url": "https://api.github.com/repos/huggingface/datasets/milestones/5" }
[ "I believe we don't use PyArrow >= 4.0.0 because of some segfault issues:\r\nhttps://github.com/huggingface/datasets/blob/1206ffbcd42dda415f6bfb3d5040708f50413c93/setup.py#L78\r\nCan you confirm @lhoestq ?", "@SBrandeis pyarrow version 4.0.1 has fixed that issue: #2489 😉 " ]
2021-06-14T10:02:47Z
2021-06-15T18:06:18Z
2021-06-14T14:24:37Z
MEMBER
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/2497.diff", "html_url": "https://github.com/huggingface/datasets/pull/2497", "merged_at": "2021-06-14T14:24:37Z", "patch_url": "https://github.com/huggingface/datasets/pull/2497.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/2497" }
As of PyArrow version 4, casting sliced list arrays is supported. This PR uses the default PyArrow cast in Datasets for sliced list arrays if the PyArrow version is >= 4. Related to PRs #2461 and #2490. cc: @lhoestq, @abhi1thakur, @SBrandeis
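A small sketch of the behavior this relies on: slicing a list array produces an array with a non-zero offset, and with PyArrow >= 4 the default `cast` handles that offset directly.

```python
import pyarrow as pa

arr = pa.array([[1, 2], [3], [4, 5, 6]])
sliced = arr.slice(1)  # a ListArray with a non-zero offset

# On pyarrow >= 4 this cast works out of the box; older versions needed the
# custom code path that this PR bypasses.
casted = sliced.cast(pa.list_(pa.int32()))
print(casted.type)  # list<item: int32>
```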
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/2497/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/2497/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/3397
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3397/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3397/comments
https://api.github.com/repos/huggingface/datasets/issues/3397/events
https://github.com/huggingface/datasets/pull/3397
1,073,502,444
PR_kwDODunzps4vgh1U
3,397
add BNL newspapers
{ "avatar_url": "https://avatars.githubusercontent.com/u/8995957?v=4", "events_url": "https://api.github.com/users/davanstrien/events{/privacy}", "followers_url": "https://api.github.com/users/davanstrien/followers", "following_url": "https://api.github.com/users/davanstrien/following{/other_user}", "gists_url": "https://api.github.com/users/davanstrien/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/davanstrien", "id": 8995957, "login": "davanstrien", "node_id": "MDQ6VXNlcjg5OTU5NTc=", "organizations_url": "https://api.github.com/users/davanstrien/orgs", "received_events_url": "https://api.github.com/users/davanstrien/received_events", "repos_url": "https://api.github.com/users/davanstrien/repos", "site_admin": false, "starred_url": "https://api.github.com/users/davanstrien/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/davanstrien/subscriptions", "type": "User", "url": "https://api.github.com/users/davanstrien" }
[]
closed
false
null
[]
null
[ "\r\n> Also, maybe calling the dataset as \"bnl_historical_newspapers\" and setting \"processed\" as one configuration name?\r\n\r\nThis sounds like a good idea but my only question around this is how easy it would be to use the same approach for processing the other newspaper collections [https://data.bnl.lu/data/...
2021-12-07T15:43:21Z
2022-01-17T18:35:34Z
2022-01-17T18:35:34Z
MEMBER
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/3397.diff", "html_url": "https://github.com/huggingface/datasets/pull/3397", "merged_at": "2022-01-17T18:35:34Z", "patch_url": "https://github.com/huggingface/datasets/pull/3397.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/3397" }
This pull request adds the BNL's [processed newspaper collections](https://data.bnl.lu/data/historical-newspapers/) as a dataset. This is partly done to support BigScience; see https://github.com/bigscience-workshop/data_tooling/issues/192. The dataset card is sparser than I would like, but I plan to make a separate pull request to make it more complete at a later date. I had to add the `dummy_data` manually, but I believe I've done this correctly (the tests pass locally).
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/3397/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/3397/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/3007
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3007/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3007/comments
https://api.github.com/repos/huggingface/datasets/issues/3007/events
https://github.com/huggingface/datasets/pull/3007
1,014,775,450
PR_kwDODunzps4sns-n
3,007
Correct a typo
{ "avatar_url": "https://avatars.githubusercontent.com/u/35955430?v=4", "events_url": "https://api.github.com/users/Yann21/events{/privacy}", "followers_url": "https://api.github.com/users/Yann21/followers", "following_url": "https://api.github.com/users/Yann21/following{/other_user}", "gists_url": "https://api.github.com/users/Yann21/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/Yann21", "id": 35955430, "login": "Yann21", "node_id": "MDQ6VXNlcjM1OTU1NDMw", "organizations_url": "https://api.github.com/users/Yann21/orgs", "received_events_url": "https://api.github.com/users/Yann21/received_events", "repos_url": "https://api.github.com/users/Yann21/repos", "site_admin": false, "starred_url": "https://api.github.com/users/Yann21/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Yann21/subscriptions", "type": "User", "url": "https://api.github.com/users/Yann21" }
[]
closed
false
null
[]
null
[]
2021-10-04T06:15:47Z
2021-10-04T09:27:57Z
2021-10-04T09:27:57Z
CONTRIBUTOR
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/3007.diff", "html_url": "https://github.com/huggingface/datasets/pull/3007", "merged_at": "2021-10-04T09:27:57Z", "patch_url": "https://github.com/huggingface/datasets/pull/3007.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/3007" }
null
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/3007/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/3007/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/6256
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6256/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6256/comments
https://api.github.com/repos/huggingface/datasets/issues/6256/events
https://github.com/huggingface/datasets/issues/6256
1,910,275,199
I_kwDODunzps5x3Hx_
6,256
load_dataset() function's cache_dir does not seems to work
{ "avatar_url": "https://avatars.githubusercontent.com/u/171831?v=4", "events_url": "https://api.github.com/users/andyzhu/events{/privacy}", "followers_url": "https://api.github.com/users/andyzhu/followers", "following_url": "https://api.github.com/users/andyzhu/following{/other_user}", "gists_url": "https://api.github.com/users/andyzhu/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/andyzhu", "id": 171831, "login": "andyzhu", "node_id": "MDQ6VXNlcjE3MTgzMQ==", "organizations_url": "https://api.github.com/users/andyzhu/orgs", "received_events_url": "https://api.github.com/users/andyzhu/received_events", "repos_url": "https://api.github.com/users/andyzhu/repos", "site_admin": false, "starred_url": "https://api.github.com/users/andyzhu/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/andyzhu/subscriptions", "type": "User", "url": "https://api.github.com/users/andyzhu" }
[]
open
false
null
[]
null
[ "Can you share the error message?\r\n\r\nAlso, it would help if you could check whether `huggingface_hub`'s download behaves the same:\r\n```python\r\nfrom huggingface_hub import snapshot_download\r\nsnapshot_download(\"trec\", repo_type=\"dataset\", cache_dir='/path/to/my/dir)\r\n```\r\n\r\nIn the next major relea...
2023-09-24T15:34:06Z
2023-09-27T13:40:45Z
null
NONE
null
null
null
### Describe the bug datasets version: 2.14.5. When trying to run the following command: trec = load_dataset('trec', split='train[:1000]', cache_dir='/path/to/my/dir') I keep getting an error saying the command does not have permission to the default cache directory on my MacBook Pro machine. It seems the cache_dir parameter cannot change the dataset saving directory from the default; whatever is explained in https://huggingface.co/docs/datasets/cache does not seem to work. ### Steps to reproduce the bug Run the command above: the cache_dir argument is ignored and the loader still tries to use the default cache directory. ### Expected behavior The dataset should be saved to the directory that cache_dir points to. ### Environment info datasets version: 2.14.5; macOS Ventura 13.4.1 (c)
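For reference, the two ways the cache location is supposed to be controlled. `/Users/me/hf_cache` is a placeholder path, and the environment variable must be set before `datasets` is imported to take effect.

```python
import os

# Global override via environment variable (set before importing datasets):
os.environ["HF_DATASETS_CACHE"] = "/Users/me/hf_cache"

from datasets import load_dataset

# Per-call override via the cache_dir argument, as in the report above:
trec = load_dataset("trec", split="train[:1000]", cache_dir="/Users/me/hf_cache")
```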
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/6256/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/6256/timeline
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/3442
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3442/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3442/comments
https://api.github.com/repos/huggingface/datasets/issues/3442/events
https://github.com/huggingface/datasets/pull/3442
1,081,862,747
PR_kwDODunzps4v7oBZ
3,442
Extend text to support yielding lines, paragraphs or documents
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" }
[]
closed
false
null
[]
null
[ "The parameter can also be named `split_by` with values \"line\", \"paragraph\" or \"document\" (no 's' at the end)", "> The parameter can also be named `split_by` with values \"line\", \"paragraph\" or \"document\" (no 's' at the end)\r\n\r\n@lhoestq @mariosasko I would avoid the term `split` in this context and...
2021-12-16T07:33:17Z
2021-12-20T16:59:10Z
2021-12-20T16:39:18Z
MEMBER
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/3442.diff", "html_url": "https://github.com/huggingface/datasets/pull/3442", "merged_at": "2021-12-20T16:39:18Z", "patch_url": "https://github.com/huggingface/datasets/pull/3442.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/3442" }
Add a `config.row` option to the `text` module to allow yielding lines (the default, current behavior), paragraphs, or documents. Feel free to comment on the name of the config parameter `row`: - Currently, the docs state that datasets are made of rows and columns - Other names I considered: `example`, `item`
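A sketch of the paragraph-yielding semantics being proposed, not the actual `text` module implementation: a paragraph is a run of non-empty lines separated by blank lines, while a document would simply be the whole file.

```python
def iter_paragraphs(path, encoding="utf-8"):
    # Yields runs of non-empty lines; blank lines act as paragraph separators.
    buffer = []
    with open(path, encoding=encoding) as f:
        for line in f:
            if line.strip():
                buffer.append(line)
            elif buffer:
                yield "".join(buffer)
                buffer = []
    if buffer:
        yield "".join(buffer)
```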
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/3442/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/3442/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/3275
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3275/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3275/comments
https://api.github.com/repos/huggingface/datasets/issues/3275/events
https://github.com/huggingface/datasets/pull/3275
1,053,698,898
PR_kwDODunzps4uiN9t
3,275
Force data files extraction if download_mode='force_redownload'
{ "avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4", "events_url": "https://api.github.com/users/mariosasko/events{/privacy}", "followers_url": "https://api.github.com/users/mariosasko/followers", "following_url": "https://api.github.com/users/mariosasko/following{/other_user}", "gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/mariosasko", "id": 47462742, "login": "mariosasko", "node_id": "MDQ6VXNlcjQ3NDYyNzQy", "organizations_url": "https://api.github.com/users/mariosasko/orgs", "received_events_url": "https://api.github.com/users/mariosasko/received_events", "repos_url": "https://api.github.com/users/mariosasko/repos", "site_admin": false, "starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions", "type": "User", "url": "https://api.github.com/users/mariosasko" }
[]
closed
false
null
[]
null
[]
2021-11-15T14:00:24Z
2021-11-15T14:45:23Z
2021-11-15T14:45:23Z
CONTRIBUTOR
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/3275.diff", "html_url": "https://github.com/huggingface/datasets/pull/3275", "merged_at": "2021-11-15T14:45:23Z", "patch_url": "https://github.com/huggingface/datasets/pull/3275.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/3275" }
Avoids weird issues when redownloading a dataset due to cached data not being fully updated. With this change, issues #3122 and https://github.com/huggingface/datasets/issues/2956 can be worked around (though not fully fixed) as follows: ```python dset = load_dataset(..., download_mode="force_redownload") ```
{ "+1": 1, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 1, "url": "https://api.github.com/repos/huggingface/datasets/issues/3275/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/3275/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/2134
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2134/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2134/comments
https://api.github.com/repos/huggingface/datasets/issues/2134/events
https://github.com/huggingface/datasets/issues/2134
843,242,849
MDU6SXNzdWU4NDMyNDI4NDk=
2,134
Saving large in-memory datasets with save_to_disk crashes because of pickling
{ "avatar_url": "https://avatars.githubusercontent.com/u/5815801?v=4", "events_url": "https://api.github.com/users/prokopCerny/events{/privacy}", "followers_url": "https://api.github.com/users/prokopCerny/followers", "following_url": "https://api.github.com/users/prokopCerny/following{/other_user}", "gists_url": "https://api.github.com/users/prokopCerny/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/prokopCerny", "id": 5815801, "login": "prokopCerny", "node_id": "MDQ6VXNlcjU4MTU4MDE=", "organizations_url": "https://api.github.com/users/prokopCerny/orgs", "received_events_url": "https://api.github.com/users/prokopCerny/received_events", "repos_url": "https://api.github.com/users/prokopCerny/repos", "site_admin": false, "starred_url": "https://api.github.com/users/prokopCerny/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/prokopCerny/subscriptions", "type": "User", "url": "https://api.github.com/users/prokopCerny" }
[ { "color": "d73a4a", "default": true, "description": "Something isn't working", "id": 1935892857, "name": "bug", "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug" } ]
closed
false
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq" }
[ { "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists...
null
[ "Hi !\r\nIndeed `save_to_disk` doesn't call pickle anymore. Though the `OverflowError` can still appear for in-memory datasets bigger than 4GB. This happens when doing this for example:\r\n```python\r\nimport pyarrow as pa\r\nimport pickle\r\n\r\narr = pa.array([0] * ((4 * 8 << 30) // 64))\r\ntable = pa.Table.from_...
2021-03-29T10:43:15Z
2021-05-03T17:59:21Z
2021-05-03T17:59:21Z
NONE
null
null
null
Using Datasets 1.5.0 on Python 3.7. Recently I've been working on medium to large datasets (pretokenized raw text sizes from a few gigabytes to low tens of gigabytes), and have found that several preprocessing steps are massively faster when done in memory. Since I have the ability to requisition a lot of RAM, I decided to do these steps completely outside the datasets library. So my workflow is to run several .map() calls on the dataset object; then, for the operation that is faster in memory, to extract the necessary columns from the dataset and drop it whole, do the transformation in memory, and create a fresh Dataset object using .from_dict() or another method. When I then try to call save_to_disk(path) on the dataset, it crashes because of pickling, which appears to be due to the use of an old pickle protocol that doesn't support objects larger than 4 GiB. ``` Traceback (most recent call last): File "./tokenize_and_chunkify_in_memory.py", line 80, in <module> main() File "./tokenize_and_chunkify_in_memory.py", line 75, in main tokenize_and_chunkify(config) File "./tokenize_and_chunkify_in_memory.py", line 60, in tokenize_and_chunkify contexts_dataset.save_to_disk(chunked_path) File "/home/cernypro/dev/envs/huggingface_gpu/lib/python3.7/site-packages/datasets/arrow_dataset.py", line 457, in save_to_disk self = pickle.loads(pickle.dumps(self)) OverflowError: cannot serialize a bytes object larger than 4 GiB ``` From what I've seen, this issue may already be fixed, as the line `self = pickle.loads(pickle.dumps(self))` does not appear to be present in the current state of the repository. To save these datasets to disk, I've resorted to calling .map() over them with `function=None` and specifying the .arrow cache file, and then creating a new dataset using the .from_file() method, which I can then safely save to disk. An additional issue when working with these large in-memory datasets arises when using multiprocessing, and is again related to pickling. I've tried to speed up the mapping with function=None by setting num_proc to the available CPU count, and I again get issues with transferring the dataset, with the following traceback. I am not sure if I should open a separate issue for that.
``` Traceback (most recent call last): File "./tokenize_and_chunkify_in_memory.py", line 94, in <module> main() File "./tokenize_and_chunkify_in_memory.py", line 89, in main tokenize_and_chunkify(config) File "./tokenize_and_chunkify_in_memory.py", line 67, in tokenize_and_chunkify contexts_dataset.map(function=None, cache_file_name=str(output_dir_path / "tmp.arrow"), writer_batch_size=50000, num_proc=config.threads) File "/home/cernypro/dev/envs/huggingface_gpu/lib/python3.7/site-packages/datasets/arrow_dataset.py", line 1485, in map transformed_shards = [r.get() for r in results] File "/home/cernypro/dev/envs/huggingface_gpu/lib/python3.7/site-packages/datasets/arrow_dataset.py", line 1485, in <listcomp> transformed_shards = [r.get() for r in results] File "/home/cernypro/dev/envs/huggingface_gpu/lib/python3.7/site-packages/multiprocess/pool.py", line 657, in get raise self._value File "/home/cernypro/dev/envs/huggingface_gpu/lib/python3.7/site-packages/multiprocess/pool.py", line 431, in _handle_tasks put(task) File "/home/cernypro/dev/envs/huggingface_gpu/lib/python3.7/site-packages/multiprocess/connection.py", line 209, in send self._send_bytes(_ForkingPickler.dumps(obj)) File "/home/cernypro/dev/envs/huggingface_gpu/lib/python3.7/site-packages/multiprocess/reduction.py", line 54, in dumps cls(buf, protocol, *args, **kwds).dump(obj) File "/home/cernypro/dev/envs/huggingface_gpu/lib/python3.7/site-packages/dill/_dill.py", line 454, in dump StockPickler.dump(self, obj) File "/mnt/appl/software/Python/3.7.4-GCCcore-8.3.0/lib/python3.7/pickle.py", line 437, in dump self.save(obj) File "/mnt/appl/software/Python/3.7.4-GCCcore-8.3.0/lib/python3.7/pickle.py", line 504, in save f(self, obj) # Call unbound method with explicit self File "/mnt/appl/software/Python/3.7.4-GCCcore-8.3.0/lib/python3.7/pickle.py", line 789, in save_tuple save(element) File "/mnt/appl/software/Python/3.7.4-GCCcore-8.3.0/lib/python3.7/pickle.py", line 504, in save f(self, obj) # Call unbound method with explicit self File "/home/cernypro/dev/envs/huggingface_gpu/lib/python3.7/site-packages/dill/_dill.py", line 941, in save_module_dict StockPickler.save_dict(pickler, obj) File "/mnt/appl/software/Python/3.7.4-GCCcore-8.3.0/lib/python3.7/pickle.py", line 859, in save_dict self._batch_setitems(obj.items()) File "/mnt/appl/software/Python/3.7.4-GCCcore-8.3.0/lib/python3.7/pickle.py", line 885, in _batch_setitems save(v) File "/mnt/appl/software/Python/3.7.4-GCCcore-8.3.0/lib/python3.7/pickle.py", line 549, in save self.save_reduce(obj=obj, *rv) File "/mnt/appl/software/Python/3.7.4-GCCcore-8.3.0/lib/python3.7/pickle.py", line 662, in save_reduce save(state) File "/mnt/appl/software/Python/3.7.4-GCCcore-8.3.0/lib/python3.7/pickle.py", line 504, in save f(self, obj) # Call unbound method with explicit self File "/home/cernypro/dev/envs/huggingface_gpu/lib/python3.7/site-packages/dill/_dill.py", line 941, in save_module_dict StockPickler.save_dict(pickler, obj) File "/mnt/appl/software/Python/3.7.4-GCCcore-8.3.0/lib/python3.7/pickle.py", line 859, in save_dict self._batch_setitems(obj.items()) File "/mnt/appl/software/Python/3.7.4-GCCcore-8.3.0/lib/python3.7/pickle.py", line 885, in _batch_setitems save(v) File "/mnt/appl/software/Python/3.7.4-GCCcore-8.3.0/lib/python3.7/pickle.py", line 549, in save self.save_reduce(obj=obj, *rv) File "/mnt/appl/software/Python/3.7.4-GCCcore-8.3.0/lib/python3.7/pickle.py", line 638, in save_reduce save(args) File "/mnt/appl/software/Python/3.7.4-GCCcore-8.3.0/lib/python3.7/pickle.py", line 
504, in save f(self, obj) # Call unbound method with explicit self File "/mnt/appl/software/Python/3.7.4-GCCcore-8.3.0/lib/python3.7/pickle.py", line 774, in save_tuple save(element) File "/mnt/appl/software/Python/3.7.4-GCCcore-8.3.0/lib/python3.7/pickle.py", line 504, in save f(self, obj) # Call unbound method with explicit self File "/mnt/appl/software/Python/3.7.4-GCCcore-8.3.0/lib/python3.7/pickle.py", line 819, in save_list self._batch_appends(obj) File "/mnt/appl/software/Python/3.7.4-GCCcore-8.3.0/lib/python3.7/pickle.py", line 843, in _batch_appends save(x) File "/mnt/appl/software/Python/3.7.4-GCCcore-8.3.0/lib/python3.7/pickle.py", line 549, in save self.save_reduce(obj=obj, *rv) File "/mnt/appl/software/Python/3.7.4-GCCcore-8.3.0/lib/python3.7/pickle.py", line 638, in save_reduce save(args) File "/mnt/appl/software/Python/3.7.4-GCCcore-8.3.0/lib/python3.7/pickle.py", line 504, in save f(self, obj) # Call unbound method with explicit self File "/mnt/appl/software/Python/3.7.4-GCCcore-8.3.0/lib/python3.7/pickle.py", line 774, in save_tuple save(element) File "/mnt/appl/software/Python/3.7.4-GCCcore-8.3.0/lib/python3.7/pickle.py", line 504, in save f(self, obj) # Call unbound method with explicit self File "/mnt/appl/software/Python/3.7.4-GCCcore-8.3.0/lib/python3.7/pickle.py", line 819, in save_list self._batch_appends(obj) File "/mnt/appl/software/Python/3.7.4-GCCcore-8.3.0/lib/python3.7/pickle.py", line 846, in _batch_appends save(tmp[0]) File "/mnt/appl/software/Python/3.7.4-GCCcore-8.3.0/lib/python3.7/pickle.py", line 549, in save self.save_reduce(obj=obj, *rv) File "/mnt/appl/software/Python/3.7.4-GCCcore-8.3.0/lib/python3.7/pickle.py", line 638, in save_reduce save(args) File "/mnt/appl/software/Python/3.7.4-GCCcore-8.3.0/lib/python3.7/pickle.py", line 504, in save f(self, obj) # Call unbound method with explicit self File "/mnt/appl/software/Python/3.7.4-GCCcore-8.3.0/lib/python3.7/pickle.py", line 774, in save_tuple save(element) File "/mnt/appl/software/Python/3.7.4-GCCcore-8.3.0/lib/python3.7/pickle.py", line 504, in save f(self, obj) # Call unbound method with explicit self File "/mnt/appl/software/Python/3.7.4-GCCcore-8.3.0/lib/python3.7/pickle.py", line 789, in save_tuple save(element) File "/mnt/appl/software/Python/3.7.4-GCCcore-8.3.0/lib/python3.7/pickle.py", line 504, in save f(self, obj) # Call unbound method with explicit self File "/mnt/appl/software/Python/3.7.4-GCCcore-8.3.0/lib/python3.7/pickle.py", line 819, in save_list self._batch_appends(obj) File "/mnt/appl/software/Python/3.7.4-GCCcore-8.3.0/lib/python3.7/pickle.py", line 846, in _batch_appends save(tmp[0]) File "/mnt/appl/software/Python/3.7.4-GCCcore-8.3.0/lib/python3.7/pickle.py", line 504, in save f(self, obj) # Call unbound method with explicit self File "/mnt/appl/software/Python/3.7.4-GCCcore-8.3.0/lib/python3.7/pickle.py", line 789, in save_tuple save(element) File "/mnt/appl/software/Python/3.7.4-GCCcore-8.3.0/lib/python3.7/pickle.py", line 504, in save f(self, obj) # Call unbound method with explicit self File "/mnt/appl/software/Python/3.7.4-GCCcore-8.3.0/lib/python3.7/pickle.py", line 819, in save_list self._batch_appends(obj) File "/mnt/appl/software/Python/3.7.4-GCCcore-8.3.0/lib/python3.7/pickle.py", line 846, in _batch_appends save(tmp[0]) File "/mnt/appl/software/Python/3.7.4-GCCcore-8.3.0/lib/python3.7/pickle.py", line 504, in save f(self, obj) # Call unbound method with explicit self File "/mnt/appl/software/Python/3.7.4-GCCcore-8.3.0/lib/python3.7/pickle.py", line 789, in 
save_tuple save(element) File "/mnt/appl/software/Python/3.7.4-GCCcore-8.3.0/lib/python3.7/pickle.py", line 504, in save f(self, obj) # Call unbound method with explicit self File "/mnt/appl/software/Python/3.7.4-GCCcore-8.3.0/lib/python3.7/pickle.py", line 819, in save_list self._batch_appends(obj) File "/mnt/appl/software/Python/3.7.4-GCCcore-8.3.0/lib/python3.7/pickle.py", line 843, in _batch_appends save(x) File "/mnt/appl/software/Python/3.7.4-GCCcore-8.3.0/lib/python3.7/pickle.py", line 549, in save self.save_reduce(obj=obj, *rv) File "/mnt/appl/software/Python/3.7.4-GCCcore-8.3.0/lib/python3.7/pickle.py", line 638, in save_reduce save(args) File "/mnt/appl/software/Python/3.7.4-GCCcore-8.3.0/lib/python3.7/pickle.py", line 504, in save f(self, obj) # Call unbound method with explicit self File "/mnt/appl/software/Python/3.7.4-GCCcore-8.3.0/lib/python3.7/pickle.py", line 774, in save_tuple save(element) File "/mnt/appl/software/Python/3.7.4-GCCcore-8.3.0/lib/python3.7/pickle.py", line 504, in save f(self, obj) # Call unbound method with explicit self File "/mnt/appl/software/Python/3.7.4-GCCcore-8.3.0/lib/python3.7/pickle.py", line 732, in save_bytes self._write_large_bytes(BINBYTES + pack("<I", n), obj) struct.error: 'I' format requires 0 <= number <= 4294967295Traceback (most recent call last): File "./tokenize_and_chunkify_in_memory.py", line 94, in <module> main() File "./tokenize_and_chunkify_in_memory.py", line 89, in main tokenize_and_chunkify(config) File "./tokenize_and_chunkify_in_memory.py", line 67, in tokenize_and_chunkify contexts_dataset.map(function=None, cache_file_name=str(output_dir_path / "tmp.arrow"), writer_batch_size=50000, num_proc=config.threads) File "/home/cernypro/dev/envs/huggingface_gpu/lib/python3.7/site-packages/datasets/arrow_dataset.py", line 1485, in map transformed_shards = [r.get() for r in results] File "/home/cernypro/dev/envs/huggingface_gpu/lib/python3.7/site-packages/datasets/arrow_dataset.py", line 1485, in <listcomp> transformed_shards = [r.get() for r in results] File "/home/cernypro/dev/envs/huggingface_gpu/lib/python3.7/site-packages/multiprocess/pool.py", line 657, in get raise self._value File "/home/cernypro/dev/envs/huggingface_gpu/lib/python3.7/site-packages/multiprocess/pool.py", line 431, in _handle_tasks put(task) File "/home/cernypro/dev/envs/huggingface_gpu/lib/python3.7/site-packages/multiprocess/connection.py", line 209, in send self._send_bytes(_ForkingPickler.dumps(obj)) File "/home/cernypro/dev/envs/huggingface_gpu/lib/python3.7/site-packages/multiprocess/reduction.py", line 54, in dumps cls(buf, protocol, *args, **kwds).dump(obj) File "/home/cernypro/dev/envs/huggingface_gpu/lib/python3.7/site-packages/dill/_dill.py", line 454, in dump StockPickler.dump(self, obj) File "/mnt/appl/software/Python/3.7.4-GCCcore-8.3.0/lib/python3.7/pickle.py", line 437, in dump self.save(obj) File "/mnt/appl/software/Python/3.7.4-GCCcore-8.3.0/lib/python3.7/pickle.py", line 504, in save f(self, obj) # Call unbound method with explicit self File "/mnt/appl/software/Python/3.7.4-GCCcore-8.3.0/lib/python3.7/pickle.py", line 789, in save_tuple save(element) File "/mnt/appl/software/Python/3.7.4-GCCcore-8.3.0/lib/python3.7/pickle.py", line 504, in save f(self, obj) # Call unbound method with explicit self File "/home/cernypro/dev/envs/huggingface_gpu/lib/python3.7/site-packages/dill/_dill.py", line 941, in save_module_dict StockPickler.save_dict(pickler, obj) File "/mnt/appl/software/Python/3.7.4-GCCcore-8.3.0/lib/python3.7/pickle.py", line 859, 
in save_dict self._batch_setitems(obj.items()) File "/mnt/appl/software/Python/3.7.4-GCCcore-8.3.0/lib/python3.7/pickle.py", line 885, in _batch_setitems save(v) File "/mnt/appl/software/Python/3.7.4-GCCcore-8.3.0/lib/python3.7/pickle.py", line 549, in save self.save_reduce(obj=obj, *rv) File "/mnt/appl/software/Python/3.7.4-GCCcore-8.3.0/lib/python3.7/pickle.py", line 662, in save_reduce save(state) File "/mnt/appl/software/Python/3.7.4-GCCcore-8.3.0/lib/python3.7/pickle.py", line 504, in save f(self, obj) # Call unbound method with explicit self File "/home/cernypro/dev/envs/huggingface_gpu/lib/python3.7/site-packages/dill/_dill.py", line 941, in save_module_dict StockPickler.save_dict(pickler, obj) File "/mnt/appl/software/Python/3.7.4-GCCcore-8.3.0/lib/python3.7/pickle.py", line 859, in save_dict self._batch_setitems(obj.items()) File "/mnt/appl/software/Python/3.7.4-GCCcore-8.3.0/lib/python3.7/pickle.py", line 885, in _batch_setitems save(v) File "/mnt/appl/software/Python/3.7.4-GCCcore-8.3.0/lib/python3.7/pickle.py", line 549, in save self.save_reduce(obj=obj, *rv) File "/mnt/appl/software/Python/3.7.4-GCCcore-8.3.0/lib/python3.7/pickle.py", line 638, in save_reduce save(args) File "/mnt/appl/software/Python/3.7.4-GCCcore-8.3.0/lib/python3.7/pickle.py", line 504, in save f(self, obj) # Call unbound method with explicit self File "/mnt/appl/software/Python/3.7.4-GCCcore-8.3.0/lib/python3.7/pickle.py", line 774, in save_tuple save(element) File "/mnt/appl/software/Python/3.7.4-GCCcore-8.3.0/lib/python3.7/pickle.py", line 504, in save f(self, obj) # Call unbound method with explicit self File "/mnt/appl/software/Python/3.7.4-GCCcore-8.3.0/lib/python3.7/pickle.py", line 819, in save_list self._batch_appends(obj) File "/mnt/appl/software/Python/3.7.4-GCCcore-8.3.0/lib/python3.7/pickle.py", line 843, in _batch_appends save(x) File "/mnt/appl/software/Python/3.7.4-GCCcore-8.3.0/lib/python3.7/pickle.py", line 549, in save self.save_reduce(obj=obj, *rv) File "/mnt/appl/software/Python/3.7.4-GCCcore-8.3.0/lib/python3.7/pickle.py", line 638, in save_reduce save(args) File "/mnt/appl/software/Python/3.7.4-GCCcore-8.3.0/lib/python3.7/pickle.py", line 504, in save f(self, obj) # Call unbound method with explicit self File "/mnt/appl/software/Python/3.7.4-GCCcore-8.3.0/lib/python3.7/pickle.py", line 774, in save_tuple save(element) File "/mnt/appl/software/Python/3.7.4-GCCcore-8.3.0/lib/python3.7/pickle.py", line 504, in save f(self, obj) # Call unbound method with explicit self File "/mnt/appl/software/Python/3.7.4-GCCcore-8.3.0/lib/python3.7/pickle.py", line 819, in save_list self._batch_appends(obj) File "/mnt/appl/software/Python/3.7.4-GCCcore-8.3.0/lib/python3.7/pickle.py", line 846, in _batch_appends save(tmp[0]) File "/mnt/appl/software/Python/3.7.4-GCCcore-8.3.0/lib/python3.7/pickle.py", line 549, in save self.save_reduce(obj=obj, *rv) File "/mnt/appl/software/Python/3.7.4-GCCcore-8.3.0/lib/python3.7/pickle.py", line 638, in save_reduce save(args) File "/mnt/appl/software/Python/3.7.4-GCCcore-8.3.0/lib/python3.7/pickle.py", line 504, in save f(self, obj) # Call unbound method with explicit self File "/mnt/appl/software/Python/3.7.4-GCCcore-8.3.0/lib/python3.7/pickle.py", line 774, in save_tuple save(element) File "/mnt/appl/software/Python/3.7.4-GCCcore-8.3.0/lib/python3.7/pickle.py", line 504, in save f(self, obj) # Call unbound method with explicit self File "/mnt/appl/software/Python/3.7.4-GCCcore-8.3.0/lib/python3.7/pickle.py", line 789, in save_tuple save(element) File 
"/mnt/appl/software/Python/3.7.4-GCCcore-8.3.0/lib/python3.7/pickle.py", line 504, in save f(self, obj) # Call unbound method with explicit self File "/mnt/appl/software/Python/3.7.4-GCCcore-8.3.0/lib/python3.7/pickle.py", line 819, in save_list self._batch_appends(obj) File "/mnt/appl/software/Python/3.7.4-GCCcore-8.3.0/lib/python3.7/pickle.py", line 846, in _batch_appends save(tmp[0]) File "/mnt/appl/software/Python/3.7.4-GCCcore-8.3.0/lib/python3.7/pickle.py", line 504, in save f(self, obj) # Call unbound method with explicit self File "/mnt/appl/software/Python/3.7.4-GCCcore-8.3.0/lib/python3.7/pickle.py", line 789, in save_tuple save(element) File "/mnt/appl/software/Python/3.7.4-GCCcore-8.3.0/lib/python3.7/pickle.py", line 504, in save f(self, obj) # Call unbound method with explicit self File "/mnt/appl/software/Python/3.7.4-GCCcore-8.3.0/lib/python3.7/pickle.py", line 819, in save_list self._batch_appends(obj) File "/mnt/appl/software/Python/3.7.4-GCCcore-8.3.0/lib/python3.7/pickle.py", line 846, in _batch_appends save(tmp[0]) File "/mnt/appl/software/Python/3.7.4-GCCcore-8.3.0/lib/python3.7/pickle.py", line 504, in save f(self, obj) # Call unbound method with explicit self File "/mnt/appl/software/Python/3.7.4-GCCcore-8.3.0/lib/python3.7/pickle.py", line 789, in save_tuple save(element) File "/mnt/appl/software/Python/3.7.4-GCCcore-8.3.0/lib/python3.7/pickle.py", line 504, in save f(self, obj) # Call unbound method with explicit self File "/mnt/appl/software/Python/3.7.4-GCCcore-8.3.0/lib/python3.7/pickle.py", line 819, in save_list self._batch_appends(obj) File "/mnt/appl/software/Python/3.7.4-GCCcore-8.3.0/lib/python3.7/pickle.py", line 843, in _batch_appends save(x) File "/mnt/appl/software/Python/3.7.4-GCCcore-8.3.0/lib/python3.7/pickle.py", line 549, in save self.save_reduce(obj=obj, *rv) File "/mnt/appl/software/Python/3.7.4-GCCcore-8.3.0/lib/python3.7/pickle.py", line 638, in save_reduce save(args) File "/mnt/appl/software/Python/3.7.4-GCCcore-8.3.0/lib/python3.7/pickle.py", line 504, in save f(self, obj) # Call unbound method with explicit self File "/mnt/appl/software/Python/3.7.4-GCCcore-8.3.0/lib/python3.7/pickle.py", line 774, in save_tuple save(element) File "/mnt/appl/software/Python/3.7.4-GCCcore-8.3.0/lib/python3.7/pickle.py", line 504, in save f(self, obj) # Call unbound method with explicit self File "/mnt/appl/software/Python/3.7.4-GCCcore-8.3.0/lib/python3.7/pickle.py", line 732, in save_bytes self._write_large_bytes(BINBYTES + pack("<I", n), obj) struct.error: 'I' format requires 0 <= number <= 4294967295 ```
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/2134/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/2134/timeline
null
completed
false
https://api.github.com/repos/huggingface/datasets/issues/2864
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2864/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2864/comments
https://api.github.com/repos/huggingface/datasets/issues/2864/events
https://github.com/huggingface/datasets/pull/2864
986,159,438
MDExOlB1bGxSZXF1ZXN0NzI1MzkyNjcw
2,864
Fix data URL in ToTTo dataset
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" }
[]
closed
false
null
[]
{ "closed_at": null, "closed_issues": 2, "created_at": "2021-07-21T15:34:56Z", "creator": { "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" }, "description": "Next minor release", "due_on": "2021-08-30T07:00:00Z", "html_url": "https://github.com/huggingface/datasets/milestone/8", "id": 6968069, "labels_url": "https://api.github.com/repos/huggingface/datasets/milestones/8/labels", "node_id": "MI_kwDODunzps4AalMF", "number": 8, "open_issues": 4, "state": "open", "title": "1.12", "updated_at": "2021-10-13T10:26:33Z", "url": "https://api.github.com/repos/huggingface/datasets/milestones/8" }
[]
2021-09-02T05:25:08Z
2021-09-02T06:47:40Z
2021-09-02T06:47:40Z
MEMBER
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/2864.diff", "html_url": "https://github.com/huggingface/datasets/pull/2864", "merged_at": "2021-09-02T06:47:40Z", "patch_url": "https://github.com/huggingface/datasets/pull/2864.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/2864" }
The data source host changed their data URL: google-research-datasets/ToTTo@cebeb43. Fix #2860.
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/2864/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/2864/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/1669
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1669/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1669/comments
https://api.github.com/repos/huggingface/datasets/issues/1669/events
https://github.com/huggingface/datasets/issues/1669
776,608,386
MDU6SXNzdWU3NzY2MDgzODY=
1,669
wiki_dpr dataset pre-processing performance
{ "avatar_url": "https://avatars.githubusercontent.com/u/753898?v=4", "events_url": "https://api.github.com/users/dbarnhart/events{/privacy}", "followers_url": "https://api.github.com/users/dbarnhart/followers", "following_url": "https://api.github.com/users/dbarnhart/following{/other_user}", "gists_url": "https://api.github.com/users/dbarnhart/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/dbarnhart", "id": 753898, "login": "dbarnhart", "node_id": "MDQ6VXNlcjc1Mzg5OA==", "organizations_url": "https://api.github.com/users/dbarnhart/orgs", "received_events_url": "https://api.github.com/users/dbarnhart/received_events", "repos_url": "https://api.github.com/users/dbarnhart/repos", "site_admin": false, "starred_url": "https://api.github.com/users/dbarnhart/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/dbarnhart/subscriptions", "type": "User", "url": "https://api.github.com/users/dbarnhart" }
[]
closed
false
null
[]
null
[ "Sorry, double posted." ]
2020-12-30T19:41:09Z
2020-12-30T19:42:25Z
2020-12-30T19:42:25Z
NONE
null
null
null
I've been working with wiki_dpr and noticed that the dataset processing is seriously impaired in performance [1]. It takes about 12h to process the entire dataset. Most of this time is simply loading and processing the data, but the actual indexing is also quite slow (3h). I won't repeat the concerns around multiprocessing as they are addressed in other issues (#786), but this is the first obvious thing to do. Using cython to speed up the text manipulation may also help. Loading and processing a dataset of this size in under 15 minutes does not seem unreasonable on a modern multi-core machine. I have hit such targets myself on similar tasks. Would love to see this improve. The other issue is that it takes 3h to construct the FAISS index. If only we could use GPUs with HNSW, but we can't. My sharded GPU indexing code can build an IVF + PQ index in 10 minutes on 20 million vectors. Still, 3h seems slow even for the CPU. It looks like HF is adding only 1000 vectors at a time by default [2], whereas the faiss benchmarks add 1 million vectors at a time (effectively) [3]. It's possible the runtime could be reduced with a larger batch. Also, it looks like project dependencies ultimately use OpenBLAS, but this is known to have issues when combined with OpenMP, which HNSW uses [4]. A workaround is to set the environment variable `OMP_WAIT_POLICY=PASSIVE` via `os.environ` or similar. References: [1] https://github.com/huggingface/datasets/blob/master/datasets/wiki_dpr/wiki_dpr.py [2] https://github.com/huggingface/datasets/blob/master/src/datasets/search.py [3] https://github.com/facebookresearch/faiss/blob/master/benchs/bench_hnsw.py [4] https://github.com/facebookresearch/faiss/issues/422
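A sketch of the two mitigations suggested in the report above, assuming a toy HNSW index (the index type, sizes, and random data are illustrative, not the library's defaults):

```python
import os

# Workaround from [4]: set the OpenMP wait policy before faiss (and the
# OpenBLAS it links against) is loaded.
os.environ["OMP_WAIT_POLICY"] = "PASSIVE"

import faiss
import numpy as np

dim, n, batch = 768, 200_000, 100_000  # toy sizes; wiki_dpr is ~21M x 768
index = faiss.IndexHNSWFlat(dim, 32)   # M=32 is a common HNSW setting

rng = np.random.default_rng(0)
for start in range(0, n, batch):
    rows = min(batch, n - start)
    # Large add() batches (vs. a 1000-vector default) amortize per-call overhead.
    index.add(rng.standard_normal((rows, dim)).astype(np.float32))
```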
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/1669/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/1669/timeline
null
completed
false
https://api.github.com/repos/huggingface/datasets/issues/2837
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2837/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2837/comments
https://api.github.com/repos/huggingface/datasets/issues/2837/events
https://github.com/huggingface/datasets/issues/2837
979,298,297
MDU6SXNzdWU5NzkyOTgyOTc=
2,837
prepare_module issue when loading from read-only fs
{ "avatar_url": "https://avatars.githubusercontent.com/u/8976546?v=4", "events_url": "https://api.github.com/users/Dref360/events{/privacy}", "followers_url": "https://api.github.com/users/Dref360/followers", "following_url": "https://api.github.com/users/Dref360/following{/other_user}", "gists_url": "https://api.github.com/users/Dref360/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/Dref360", "id": 8976546, "login": "Dref360", "node_id": "MDQ6VXNlcjg5NzY1NDY=", "organizations_url": "https://api.github.com/users/Dref360/orgs", "received_events_url": "https://api.github.com/users/Dref360/received_events", "repos_url": "https://api.github.com/users/Dref360/repos", "site_admin": false, "starred_url": "https://api.github.com/users/Dref360/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Dref360/subscriptions", "type": "User", "url": "https://api.github.com/users/Dref360" }
[ { "color": "d73a4a", "default": true, "description": "Something isn't working", "id": 1935892857, "name": "bug", "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug" } ]
closed
false
null
[]
null
[ "Hello, I opened #2887 to fix this." ]
2021-08-25T15:21:26Z
2021-10-05T17:58:22Z
2021-10-05T17:58:22Z
CONTRIBUTOR
null
null
null
## Describe the bug When we use prepare_module from a readonly file system, we create a FileLock using the `local_path`. This path is not necessarily writable. `lock_path = local_path + ".lock"` ## Steps to reproduce the bug Run `load_dataset` on a readonly python loader file. ```python ds = load_dataset( python_loader, data_files={"train": train_path, "test": test_path} ) ``` where `python_loader` is a path to a file located in a readonly folder. ## Expected results This should work I think? ## Actual results ```python return load_dataset( File "/usr/local/lib/python3.8/dist-packages/datasets/load.py", line 711, in load_dataset module_path, hash, resolved_file_path = prepare_module( File "/usr/local/lib/python3.8/dist-packages/datasets/load.py", line 465, in prepare_module with FileLock(lock_path): File "/usr/local/lib/python3.8/dist-packages/datasets/utils/filelock.py", line 314, in __enter__ self.acquire() File "/usr/local/lib/python3.8/dist-packages/datasets/utils/filelock.py", line 263, in acquire self._acquire() File "/usr/local/lib/python3.8/dist-packages/datasets/utils/filelock.py", line 378, in _acquire fd = os.open(self._lock_file, open_mode) OSError: [Errno 30] Read-only file system: 'YOUR_FILE.py.lock' ``` ## Environment info <!-- You can run the command `datasets-cli env` and copy-and-paste its output below. --> - `datasets` version: 1.7.0 - Platform: macOS-10.15.7-x86_64-i386-64bit - Python version: 3.8.8 - PyArrow version: 3.0.0
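A possible workaround until the lock path is fixed upstream, sketched with hypothetical paths: copy the loader script to a writable location so the `<script>.py.lock` file can be created next to it.

```python
import shutil
import tempfile

from datasets import load_dataset

# Hypothetical paths for illustration only.
python_loader = "/readonly/loaders/my_dataset.py"
train_path, test_path = "train.csv", "test.csv"

# Copy the loader script somewhere writable so the ".lock" file
# can be created alongside it.
writable_loader = shutil.copy(python_loader, tempfile.mkdtemp())

ds = load_dataset(writable_loader, data_files={"train": train_path, "test": test_path})
```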
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/2837/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/2837/timeline
null
completed
false
https://api.github.com/repos/huggingface/datasets/issues/2760
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2760/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2760/comments
https://api.github.com/repos/huggingface/datasets/issues/2760/events
https://github.com/huggingface/datasets/issues/2760
961,372,667
MDU6SXNzdWU5NjEzNzI2Njc=
2,760
Add Nuswide dataset
{ "avatar_url": "https://avatars.githubusercontent.com/u/19774925?v=4", "events_url": "https://api.github.com/users/shivangibithel/events{/privacy}", "followers_url": "https://api.github.com/users/shivangibithel/followers", "following_url": "https://api.github.com/users/shivangibithel/following{/other_user}", "gists_url": "https://api.github.com/users/shivangibithel/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/shivangibithel", "id": 19774925, "login": "shivangibithel", "node_id": "MDQ6VXNlcjE5Nzc0OTI1", "organizations_url": "https://api.github.com/users/shivangibithel/orgs", "received_events_url": "https://api.github.com/users/shivangibithel/received_events", "repos_url": "https://api.github.com/users/shivangibithel/repos", "site_admin": false, "starred_url": "https://api.github.com/users/shivangibithel/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/shivangibithel/subscriptions", "type": "User", "url": "https://api.github.com/users/shivangibithel" }
[ { "color": "e99695", "default": false, "description": "Requesting to add a new dataset", "id": 2067376369, "name": "dataset request", "node_id": "MDU6TGFiZWwyMDY3Mzc2MzY5", "url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20request" }, { "color": "bfdadc",...
open
false
null
[]
null
[]
2021-08-05T03:00:41Z
2021-12-08T12:06:23Z
null
NONE
null
null
null
## Adding a Dataset - **Name:** *NUSWIDE* - **Description:** *[A Real-World Web Image Dataset from National University of Singapore](https://lms.comp.nus.edu.sg/wp-content/uploads/2019/research/nuswide/NUS-WIDE.html)* - **Paper:** *[here](https://lms.comp.nus.edu.sg/wp-content/uploads/2019/research/nuswide/nuswide-civr2009.pdf)* - **Data:** *[here](https://github.com/wenting-zhao/nuswide)* - **Motivation:** *This dataset is a benchmark in the Text Retrieval task.* Instructions to add a new dataset can be found [here](https://github.com/huggingface/datasets/blob/master/ADD_NEW_DATASET.md).
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/2760/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/2760/timeline
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/3553
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3553/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3553/comments
https://api.github.com/repos/huggingface/datasets/issues/3553/events
https://github.com/huggingface/datasets/issues/3553
1,097,252,275
I_kwDODunzps5BZr2z
3,553
set_format("np") no longer works for Image data
{ "avatar_url": "https://avatars.githubusercontent.com/u/5862228?v=4", "events_url": "https://api.github.com/users/cgarciae/events{/privacy}", "followers_url": "https://api.github.com/users/cgarciae/followers", "following_url": "https://api.github.com/users/cgarciae/following{/other_user}", "gists_url": "https://api.github.com/users/cgarciae/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/cgarciae", "id": 5862228, "login": "cgarciae", "node_id": "MDQ6VXNlcjU4NjIyMjg=", "organizations_url": "https://api.github.com/users/cgarciae/orgs", "received_events_url": "https://api.github.com/users/cgarciae/received_events", "repos_url": "https://api.github.com/users/cgarciae/repos", "site_admin": false, "starred_url": "https://api.github.com/users/cgarciae/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/cgarciae/subscriptions", "type": "User", "url": "https://api.github.com/users/cgarciae" }
[ { "color": "d73a4a", "default": true, "description": "Something isn't working", "id": 1935892857, "name": "bug", "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug" } ]
closed
false
{ "avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4", "events_url": "https://api.github.com/users/mariosasko/events{/privacy}", "followers_url": "https://api.github.com/users/mariosasko/followers", "following_url": "https://api.github.com/users/mariosasko/following{/other_user}", "gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/mariosasko", "id": 47462742, "login": "mariosasko", "node_id": "MDQ6VXNlcjQ3NDYyNzQy", "organizations_url": "https://api.github.com/users/mariosasko/orgs", "received_events_url": "https://api.github.com/users/mariosasko/received_events", "repos_url": "https://api.github.com/users/mariosasko/repos", "site_admin": false, "starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions", "type": "User", "url": "https://api.github.com/users/mariosasko" }
[ { "avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4", "events_url": "https://api.github.com/users/mariosasko/events{/privacy}", "followers_url": "https://api.github.com/users/mariosasko/followers", "following_url": "https://api.github.com/users/mariosasko/following{/other_user}", ...
null
[ "A quick fix for now is doing this:\r\n\r\n```python\r\nX_train = np.stack(dataset[\"train\"][\"image\"])[..., None]", "This error also propagates to jax and is even trickier to fix, since `.with_format(type='jax')` will use numpy conversion internally (and fail). For a three line failure:\r\n\r\n```python\r\ndat...
2022-01-09T17:18:13Z
2022-10-14T12:03:55Z
2022-10-14T12:03:54Z
NONE
null
null
null
## Describe the bug `dataset.set_format("np")` no longer works for image data; previously you could load MNIST like this: ```python dataset = load_dataset("mnist") dataset.set_format("np") X_train = dataset["train"]["image"][..., None] # <== No longer a numpy array ``` but now `set_format("np")` seems to have no effect: the dataset just returns a list of PIL images instead of NumPy arrays as requested.
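The quick fix proposed in the comments above, spelled out as a runnable sketch:

```python
import numpy as np
from datasets import load_dataset

dataset = load_dataset("mnist")

# Quick fix from the comments above: stack the PIL images manually instead
# of relying on set_format("np") for the conversion.
X_train = np.stack(dataset["train"]["image"])[..., None]
print(X_train.shape, X_train.dtype)  # (60000, 28, 28, 1) uint8
```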
{ "+1": 1, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 1, "url": "https://api.github.com/repos/huggingface/datasets/issues/3553/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/3553/timeline
null
completed
false
https://api.github.com/repos/huggingface/datasets/issues/3065
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3065/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3065/comments
https://api.github.com/repos/huggingface/datasets/issues/3065/events
https://github.com/huggingface/datasets/pull/3065
1,023,951,322
PR_kwDODunzps4tFDjk
3,065
Fix test command after refactoring
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq" }
[]
closed
false
null
[]
null
[]
2021-10-12T15:23:30Z
2021-10-12T15:28:47Z
2021-10-12T15:28:46Z
MEMBER
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/3065.diff", "html_url": "https://github.com/huggingface/datasets/pull/3065", "merged_at": "2021-10-12T15:28:46Z", "patch_url": "https://github.com/huggingface/datasets/pull/3065.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/3065" }
Fix the `datasets-cli` test command after the `prepare_module` change in #2986
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/3065/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/3065/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/5106
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5106/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5106/comments
https://api.github.com/repos/huggingface/datasets/issues/5106/events
https://github.com/huggingface/datasets/pull/5106
1,406,635,758
PR_kwDODunzps5ArM6G
5,106
Fix task template reload from dict
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq" }
[]
closed
false
null
[]
null
[ "_The documentation is not available anymore as the PR was closed or merged._", "> Just wondering if there might be other data classes default values missed that could cause an issue... Apart from feature-like classes and tasks, I don't see any others though...\r\n\r\nI think we're good ! `asdict` is used on the ...
2022-10-12T18:33:49Z
2022-10-13T09:59:07Z
2022-10-13T09:56:51Z
MEMBER
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/5106.diff", "html_url": "https://github.com/huggingface/datasets/pull/5106", "merged_at": "2022-10-13T09:56:51Z", "patch_url": "https://github.com/huggingface/datasets/pull/5106.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/5106" }
Since #4926, the JSON dumps are simplified, which made task template dicts empty by default. I fixed this by always including the task name, which is needed to reload a task from a dict.
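A toy illustration of the failure mode (not the library's actual serialization code): dropping default-valued fields from a dataclass dump erases the key needed to pick the right template on reload.

```python
from dataclasses import dataclass, fields

@dataclass
class TaskTemplate:  # stand-in for the real task template classes
    task: str = "text-classification"

template = TaskTemplate()

# A "simplified" dump that omits fields equal to their defaults yields {}:
simplified = {
    f.name: getattr(template, f.name)
    for f in fields(template)
    if getattr(template, f.name) != f.default
}
print(simplified)  # {} -- nothing left to say which template to rebuild

# The fix described above: always keep the discriminating key in the dump.
simplified["task"] = template.task
print(simplified)  # {'task': 'text-classification'}
```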
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/5106/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/5106/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/4540
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4540/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4540/comments
https://api.github.com/repos/huggingface/datasets/issues/4540/events
https://github.com/huggingface/datasets/issues/4540
1,280,142,942
I_kwDODunzps5MTW5e
4,540
Avoid splitting the file path by `.py`
{ "avatar_url": "https://avatars.githubusercontent.com/u/18573157?v=4", "events_url": "https://api.github.com/users/espoirMur/events{/privacy}", "followers_url": "https://api.github.com/users/espoirMur/followers", "following_url": "https://api.github.com/users/espoirMur/following{/other_user}", "gists_url": "https://api.github.com/users/espoirMur/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/espoirMur", "id": 18573157, "login": "espoirMur", "node_id": "MDQ6VXNlcjE4NTczMTU3", "organizations_url": "https://api.github.com/users/espoirMur/orgs", "received_events_url": "https://api.github.com/users/espoirMur/received_events", "repos_url": "https://api.github.com/users/espoirMur/repos", "site_admin": false, "starred_url": "https://api.github.com/users/espoirMur/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/espoirMur/subscriptions", "type": "User", "url": "https://api.github.com/users/espoirMur" }
[ { "color": "7057ff", "default": true, "description": "Good for newcomers", "id": 1935892877, "name": "good first issue", "node_id": "MDU6TGFiZWwxOTM1ODkyODc3", "url": "https://api.github.com/repos/huggingface/datasets/labels/good%20first%20issue" } ]
closed
false
{ "avatar_url": "https://avatars.githubusercontent.com/u/20517962?v=4", "events_url": "https://api.github.com/users/VijayKalmath/events{/privacy}", "followers_url": "https://api.github.com/users/VijayKalmath/followers", "following_url": "https://api.github.com/users/VijayKalmath/following{/other_user}", "gists_url": "https://api.github.com/users/VijayKalmath/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/VijayKalmath", "id": 20517962, "login": "VijayKalmath", "node_id": "MDQ6VXNlcjIwNTE3OTYy", "organizations_url": "https://api.github.com/users/VijayKalmath/orgs", "received_events_url": "https://api.github.com/users/VijayKalmath/received_events", "repos_url": "https://api.github.com/users/VijayKalmath/repos", "site_admin": false, "starred_url": "https://api.github.com/users/VijayKalmath/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/VijayKalmath/subscriptions", "type": "User", "url": "https://api.github.com/users/VijayKalmath" }
[ { "avatar_url": "https://avatars.githubusercontent.com/u/20517962?v=4", "events_url": "https://api.github.com/users/VijayKalmath/events{/privacy}", "followers_url": "https://api.github.com/users/VijayKalmath/followers", "following_url": "https://api.github.com/users/VijayKalmath/following{/other_use...
null
[ "Hi @espoirMur, thanks for reporting.\r\n\r\nYou are right: that code line could be improved and made more generically valid.\r\n\r\nOn the other hand, I would suggest using `os.path.splitext` instead.\r\n\r\nAre you willing to open a PR? :)", "I will have a look.. \r\n\r\nThis weekend .. ", "@albertvillanova ...
2022-06-22T13:26:55Z
2022-07-07T13:17:44Z
2022-07-07T13:17:44Z
NONE
null
null
null
https://github.com/huggingface/datasets/blob/90b3a98065556fc66380cafd780af9b1814b9426/src/datasets/load.py#L272 Hello, Thank you for this library. I was using it and hit one edge case: my home folder name ends with `.py` (it is `/home/espoir.py`), so anytime I run the code to load a local module, this line fails because after splitting it tries to save the code to my home directory. Steps to reproduce: - Have a home folder whose name ends with `.py` - Load a module from a local folder: `qa_dataset = load_dataset("src/data/build_qa_dataset.py")` fails. A possible workaround would be to use pathlib at the mentioned line: `meta_path = Path(importable_local_file).parent.joinpath("metadata.json")` would alleviate the issue. Let me know your thoughts on this, and I can try to fix it with a PR.
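A minimal sketch of the edge case, using the home directory from the report; `os.path.splitext` (the fix suggested in the comments) only strips the final extension:

```python
import os

# A home directory that happens to end in ".py", as described above.
importable_local_file = "/home/espoir.py/src/data/build_qa_dataset.py"

# The naive split cuts at the *first* ".py", pointing at the home directory:
print(importable_local_file.split(".py")[0])       # /home/espoir
# os.path.splitext only strips the final extension, which is what was meant:
print(os.path.splitext(importable_local_file)[0])  # /home/espoir.py/src/data/build_qa_dataset
```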
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/4540/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/4540/timeline
null
completed
false
https://api.github.com/repos/huggingface/datasets/issues/2420
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2420/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2420/comments
https://api.github.com/repos/huggingface/datasets/issues/2420/events
https://github.com/huggingface/datasets/pull/2420
904,821,772
MDExOlB1bGxSZXF1ZXN0NjU1OTQ1ODgw
2,420
Updated Dataset Description
{ "avatar_url": "https://avatars.githubusercontent.com/u/10741860?v=4", "events_url": "https://api.github.com/users/binny-mathew/events{/privacy}", "followers_url": "https://api.github.com/users/binny-mathew/followers", "following_url": "https://api.github.com/users/binny-mathew/following{/other_user}", "gists_url": "https://api.github.com/users/binny-mathew/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/binny-mathew", "id": 10741860, "login": "binny-mathew", "node_id": "MDQ6VXNlcjEwNzQxODYw", "organizations_url": "https://api.github.com/users/binny-mathew/orgs", "received_events_url": "https://api.github.com/users/binny-mathew/received_events", "repos_url": "https://api.github.com/users/binny-mathew/repos", "site_admin": false, "starred_url": "https://api.github.com/users/binny-mathew/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/binny-mathew/subscriptions", "type": "User", "url": "https://api.github.com/users/binny-mathew" }
[]
closed
false
null
[]
null
[]
2021-05-28T07:10:51Z
2021-06-10T12:11:35Z
2021-06-10T12:11:35Z
CONTRIBUTOR
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/2420.diff", "html_url": "https://github.com/huggingface/datasets/pull/2420", "merged_at": "2021-06-10T12:11:35Z", "patch_url": "https://github.com/huggingface/datasets/pull/2420.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/2420" }
Added Point of contact information and several other details about the dataset.
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/2420/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/2420/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/2908
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2908/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2908/comments
https://api.github.com/repos/huggingface/datasets/issues/2908/events
https://github.com/huggingface/datasets/pull/2908
995,970,612
PR_kwDODunzps4rumwW
2,908
Update Zenodo metadata with creator names and affiliation
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" }
[]
closed
false
null
[]
null
[]
2021-09-14T12:39:37Z
2021-09-14T14:29:25Z
2021-09-14T14:29:25Z
MEMBER
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/2908.diff", "html_url": "https://github.com/huggingface/datasets/pull/2908", "merged_at": "2021-09-14T14:29:25Z", "patch_url": "https://github.com/huggingface/datasets/pull/2908.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/2908" }
This PR helps prefill author data when automatically generating the DOI after each release.
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/2908/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/2908/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/3030
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3030/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3030/comments
https://api.github.com/repos/huggingface/datasets/issues/3030/events
https://github.com/huggingface/datasets/pull/3030
1,016,435,324
PR_kwDODunzps4ss41W
3,030
Add `remove_columns` to `IterableDataset`
{ "avatar_url": "https://avatars.githubusercontent.com/u/31893406?v=4", "events_url": "https://api.github.com/users/cccntu/events{/privacy}", "followers_url": "https://api.github.com/users/cccntu/followers", "following_url": "https://api.github.com/users/cccntu/following{/other_user}", "gists_url": "https://api.github.com/users/cccntu/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/cccntu", "id": 31893406, "login": "cccntu", "node_id": "MDQ6VXNlcjMxODkzNDA2", "organizations_url": "https://api.github.com/users/cccntu/orgs", "received_events_url": "https://api.github.com/users/cccntu/received_events", "repos_url": "https://api.github.com/users/cccntu/repos", "site_admin": false, "starred_url": "https://api.github.com/users/cccntu/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/cccntu/subscriptions", "type": "User", "url": "https://api.github.com/users/cccntu" }
[]
closed
false
null
[]
null
[ "Thanks ! That looks all good :)\r\n\r\nI don't think that batching would help. Indeed we're dealing with python iterators that yield elements one by one, so batched `map` needs to accumulate a batch, apply the function, and then yield examples from the batch.\r\n\r\nThough once we have parallel processing in `map`...
2021-10-05T14:58:33Z
2021-10-08T15:33:15Z
2021-10-08T15:31:53Z
CONTRIBUTOR
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/3030.diff", "html_url": "https://github.com/huggingface/datasets/pull/3030", "merged_at": "2021-10-08T15:31:53Z", "patch_url": "https://github.com/huggingface/datasets/pull/3030.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/3030" }
Fixes #2944. WIP: * Not tested yet. * We might want to allow batched remove for efficiency. @lhoestq Do you think it should have `batched=` and `batch_size=`?
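A usage sketch of the method this PR adds; `ag_news` is picked only as a small streaming dataset with a column to drop.

```python
from datasets import load_dataset

# Streaming mode yields an IterableDataset, the class this PR extends.
ids = load_dataset("ag_news", split="train", streaming=True)
ids = ids.remove_columns(["label"])  # implemented on top of map()
print(next(iter(ids)).keys())  # dict_keys(['text'])
```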
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/3030/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/3030/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/4174
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4174/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4174/comments
https://api.github.com/repos/huggingface/datasets/issues/4174/events
https://github.com/huggingface/datasets/pull/4174
1,205,575,941
PR_kwDODunzps42SnJS
4,174
Fix when map function modifies input in-place
{ "avatar_url": "https://avatars.githubusercontent.com/u/24695242?v=4", "events_url": "https://api.github.com/users/thomasw21/events{/privacy}", "followers_url": "https://api.github.com/users/thomasw21/followers", "following_url": "https://api.github.com/users/thomasw21/following{/other_user}", "gists_url": "https://api.github.com/users/thomasw21/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/thomasw21", "id": 24695242, "login": "thomasw21", "node_id": "MDQ6VXNlcjI0Njk1MjQy", "organizations_url": "https://api.github.com/users/thomasw21/orgs", "received_events_url": "https://api.github.com/users/thomasw21/received_events", "repos_url": "https://api.github.com/users/thomasw21/repos", "site_admin": false, "starred_url": "https://api.github.com/users/thomasw21/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/thomasw21/subscriptions", "type": "User", "url": "https://api.github.com/users/thomasw21" }
[]
closed
false
null
[]
null
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
2022-04-15T13:23:15Z
2022-04-15T14:52:07Z
2022-04-15T14:45:58Z
CONTRIBUTOR
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/4174.diff", "html_url": "https://github.com/huggingface/datasets/pull/4174", "merged_at": "2022-04-15T14:45:58Z", "patch_url": "https://github.com/huggingface/datasets/pull/4174.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/4174" }
When `function` modifies its input in-place, the guarantee that the columns in `remove_columns` are still contained in the input no longer holds. Therefore we need to relax the way we pop elements, by checking whether each column exists first.
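A standalone sketch of the failure mode and the relaxed pop described here (not the library's actual code):

```python
def function(example):
    # An in-place transform: the input dict is mutated, so the column named
    # in remove_columns is already gone by the time the library pops it.
    example["length"] = len(example.pop("text"))
    return example

processed = function({"text": "hello", "idx": 1})

# The relaxed behaviour: check membership instead of assuming the column
# survived the user function (a bare pop would raise KeyError).
for column in ["text"]:
    if column in processed:
        processed.pop(column)
print(processed)  # {'idx': 1, 'length': 5}
```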
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/4174/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/4174/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/5268
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5268/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5268/comments
https://api.github.com/repos/huggingface/datasets/issues/5268/events
https://github.com/huggingface/datasets/pull/5268
1,455,633,978
PR_kwDODunzps5DPIsp
5,268
Sharded save_to_disk + multiprocessing
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq" }
[]
closed
false
null
[]
null
[ "_The documentation is not available anymore as the PR was closed or merged._", "Added both num_shards and max_shard_size in push_to_hub/save_to_disk. Will take care of updating the tests later", "It's ready for a final review @mariosasko and @albertvillanova, let me know what you think :)", "Took your commen...
2022-11-18T18:50:01Z
2022-12-14T18:25:52Z
2022-12-14T18:22:58Z
MEMBER
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/5268.diff", "html_url": "https://github.com/huggingface/datasets/pull/5268", "merged_at": "2022-12-14T18:22:58Z", "patch_url": "https://github.com/huggingface/datasets/pull/5268.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/5268" }
Added `num_shards=` and `num_proc=` to `save_to_disk()` EDIT: also added `max_shard_size=` to `save_to_disk()`, and `num_shards=` to `push_to_hub` I also: - deprecated the fs parameter in favor of storage_options (for consistency with the rest of the lib) in save_to_disk and load_from_disk - made `save_to_disk` always embed the image/audio data in Arrow - added a tqdm bar in `save_to_disk` - used the MockFileSystem in tests for `save_to_disk` and `load_from_disk` - removed the unused integration tests with S3, since we can now test with `mockfs` instead of `s3fs` TODO: - [x] implement save_to_disk for dataset dict - [x] save_to_disk for dataset dict tests - [x] deprecate fs in dataset dict load_from_disk as well - [x] update docs Close #5263 Close https://github.com/huggingface/datasets/issues/4196 Close https://github.com/huggingface/datasets/issues/4351
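A usage sketch of the parameters added in this PR; the dataset and repo id are placeholders:

```python
from datasets import load_dataset

ds = load_dataset("ag_news", split="train")

# Shard the on-disk Arrow files and write them with multiple processes:
ds.save_to_disk("ag_news_train", num_shards=8, num_proc=4)
# Or cap the shard size instead of fixing the shard count:
ds.save_to_disk("ag_news_train_by_size", max_shard_size="100MB")
# num_shards is accepted by push_to_hub as well (hypothetical repo id):
# ds.push_to_hub("username/ag_news_copy", num_shards=8)
```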
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/5268/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/5268/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/5914
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5914/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5914/comments
https://api.github.com/repos/huggingface/datasets/issues/5914/events
https://github.com/huggingface/datasets/issues/5914
1,731,483,996
I_kwDODunzps5nNFlc
5,914
array is too big; `arr.size * arr.dtype.itemsize` is larger than the maximum possible size in Datasets
{ "avatar_url": "https://avatars.githubusercontent.com/u/85110830?v=4", "events_url": "https://api.github.com/users/ravenouse/events{/privacy}", "followers_url": "https://api.github.com/users/ravenouse/followers", "following_url": "https://api.github.com/users/ravenouse/following{/other_user}", "gists_url": "https://api.github.com/users/ravenouse/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/ravenouse", "id": 85110830, "login": "ravenouse", "node_id": "MDQ6VXNlcjg1MTEwODMw", "organizations_url": "https://api.github.com/users/ravenouse/orgs", "received_events_url": "https://api.github.com/users/ravenouse/received_events", "repos_url": "https://api.github.com/users/ravenouse/repos", "site_admin": false, "starred_url": "https://api.github.com/users/ravenouse/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ravenouse/subscriptions", "type": "User", "url": "https://api.github.com/users/ravenouse" }
[]
open
false
null
[]
null
[]
2023-05-30T04:25:00Z
2023-05-30T04:25:00Z
null
NONE
null
null
null
### Describe the bug When using the `filter` or `map` function to preprocess a dataset, a ValueError is encountered with the error message "array is too big; arr.size * arr.dtype.itemsize is larger than the maximum possible size." Detailed error message: Traceback (most recent call last): File "data_processing.py", line 26, in <module> processed_dataset[split] = samromur_children[split].map(prepare_dataset, cache_file_name=cache_dict[split],writer_batch_size = 50) File "/projects/zhwa3087/software/anaconda/envs/mycustomenv/lib/python3.7/site-packages/datasets/arrow_dataset.py", line 2405, in map desc=desc, File "/projects/zhwa3087/software/anaconda/envs/mycustomenv/lib/python3.7/site-packages/datasets/arrow_dataset.py", line 557, in wrapper out: Union["Dataset", "DatasetDict"] = func(self, *args, **kwargs) File "/projects/zhwa3087/software/anaconda/envs/mycustomenv/lib/python3.7/site-packages/datasets/arrow_dataset.py", line 524, in wrapper out: Union["Dataset", "DatasetDict"] = func(self, *args, **kwargs) File "/projects/zhwa3087/software/anaconda/envs/mycustomenv/lib/python3.7/site-packages/datasets/fingerprint.py", line 480, in wrapper out = func(self, *args, **kwargs) File "/projects/zhwa3087/software/anaconda/envs/mycustomenv/lib/python3.7/site-packages/datasets/arrow_dataset.py", line 2756, in _map_single example = apply_function_on_filtered_inputs(example, i, offset=offset) File "/projects/zhwa3087/software/anaconda/envs/mycustomenv/lib/python3.7/site-packages/datasets/arrow_dataset.py", line 2655, in apply_function_on_filtered_inputs processed_inputs = function(*fn_args, *additional_args, **fn_kwargs) File "/projects/zhwa3087/software/anaconda/envs/mycustomenv/lib/python3.7/site-packages/datasets/arrow_dataset.py", line 2347, in decorated result = f(decorated_item, *args, **kwargs) File "data_processing.py", line 11, in prepare_dataset audio = batch["audio"] File "/projects/zhwa3087/software/anaconda/envs/mycustomenv/lib/python3.7/site-packages/datasets/arrow_dataset.py", line 123, in __getitem__ value = decode_nested_example(self.features[key], value) if value is not None else None File "/projects/zhwa3087/software/anaconda/envs/mycustomenv/lib/python3.7/site-packages/datasets/features/features.py", line 1260, in decode_nested_example return schema.decode_example(obj, token_per_repo_id=token_per_repo_id) if obj is not None else None File "/projects/zhwa3087/software/anaconda/envs/mycustomenv/lib/python3.7/site-packages/datasets/features/audio.py", line 156, in decode_example array, sampling_rate = self._decode_non_mp3_path_like(path, token_per_repo_id=token_per_repo_id) File "/projects/zhwa3087/software/anaconda/envs/mycustomenv/lib/python3.7/site-packages/datasets/features/audio.py", line 257, in _decode_non_mp3_path_like array, sampling_rate = librosa.load(f, sr=self.sampling_rate, mono=self.mono) File "/projects/zhwa3087/software/anaconda/envs/mycustomenv/lib/python3.7/site-packages/librosa/core/audio.py", line 176, in load y, sr_native = __soundfile_load(path, offset, duration, dtype) File "/projects/zhwa3087/software/anaconda/envs/mycustomenv/lib/python3.7/site-packages/librosa/core/audio.py", line 222, in __soundfile_load y = sf_desc.read(frames=frame_duration, dtype=dtype, always_2d=False).T File "/projects/zhwa3087/software/anaconda/envs/mycustomenv/lib/python3.7/site-packages/soundfile.py", line 891, in read out = self._create_empty_array(frames, always_2d, dtype) File "/projects/zhwa3087/software/anaconda/envs/mycustomenv/lib/python3.7/site-packages/soundfile.py", line 1323, in _create_empty_array return np.empty(shape, dtype, order='C') ValueError: array is too big; `arr.size * arr.dtype.itemsize` is larger than the maximum possible size. ### Steps to reproduce the bug ```python from datasets import load_dataset, DatasetDict from transformers import WhisperFeatureExtractor from transformers import WhisperTokenizer samromur_children= load_dataset("language-and-voice-lab/samromur_children") feature_extractor = WhisperFeatureExtractor.from_pretrained("openai/whisper-small") tokenizer = WhisperTokenizer.from_pretrained("openai/whisper-small", language="icelandic", task="transcribe") def prepare_dataset(batch): # load and resample audio data from 48 to 16kHz audio = batch["audio"] # compute log-Mel input features from input audio array batch["input_features"] = feature_extractor(audio["array"], sampling_rate=16000).input_features[0] # encode target text to label ids batch["labels"] = tokenizer(batch["normalized_text"]).input_ids return batch cache_dict = {"train": "./cache/audio_train.cache", \ "validation": "./cache/audio_validation.cache", \ "test": "./cache/audio_test.cache"} filter_cache_dict = {"train": "./cache/filter_train.arrow", \ "validation": "./cache/filter_validation.arrow", \ "test": "./cache/filter_test.arrow"} print("before filtering") print(samromur_children) #filter the dataset to only include examples with more than 2 seconds of audio samromur_children = samromur_children.filter(lambda example: example["audio"]["array"].shape[0] > 16000*2, cache_file_names=filter_cache_dict) print("after filtering") print(samromur_children) processed_dataset = DatasetDict() # processed_dataset = samromur_children.map(prepare_dataset, cache_file_names=cache_dict, num_proc=10,) for split in ["train", "validation", "test"]: processed_dataset[split] = samromur_children[split].map(prepare_dataset, cache_file_name=cache_dict[split]) ``` ### Expected behavior The dataset is successfully processed and ready to train the model. ### Environment info Python version: 3.7.13 datasets package version: 2.4.0 librosa package version: 0.10.0.post2
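One possible mitigation, sketched under the assumption that the oversized allocation is triggered by a few pathological files: disable decoding, filter on on-disk file size, then re-enable decoding. The cutoff below is arbitrary.

```python
import os
from datasets import load_dataset, Audio

samromur_children = load_dataset("language-and-voice-lab/samromur_children")

# Disable decoding so the filter can inspect files without materializing
# the waveform that triggers np.empty() in the traceback above.
undecoded = samromur_children.cast_column("audio", Audio(decode=False))
max_bytes = 100 * 1024 * 1024  # arbitrary cutoff for this sketch
filtered = undecoded.filter(
    lambda ex: ex["audio"]["path"] is not None
    and os.path.getsize(ex["audio"]["path"]) < max_bytes
)
# Re-enable decoding (with 16 kHz resampling) for the actual preprocessing.
filtered = filtered.cast_column("audio", Audio(sampling_rate=16_000))
```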
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/5914/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/5914/timeline
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/4792
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4792/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4792/comments
https://api.github.com/repos/huggingface/datasets/issues/4792/events
https://github.com/huggingface/datasets/issues/4792
1,328,593,929
I_kwDODunzps5PMLwJ
4,792
Add DocVQA
{ "avatar_url": "https://avatars.githubusercontent.com/u/48327001?v=4", "events_url": "https://api.github.com/users/NielsRogge/events{/privacy}", "followers_url": "https://api.github.com/users/NielsRogge/followers", "following_url": "https://api.github.com/users/NielsRogge/following{/other_user}", "gists_url": "https://api.github.com/users/NielsRogge/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/NielsRogge", "id": 48327001, "login": "NielsRogge", "node_id": "MDQ6VXNlcjQ4MzI3MDAx", "organizations_url": "https://api.github.com/users/NielsRogge/orgs", "received_events_url": "https://api.github.com/users/NielsRogge/received_events", "repos_url": "https://api.github.com/users/NielsRogge/repos", "site_admin": false, "starred_url": "https://api.github.com/users/NielsRogge/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/NielsRogge/subscriptions", "type": "User", "url": "https://api.github.com/users/NielsRogge" }
[ { "color": "e99695", "default": false, "description": "Requesting to add a new dataset", "id": 2067376369, "name": "dataset request", "node_id": "MDU6TGFiZWwyMDY3Mzc2MzY5", "url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20request" } ]
open
false
null
[]
null
[ "Thanks for proposing, @NielsRogge.\r\n\r\nPlease, note this dataset requires registering in their website and their Terms and Conditions state we cannot distribute their URL:\r\n```\r\n1. You will NOT distribute the download URLs\r\n...\r\n```" ]
2022-08-04T13:07:26Z
2022-08-08T05:31:20Z
null
CONTRIBUTOR
null
null
null
## Adding a Dataset - **Name:** DocVQA - **Description:** Document Visual Question Answering (DocVQA) seeks to inspire a “purpose-driven” point of view in Document Analysis and Recognition research, where the document content is extracted and used to respond to high-level tasks defined by the human consumers of this information. - **Paper:** https://arxiv.org/abs/2007.00398 - **Data:** https://www.docvqa.org/datasets/docvqa - **Motivation:** Models like LayoutLM and Donut in the Transformers library are fine-tuned on DocVQA. Would be very handy to directly load this dataset from the hub. Instructions to add a new dataset can be found [here](https://github.com/huggingface/datasets/blob/main/ADD_NEW_DATASET.md).
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/4792/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/4792/timeline
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/5518
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5518/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5518/comments
https://api.github.com/repos/huggingface/datasets/issues/5518/events
https://github.com/huggingface/datasets/pull/5518
1,578,203,962
PR_kwDODunzps5Joom3
5,518
Remove py.typed
{ "avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4", "events_url": "https://api.github.com/users/mariosasko/events{/privacy}", "followers_url": "https://api.github.com/users/mariosasko/followers", "following_url": "https://api.github.com/users/mariosasko/following{/other_user}", "gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/mariosasko", "id": 47462742, "login": "mariosasko", "node_id": "MDQ6VXNlcjQ3NDYyNzQy", "organizations_url": "https://api.github.com/users/mariosasko/orgs", "received_events_url": "https://api.github.com/users/mariosasko/received_events", "repos_url": "https://api.github.com/users/mariosasko/repos", "site_admin": false, "starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions", "type": "User", "url": "https://api.github.com/users/mariosasko" }
[]
closed
false
null
[]
null
[ "_The documentation is not available anymore as the PR was closed or merged._", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==6.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | rea...
2023-02-09T16:22:29Z
2023-02-13T13:55:49Z
2023-02-13T13:48:40Z
CONTRIBUTOR
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/5518.diff", "html_url": "https://github.com/huggingface/datasets/pull/5518", "merged_at": "2023-02-13T13:48:40Z", "patch_url": "https://github.com/huggingface/datasets/pull/5518.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/5518" }
Fix https://github.com/huggingface/datasets/issues/3841
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/5518/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/5518/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/3813
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3813/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3813/comments
https://api.github.com/repos/huggingface/datasets/issues/3813/events
https://github.com/huggingface/datasets/issues/3813
1,158,474,859
I_kwDODunzps5FDOxr
3,813
Add MetaShift dataset
{ "avatar_url": "https://avatars.githubusercontent.com/u/7246357?v=4", "events_url": "https://api.github.com/users/osanseviero/events{/privacy}", "followers_url": "https://api.github.com/users/osanseviero/followers", "following_url": "https://api.github.com/users/osanseviero/following{/other_user}", "gists_url": "https://api.github.com/users/osanseviero/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/osanseviero", "id": 7246357, "login": "osanseviero", "node_id": "MDQ6VXNlcjcyNDYzNTc=", "organizations_url": "https://api.github.com/users/osanseviero/orgs", "received_events_url": "https://api.github.com/users/osanseviero/received_events", "repos_url": "https://api.github.com/users/osanseviero/repos", "site_admin": false, "starred_url": "https://api.github.com/users/osanseviero/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/osanseviero/subscriptions", "type": "User", "url": "https://api.github.com/users/osanseviero" }
[ { "color": "e99695", "default": false, "description": "Requesting to add a new dataset", "id": 2067376369, "name": "dataset request", "node_id": "MDU6TGFiZWwyMDY3Mzc2MzY5", "url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20request" }, { "color": "bfdadc",...
closed
false
{ "avatar_url": "https://avatars.githubusercontent.com/u/17746528?v=4", "events_url": "https://api.github.com/users/dnaveenr/events{/privacy}", "followers_url": "https://api.github.com/users/dnaveenr/followers", "following_url": "https://api.github.com/users/dnaveenr/following{/other_user}", "gists_url": "https://api.github.com/users/dnaveenr/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/dnaveenr", "id": 17746528, "login": "dnaveenr", "node_id": "MDQ6VXNlcjE3NzQ2NTI4", "organizations_url": "https://api.github.com/users/dnaveenr/orgs", "received_events_url": "https://api.github.com/users/dnaveenr/received_events", "repos_url": "https://api.github.com/users/dnaveenr/repos", "site_admin": false, "starred_url": "https://api.github.com/users/dnaveenr/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/dnaveenr/subscriptions", "type": "User", "url": "https://api.github.com/users/dnaveenr" }
[ { "avatar_url": "https://avatars.githubusercontent.com/u/17746528?v=4", "events_url": "https://api.github.com/users/dnaveenr/events{/privacy}", "followers_url": "https://api.github.com/users/dnaveenr/followers", "following_url": "https://api.github.com/users/dnaveenr/following{/other_user}", "gi...
null
[ "I would like to take this up and give it a shot. Any image specific - dataset guidelines to keep in mind ? Thank you.", "#self-assign", "I've started working on adding this dataset. I require some inputs on the following : \r\n\r\nRef for the initial draft [here](https://github.com/dnaveenr/datasets/blob/add_m...
2022-03-03T14:26:45Z
2022-04-10T13:39:59Z
2022-04-10T13:39:59Z
MEMBER
null
null
null
## Adding a Dataset - **Name:** MetaShift - **Description:** a collection of 12,868 sets of natural images across 410 classes - **Paper:** https://arxiv.org/abs/2202.06523v1 - **Data:** https://github.com/weixin-liang/metashift Instructions to add a new dataset can be found [here](https://github.com/huggingface/datasets/blob/master/ADD_NEW_DATASET.md).
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/3813/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/3813/timeline
null
completed
false
https://api.github.com/repos/huggingface/datasets/issues/1715
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1715/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1715/comments
https://api.github.com/repos/huggingface/datasets/issues/1715/events
https://github.com/huggingface/datasets/pull/1715
782,754,441
MDExOlB1bGxSZXF1ZXN0NTUyMjM2NDA5
1,715
add Korean intonation-aided intention identification dataset
{ "avatar_url": "https://avatars.githubusercontent.com/u/59462357?v=4", "events_url": "https://api.github.com/users/stevhliu/events{/privacy}", "followers_url": "https://api.github.com/users/stevhliu/followers", "following_url": "https://api.github.com/users/stevhliu/following{/other_user}", "gists_url": "https://api.github.com/users/stevhliu/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/stevhliu", "id": 59462357, "login": "stevhliu", "node_id": "MDQ6VXNlcjU5NDYyMzU3", "organizations_url": "https://api.github.com/users/stevhliu/orgs", "received_events_url": "https://api.github.com/users/stevhliu/received_events", "repos_url": "https://api.github.com/users/stevhliu/repos", "site_admin": false, "starred_url": "https://api.github.com/users/stevhliu/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/stevhliu/subscriptions", "type": "User", "url": "https://api.github.com/users/stevhliu" }
[]
closed
false
null
[]
null
[]
2021-01-10T06:29:04Z
2021-09-17T16:54:13Z
2021-01-12T17:14:33Z
MEMBER
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/1715.diff", "html_url": "https://github.com/huggingface/datasets/pull/1715", "merged_at": "2021-01-12T17:14:32Z", "patch_url": "https://github.com/huggingface/datasets/pull/1715.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/1715" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/1715/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/1715/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/2281
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2281/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2281/comments
https://api.github.com/repos/huggingface/datasets/issues/2281/events
https://github.com/huggingface/datasets/pull/2281
870,792,784
MDExOlB1bGxSZXF1ZXN0NjI1OTI2MjAw
2,281
Update multi_woz_v22 checksum
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq" }
[]
closed
false
null
[]
null
[]
2021-04-29T09:09:11Z
2021-04-29T13:41:35Z
2021-04-29T13:41:34Z
MEMBER
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/2281.diff", "html_url": "https://github.com/huggingface/datasets/pull/2281", "merged_at": "2021-04-29T13:41:34Z", "patch_url": "https://github.com/huggingface/datasets/pull/2281.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/2281" }
Fix issue https://github.com/huggingface/datasets/issues/1876 The files were changed in https://github.com/budzianowski/multiwoz/pull/72
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/2281/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/2281/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/6081
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6081/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6081/comments
https://api.github.com/repos/huggingface/datasets/issues/6081/events
https://github.com/huggingface/datasets/pull/6081
1,824,486,278
PR_kwDODunzps5WjU0k
6,081
Deprecate `Dataset.export`
{ "avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4", "events_url": "https://api.github.com/users/mariosasko/events{/privacy}", "followers_url": "https://api.github.com/users/mariosasko/followers", "following_url": "https://api.github.com/users/mariosasko/following{/other_user}", "gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/mariosasko", "id": 47462742, "login": "mariosasko", "node_id": "MDQ6VXNlcjQ3NDYyNzQy", "organizations_url": "https://api.github.com/users/mariosasko/orgs", "received_events_url": "https://api.github.com/users/mariosasko/received_events", "repos_url": "https://api.github.com/users/mariosasko/repos", "site_admin": false, "starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions", "type": "User", "url": "https://api.github.com/users/mariosasko" }
[]
closed
false
null
[]
null
[ "_The documentation is not available anymore as the PR was closed or merged._", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | rea...
2023-07-27T14:22:18Z
2023-07-28T11:09:54Z
2023-07-28T11:01:04Z
CONTRIBUTOR
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/6081.diff", "html_url": "https://github.com/huggingface/datasets/pull/6081", "merged_at": "2023-07-28T11:01:04Z", "patch_url": "https://github.com/huggingface/datasets/pull/6081.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/6081" }
Deprecate `Dataset.export`, which generates a TFRecord file from a dataset, as this method is undocumented and its usage seems low. Users should use [TFRecordWriter](https://www.tensorflow.org/api_docs/python/tf/io/TFRecordWriter#write) or the official [TFRecord](https://www.tensorflow.org/tutorials/load_data/tfrecord) tutorial (on which this method is based) to write TFRecord files instead.
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/6081/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/6081/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/5694
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5694/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5694/comments
https://api.github.com/repos/huggingface/datasets/issues/5694/events
https://github.com/huggingface/datasets/issues/5694
1,650,467,793
I_kwDODunzps5iYCPR
5,694
Dataset configuration
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq" }
[ { "color": "c5def5", "default": false, "description": "Generic discussion on the library", "id": 2067400324, "name": "generic discussion", "node_id": "MDU6TGFiZWwyMDY3NDAwMzI0", "url": "https://api.github.com/repos/huggingface/datasets/labels/generic%20discussion" } ]
open
false
null
[]
null
[ "Originally we also though about adding it to the YAML part of the README.md:\r\n\r\n```yaml\r\nbuilder_config:\r\n data_dir: data\r\n data_files:\r\n - split: train\r\n pattern: \"train-[0-9][0-9][0-9][0-9]-of-[0-9][0-9][0-9][0-9][0-9]*.*\"\r\n```\r\n\r\nHaving it in the README.md could make it easier to mod...
2023-04-01T13:08:05Z
2023-04-04T14:54:37Z
null
MEMBER
null
null
null
Following discussions from https://github.com/huggingface/datasets/pull/5331, we could have something like `config.json` to define the configuration of a dataset. ```json { "data_dir": "data", "data_files": { "train": "train-[0-9][0-9][0-9][0-9]-of-[0-9][0-9][0-9][0-9][0-9]*.*" } } ``` We could also support a list for several configs with a 'config_name' field. The alternative was to use YAML in the README.md. I think it could also support a `dataset_type` field to specify which dataset builder class to use, and the other parameters would be the builder's parameters. Some parameters exist for all builders, like `data_files` and `data_dir`, but some parameters are builder-specific, like `sep` for csv. This format would be used in `push_to_hub` to be able to push multiple configs. cc @huggingface/datasets EDIT: actually we're going for the YAML approach in README.md
{ "+1": 2, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 2, "url": "https://api.github.com/repos/huggingface/datasets/issues/5694/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/5694/timeline
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/3097
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3097/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3097/comments
https://api.github.com/repos/huggingface/datasets/issues/3097/events
https://github.com/huggingface/datasets/issues/3097
1,027,750,811
I_kwDODunzps49Qjub
3,097
`ModuleNotFoundError: No module named 'fsspec.exceptions'`
{ "avatar_url": "https://avatars.githubusercontent.com/u/16107619?v=4", "events_url": "https://api.github.com/users/VictorSanh/events{/privacy}", "followers_url": "https://api.github.com/users/VictorSanh/followers", "following_url": "https://api.github.com/users/VictorSanh/following{/other_user}", "gists_url": "https://api.github.com/users/VictorSanh/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/VictorSanh", "id": 16107619, "login": "VictorSanh", "node_id": "MDQ6VXNlcjE2MTA3NjE5", "organizations_url": "https://api.github.com/users/VictorSanh/orgs", "received_events_url": "https://api.github.com/users/VictorSanh/received_events", "repos_url": "https://api.github.com/users/VictorSanh/repos", "site_admin": false, "starred_url": "https://api.github.com/users/VictorSanh/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/VictorSanh/subscriptions", "type": "User", "url": "https://api.github.com/users/VictorSanh" }
[ { "color": "d73a4a", "default": true, "description": "Something isn't working", "id": 1935892857, "name": "bug", "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug" } ]
closed
false
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" }
[ { "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/o...
null
[ "Thanks for reporting, @VictorSanh.\r\n\r\nI'm fixing it." ]
2021-10-15T19:34:38Z
2021-10-18T07:51:54Z
2021-10-18T07:51:54Z
MEMBER
null
null
null
## Describe the bug I keep running into an fsspec ModuleNotFoundError ## Steps to reproduce the bug ```python >>> from datasets import get_dataset_infos 2021-10-15 15:25:37.863206: W tensorflow/stream_executor/platform/default/dso_loader.cc:64] Could not load dynamic library 'libcudart.so.11.0'; dlerror: libcudart.so.11.0: cannot open shared object file: No such file or directory 2021-10-15 15:25:37.863252: I tensorflow/stream_executor/cuda/cudart_stub.cc:29] Ignore above cudart dlerror if you do not have a GPU set up on your machine. Traceback (most recent call last): File "<stdin>", line 1, in <module> File "/home/hf/dev/promptsource/.venv/lib/python3.7/site-packages/datasets/__init__.py", line 37, in <module> from .builder import ArrowBasedBuilder, BeamBasedBuilder, BuilderConfig, DatasetBuilder, GeneratorBasedBuilder File "/home/hf/dev/promptsource/.venv/lib/python3.7/site-packages/datasets/builder.py", line 56, in <module> from .utils.streaming_download_manager import StreamingDownloadManager File "/home/hf/dev/promptsource/.venv/lib/python3.7/site-packages/datasets/utils/streaming_download_manager.py", line 11, in <module> from fsspec.exceptions import FSTimeoutError ModuleNotFoundError: No module named 'fsspec.exceptions' ``` Yet, I do have `fsspec`: ```bash hf@victor-scale:~/dev/promptsource$ pip show fsspec Name: fsspec Version: 2021.5.0 Summary: File-system specification Home-page: http://github.com/intake/filesystem_spec Author: None Author-email: None License: BSD Location: /home/hf/dev/promptsource/.venv/lib/python3.7/site-packages Requires: Required-by: datasets ``` With the same version of fsspec and `datasets==1.9.0`, I don't see this problem. ## Environment info <!-- You can run the command `datasets-cli env` and copy-and-paste its output below. --> I can't even run `datasets-cli env`, but here's my env: - `datasets` version: 1.13.3 - Platform: Ubuntu 18.04 - Python version: 3.7.10 - PyArrow version: 3.0.0
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/3097/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/3097/timeline
null
completed
false
https://api.github.com/repos/huggingface/datasets/issues/4189
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4189/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4189/comments
https://api.github.com/repos/huggingface/datasets/issues/4189/events
https://github.com/huggingface/datasets/pull/4189
1,209,881,351
PR_kwDODunzps42gGv5
4,189
Document how to use FAISS index for special operations
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" }
[]
closed
false
null
[]
null
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
2022-04-20T15:51:56Z
2022-05-06T08:43:10Z
2022-05-06T08:35:52Z
MEMBER
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/4189.diff", "html_url": "https://github.com/huggingface/datasets/pull/4189", "merged_at": "2022-05-06T08:35:52Z", "patch_url": "https://github.com/huggingface/datasets/pull/4189.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/4189" }
Document how to use FAISS index for special operations, by accessing the index itself. Close #4029.
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/4189/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/4189/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/1403
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1403/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1403/comments
https://api.github.com/repos/huggingface/datasets/issues/1403/events
https://github.com/huggingface/datasets/pull/1403
760,571,419
MDExOlB1bGxSZXF1ZXN0NTM1MzgxMzQ3
1,403
Add dataset clickbait_news_bg
{ "avatar_url": "https://avatars.githubusercontent.com/u/1083319?v=4", "events_url": "https://api.github.com/users/tsvm/events{/privacy}", "followers_url": "https://api.github.com/users/tsvm/followers", "following_url": "https://api.github.com/users/tsvm/following{/other_user}", "gists_url": "https://api.github.com/users/tsvm/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/tsvm", "id": 1083319, "login": "tsvm", "node_id": "MDQ6VXNlcjEwODMzMTk=", "organizations_url": "https://api.github.com/users/tsvm/orgs", "received_events_url": "https://api.github.com/users/tsvm/received_events", "repos_url": "https://api.github.com/users/tsvm/repos", "site_admin": false, "starred_url": "https://api.github.com/users/tsvm/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/tsvm/subscriptions", "type": "User", "url": "https://api.github.com/users/tsvm" }
[]
closed
false
null
[]
null
[ "Closing this pull request, will submit a new one for this dataset." ]
2020-12-09T18:32:12Z
2020-12-10T09:16:44Z
2020-12-10T09:16:43Z
CONTRIBUTOR
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/1403.diff", "html_url": "https://github.com/huggingface/datasets/pull/1403", "merged_at": null, "patch_url": "https://github.com/huggingface/datasets/pull/1403.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/1403" }
Adding a new dataset - clickbait_news_bg
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/1403/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/1403/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/2910
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2910/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2910/comments
https://api.github.com/repos/huggingface/datasets/issues/2910/events
https://github.com/huggingface/datasets/pull/2910
996,149,632
PR_kwDODunzps4rvL9N
2,910
feat: 🎸 pass additional arguments to get private configs + info
{ "avatar_url": "https://avatars.githubusercontent.com/u/1676121?v=4", "events_url": "https://api.github.com/users/severo/events{/privacy}", "followers_url": "https://api.github.com/users/severo/followers", "following_url": "https://api.github.com/users/severo/following{/other_user}", "gists_url": "https://api.github.com/users/severo/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/severo", "id": 1676121, "login": "severo", "node_id": "MDQ6VXNlcjE2NzYxMjE=", "organizations_url": "https://api.github.com/users/severo/orgs", "received_events_url": "https://api.github.com/users/severo/received_events", "repos_url": "https://api.github.com/users/severo/repos", "site_admin": false, "starred_url": "https://api.github.com/users/severo/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/severo/subscriptions", "type": "User", "url": "https://api.github.com/users/severo" }
[]
closed
false
null
[]
null
[ "Included in https://github.com/huggingface/datasets/pull/2906" ]
2021-09-14T15:24:19Z
2021-09-15T16:19:09Z
2021-09-15T16:19:06Z
CONTRIBUTOR
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/2910.diff", "html_url": "https://github.com/huggingface/datasets/pull/2910", "merged_at": null, "patch_url": "https://github.com/huggingface/datasets/pull/2910.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/2910" }
`use_auth_token` can now be passed to the functions to get the configs or infos of private datasets on the hub
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/2910/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/2910/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/2409
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2409/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2409/comments
https://api.github.com/repos/huggingface/datasets/issues/2409/events
https://github.com/huggingface/datasets/pull/2409
903,441,398
MDExOlB1bGxSZXF1ZXN0NjU0Njk3NjA0
2,409
Add HF_ prefix to env var MAX_IN_MEMORY_DATASET_SIZE_IN_BYTES
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq" }
[]
closed
false
null
[]
null
[ "I thought the renaming was suggested only for the env var, and not for the config variable... As you think is better! ;)", "I think it's better if they match, so that users understand directly that they're directly connected", "Well, if you're not concerned about back-compat here, perhaps it could be renamed a...
2021-05-27T09:07:00Z
2021-06-08T16:00:55Z
2021-05-27T09:33:41Z
MEMBER
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/2409.diff", "html_url": "https://github.com/huggingface/datasets/pull/2409", "merged_at": "2021-05-27T09:33:41Z", "patch_url": "https://github.com/huggingface/datasets/pull/2409.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/2409" }
As mentioned in https://github.com/huggingface/datasets/pull/2399 the env var should be prefixed by HF_
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/2409/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/2409/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/4994
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4994/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4994/comments
https://api.github.com/repos/huggingface/datasets/issues/4994/events
https://github.com/huggingface/datasets/issues/4994
1,379,084,015
I_kwDODunzps5SMybv
4,994
delete the hardcoded license list in `datasets`
{ "avatar_url": "https://avatars.githubusercontent.com/u/326577?v=4", "events_url": "https://api.github.com/users/julien-c/events{/privacy}", "followers_url": "https://api.github.com/users/julien-c/followers", "following_url": "https://api.github.com/users/julien-c/following{/other_user}", "gists_url": "https://api.github.com/users/julien-c/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/julien-c", "id": 326577, "login": "julien-c", "node_id": "MDQ6VXNlcjMyNjU3Nw==", "organizations_url": "https://api.github.com/users/julien-c/orgs", "received_events_url": "https://api.github.com/users/julien-c/received_events", "repos_url": "https://api.github.com/users/julien-c/repos", "site_admin": false, "starred_url": "https://api.github.com/users/julien-c/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/julien-c/subscriptions", "type": "User", "url": "https://api.github.com/users/julien-c" }
[]
closed
false
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" }
[ { "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/o...
null
[]
2022-09-20T09:14:41Z
2022-09-22T11:45:47Z
2022-09-22T11:45:47Z
MEMBER
null
null
null
> Feel free to delete the license list in `datasets` [...] > > Also FYI in #4926 I also removed all the validation steps anyway (language, license, types etc.) _Originally posted by @lhoestq in https://github.com/huggingface/datasets/issues/4930#issuecomment-1238401662_ > [...], in my opinion we can just delete this file from `datasets`, the validation is happening hub-side anyways now? _Originally posted by @julien-c in https://github.com/huggingface/datasets/issues/4930#issuecomment-1238390659_
{ "+1": 2, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 2, "url": "https://api.github.com/repos/huggingface/datasets/issues/4994/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/4994/timeline
null
completed
false
https://api.github.com/repos/huggingface/datasets/issues/5113
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5113/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5113/comments
https://api.github.com/repos/huggingface/datasets/issues/5113/events
https://github.com/huggingface/datasets/pull/5113
1,409,207,607
PR_kwDODunzps5Az0Ei
5,113
Fix filter indices when batched
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" }
[]
closed
false
null
[]
null
[ "_The documentation is not available anymore as the PR was closed or merged._", "I think a patch release will be necessary.", "I'm also fixing https://github.com/huggingface/datasets/issues/5111 which will lalso require a patch release" ]
2022-10-14T11:30:03Z
2022-10-24T06:21:09Z
2022-10-14T12:11:44Z
MEMBER
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/5113.diff", "html_url": "https://github.com/huggingface/datasets/pull/5113", "merged_at": "2022-10-14T12:11:44Z", "patch_url": "https://github.com/huggingface/datasets/pull/5113.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/5113" }
This PR fixes a bug introduced by: - #5030 Fix #5112.
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 1, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 1, "url": "https://api.github.com/repos/huggingface/datasets/issues/5113/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/5113/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/2629
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2629/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2629/comments
https://api.github.com/repos/huggingface/datasets/issues/2629/events
https://github.com/huggingface/datasets/issues/2629
941,819,205
MDU6SXNzdWU5NDE4MTkyMDU=
2,629
Load datasets from the Hub without requiring a dataset script
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq" }
[]
closed
false
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq" }
[ { "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists...
null
[ "This is so cool, let us know if we can help with anything on the hub side (@Pierrci @elishowk) 🎉 " ]
2021-07-12T08:45:17Z
2021-08-25T14:18:08Z
2021-08-25T14:18:08Z
MEMBER
null
null
null
As a user I would like to be able to upload my csv/json/text/parquet/etc. files in a dataset repository on the Hugging Face Hub and be able to load this dataset with `load_dataset` without having to implement a dataset script. Moreover I would like to be able to specify which file goes into which split using the `data_files` argument. This feature should be compatible with private repositories and dataset streaming. This can be implemented by checking the extension of the files in the dataset repository and then by using the right dataset builder that is already packaged in the library (csv/json/text/parquet/etc.)
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 7, "hooray": 2, "laugh": 0, "rocket": 2, "total_count": 11, "url": "https://api.github.com/repos/huggingface/datasets/issues/2629/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/2629/timeline
null
completed
false
https://api.github.com/repos/huggingface/datasets/issues/3203
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3203/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3203/comments
https://api.github.com/repos/huggingface/datasets/issues/3203/events
https://github.com/huggingface/datasets/pull/3203
1,043,552,766
PR_kwDODunzps4uCNoT
3,203
Updated: DaNE - updated URL for download
{ "avatar_url": "https://avatars.githubusercontent.com/u/47593213?v=4", "events_url": "https://api.github.com/users/MalteHB/events{/privacy}", "followers_url": "https://api.github.com/users/MalteHB/followers", "following_url": "https://api.github.com/users/MalteHB/following{/other_user}", "gists_url": "https://api.github.com/users/MalteHB/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/MalteHB", "id": 47593213, "login": "MalteHB", "node_id": "MDQ6VXNlcjQ3NTkzMjEz", "organizations_url": "https://api.github.com/users/MalteHB/orgs", "received_events_url": "https://api.github.com/users/MalteHB/received_events", "repos_url": "https://api.github.com/users/MalteHB/repos", "site_admin": false, "starred_url": "https://api.github.com/users/MalteHB/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/MalteHB/subscriptions", "type": "User", "url": "https://api.github.com/users/MalteHB" }
[]
closed
false
null
[]
null
[ "Actually it looks like the old URL is still working, and it's also the one that is mentioned in https://github.com/alexandrainst/danlp/blob/master/docs/docs/datasets.md\r\n\r\nWhat makes you think we should use the new URL ?", "@lhoestq Sorry! I might have jumped to conclusions a bit too fast here... \r\n\r\nI w...
2021-11-03T12:55:13Z
2021-11-04T13:14:36Z
2021-11-04T11:46:43Z
CONTRIBUTOR
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/3203.diff", "html_url": "https://github.com/huggingface/datasets/pull/3203", "merged_at": "2021-11-04T11:46:43Z", "patch_url": "https://github.com/huggingface/datasets/pull/3203.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/3203" }
It seems that DaNLP has updated their download URLs, so they also need to be updated here...
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/3203/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/3203/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/5721
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5721/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5721/comments
https://api.github.com/repos/huggingface/datasets/issues/5721/events
https://github.com/huggingface/datasets/issues/5721
1,659,680,682
I_kwDODunzps5i7Leq
5,721
Calling datasets.load_dataset("text" ...) results in a wrong split.
{ "avatar_url": "https://avatars.githubusercontent.com/u/1841186?v=4", "events_url": "https://api.github.com/users/cyrilzakka/events{/privacy}", "followers_url": "https://api.github.com/users/cyrilzakka/followers", "following_url": "https://api.github.com/users/cyrilzakka/following{/other_user}", "gists_url": "https://api.github.com/users/cyrilzakka/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/cyrilzakka", "id": 1841186, "login": "cyrilzakka", "node_id": "MDQ6VXNlcjE4NDExODY=", "organizations_url": "https://api.github.com/users/cyrilzakka/orgs", "received_events_url": "https://api.github.com/users/cyrilzakka/received_events", "repos_url": "https://api.github.com/users/cyrilzakka/repos", "site_admin": false, "starred_url": "https://api.github.com/users/cyrilzakka/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/cyrilzakka/subscriptions", "type": "User", "url": "https://api.github.com/users/cyrilzakka" }
[]
open
false
null
[]
null
[]
2023-04-08T23:55:12Z
2023-04-08T23:55:12Z
null
NONE
null
null
null
### Describe the bug When creating a text dataset, the training split should have the bulk of the examples by default. Currently, the test split does. ### Steps to reproduce the bug I have a folder with 18K text files in it. Each text file essentially consists of a document or article scraped from the web. Calling the following code: ``` import datasets folder_path = "/home/cyril/Downloads/llama_dataset" data = datasets.load_dataset("text", data_dir=folder_path) data.save_to_disk("/home/cyril/Downloads/data.hf") data = datasets.load_from_disk("/home/cyril/Downloads/data.hf") print(data) ``` results in the following split: ``` DatasetDict({ train: Dataset({ features: ['text'], num_rows: 2114 }) test: Dataset({ features: ['text'], num_rows: 200882 }) validation: Dataset({ features: ['text'], num_rows: 152 }) }) ``` It seems to me like the train/test/validation splits are in the wrong order, since the test split >>>> the train split. ### Expected behavior The train split should have the bulk of the training examples. ### Environment info datasets 2.11.0, python 3.10.6
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/5721/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/5721/timeline
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/6036
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6036/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6036/comments
https://api.github.com/repos/huggingface/datasets/issues/6036/events
https://github.com/huggingface/datasets/pull/6036
1,805,138,898
PR_kwDODunzps5ViKc4
6,036
Deprecate search API
{ "avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4", "events_url": "https://api.github.com/users/mariosasko/events{/privacy}", "followers_url": "https://api.github.com/users/mariosasko/followers", "following_url": "https://api.github.com/users/mariosasko/following{/other_user}", "gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/mariosasko", "id": 47462742, "login": "mariosasko", "node_id": "MDQ6VXNlcjQ3NDYyNzQy", "organizations_url": "https://api.github.com/users/mariosasko/orgs", "received_events_url": "https://api.github.com/users/mariosasko/received_events", "repos_url": "https://api.github.com/users/mariosasko/repos", "site_admin": false, "starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions", "type": "User", "url": "https://api.github.com/users/mariosasko" }
[]
open
false
null
[]
null
[ "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_a...
2023-07-14T16:22:09Z
2023-09-07T16:44:32Z
null
CONTRIBUTOR
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/6036.diff", "html_url": "https://github.com/huggingface/datasets/pull/6036", "merged_at": null, "patch_url": "https://github.com/huggingface/datasets/pull/6036.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/6036" }
The Search API only supports Faiss and ElasticSearch as vector stores, is somewhat difficult to maintain (e.g., it still doesn't support ElasticSearch 8.0, testing is difficult, ...), does not have the best design (adds a bunch of methods to the `Dataset` class that are only useful after creating an index), its usage doesn't seem to be significant, and it is not integrated with the Hub. Since we have no plans/bandwidth to improve it and better alternatives such as `langchain` and `docarray` exist, I think it should be deprecated (and eventually removed). If we decide to deprecate/remove it, the following usage instances need to be addressed: * [Course](https://github.com/huggingface/course/blob/0018bb434204d9750a03592cb0d4e846093218d8/chapters/en/chapter5/6.mdx#L342) and [Blog](https://github.com/huggingface/blog/blob/4897c6f73d4492a0955ade503281711d01840e09/image-search-datasets.md?plain=1#L252) - calling the FAISS API directly should be OK in these instances as it's pretty simple to use for basic scenarios. Alternatively, we can use `langchain`, but this adds an extra dependency * [Transformers](https://github.com/huggingface/transformers/blob/50726f9ea7afc6113da617f8f4ca1ab264a5e28a/src/transformers/models/rag/retrieval_rag.py#L183) - we can use the FAISS API directly and store the index as a separate attribute (and instead of building the `wiki_dpr` index each time the dataset is generated, we can generate it once, push it to the Hub repo, and then read it from there) cc @huggingface/datasets @LysandreJik for the opinion
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/6036/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/6036/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/1806
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1806/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1806/comments
https://api.github.com/repos/huggingface/datasets/issues/1806/events
https://github.com/huggingface/datasets/pull/1806
798,607,869
MDExOlB1bGxSZXF1ZXN0NTY1Mzk0ODIz
1,806
Update details to MLSUM dataset
{ "avatar_url": "https://avatars.githubusercontent.com/u/15138872?v=4", "events_url": "https://api.github.com/users/padipadou/events{/privacy}", "followers_url": "https://api.github.com/users/padipadou/followers", "following_url": "https://api.github.com/users/padipadou/following{/other_user}", "gists_url": "https://api.github.com/users/padipadou/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/padipadou", "id": 15138872, "login": "padipadou", "node_id": "MDQ6VXNlcjE1MTM4ODcy", "organizations_url": "https://api.github.com/users/padipadou/orgs", "received_events_url": "https://api.github.com/users/padipadou/received_events", "repos_url": "https://api.github.com/users/padipadou/repos", "site_admin": false, "starred_url": "https://api.github.com/users/padipadou/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/padipadou/subscriptions", "type": "User", "url": "https://api.github.com/users/padipadou" }
[]
closed
false
null
[]
null
[ "Thanks!" ]
2021-02-01T18:35:12Z
2021-02-01T18:46:28Z
2021-02-01T18:46:21Z
CONTRIBUTOR
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/1806.diff", "html_url": "https://github.com/huggingface/datasets/pull/1806", "merged_at": "2021-02-01T18:46:21Z", "patch_url": "https://github.com/huggingface/datasets/pull/1806.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/1806" }
Update details to MLSUM dataset
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/1806/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/1806/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/1522
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1522/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1522/comments
https://api.github.com/repos/huggingface/datasets/issues/1522/events
https://github.com/huggingface/datasets/pull/1522
764,341,594
MDExOlB1bGxSZXF1ZXN0NTM4NDUzNjg4
1,522
Add semeval 2020 task 11
{ "avatar_url": "https://avatars.githubusercontent.com/u/7950786?v=4", "events_url": "https://api.github.com/users/ZacharySBrown/events{/privacy}", "followers_url": "https://api.github.com/users/ZacharySBrown/followers", "following_url": "https://api.github.com/users/ZacharySBrown/following{/other_user}", "gists_url": "https://api.github.com/users/ZacharySBrown/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/ZacharySBrown", "id": 7950786, "login": "ZacharySBrown", "node_id": "MDQ6VXNlcjc5NTA3ODY=", "organizations_url": "https://api.github.com/users/ZacharySBrown/orgs", "received_events_url": "https://api.github.com/users/ZacharySBrown/received_events", "repos_url": "https://api.github.com/users/ZacharySBrown/repos", "site_admin": false, "starred_url": "https://api.github.com/users/ZacharySBrown/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ZacharySBrown/subscriptions", "type": "User", "url": "https://api.github.com/users/ZacharySBrown" }
[]
closed
false
null
[]
null
[ "@SBrandeis : Thanks for the feedback! Just updated to use context manager for the `open`s and removed the placeholder text from the `README`!", "Great, thanks @ZacharySBrown !\r\nFailing tests seem to be unrelated to your changes, merging the current master branch into yours should fix them.\r\n" ]
2020-12-12T20:32:14Z
2020-12-15T16:48:52Z
2020-12-15T16:48:52Z
CONTRIBUTOR
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/1522.diff", "html_url": "https://github.com/huggingface/datasets/pull/1522", "merged_at": "2020-12-15T16:48:52Z", "patch_url": "https://github.com/huggingface/datasets/pull/1522.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/1522" }
Adding in propaganda detection task (task 11) from Sem Eval 2020
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/1522/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/1522/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/1736
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1736/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1736/comments
https://api.github.com/repos/huggingface/datasets/issues/1736/events
https://github.com/huggingface/datasets/pull/1736
785,433,854
MDExOlB1bGxSZXF1ZXN0NTU0NDYyNjYw
1,736
Adjust BrWaC dataset feature names
{ "avatar_url": "https://avatars.githubusercontent.com/u/5097052?v=4", "events_url": "https://api.github.com/users/jonatasgrosman/events{/privacy}", "followers_url": "https://api.github.com/users/jonatasgrosman/followers", "following_url": "https://api.github.com/users/jonatasgrosman/following{/other_user}", "gists_url": "https://api.github.com/users/jonatasgrosman/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/jonatasgrosman", "id": 5097052, "login": "jonatasgrosman", "node_id": "MDQ6VXNlcjUwOTcwNTI=", "organizations_url": "https://api.github.com/users/jonatasgrosman/orgs", "received_events_url": "https://api.github.com/users/jonatasgrosman/received_events", "repos_url": "https://api.github.com/users/jonatasgrosman/repos", "site_admin": false, "starred_url": "https://api.github.com/users/jonatasgrosman/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/jonatasgrosman/subscriptions", "type": "User", "url": "https://api.github.com/users/jonatasgrosman" }
[]
closed
false
null
[]
null
[]
2021-01-13T20:39:04Z
2021-01-14T10:29:38Z
2021-01-14T10:29:38Z
CONTRIBUTOR
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/1736.diff", "html_url": "https://github.com/huggingface/datasets/pull/1736", "merged_at": "2021-01-14T10:29:38Z", "patch_url": "https://github.com/huggingface/datasets/pull/1736.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/1736" }
I added this dataset some days ago, and today I used it to train some models and realized that the names of the features aren't so good. Looking at the current features hierarchy, we have "paragraphs" with a list of "sentences" with a list of "sentences?!". But the actual hierarchy is a "text" with a list of "paragraphs" with a list of "sentences". I confused myself trying to use the dataset with these names. So I think it's better to change it.
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/1736/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/1736/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/1551
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1551/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1551/comments
https://api.github.com/repos/huggingface/datasets/issues/1551/events
https://github.com/huggingface/datasets/pull/1551
765,621,879
MDExOlB1bGxSZXF1ZXN0NTM5MDEwNDAy
1,551
Monero
{ "avatar_url": "https://avatars.githubusercontent.com/u/2815308?v=4", "events_url": "https://api.github.com/users/iliemihai/events{/privacy}", "followers_url": "https://api.github.com/users/iliemihai/followers", "following_url": "https://api.github.com/users/iliemihai/following{/other_user}", "gists_url": "https://api.github.com/users/iliemihai/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/iliemihai", "id": 2815308, "login": "iliemihai", "node_id": "MDQ6VXNlcjI4MTUzMDg=", "organizations_url": "https://api.github.com/users/iliemihai/orgs", "received_events_url": "https://api.github.com/users/iliemihai/received_events", "repos_url": "https://api.github.com/users/iliemihai/repos", "site_admin": false, "starred_url": "https://api.github.com/users/iliemihai/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/iliemihai/subscriptions", "type": "User", "url": "https://api.github.com/users/iliemihai" }
[ { "color": "0e8a16", "default": false, "description": "Contribution to a dataset script", "id": 4564477500, "name": "dataset contribution", "node_id": "LA_kwDODunzps8AAAABEBBmPA", "url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20contribution" } ]
closed
false
null
[]
null
[ "Hi @iliemihai - you need to add the Readme file! Otherwise seems good. \r\nAlso don't forget to run `make style` & `flake8 datasets` locally, from the datasets folder", "@skyprince999 I will add the README.d for it. Thank you :D ", "Thanks for your contribution, @iliemihai. Are you still interested in adding ...
2020-12-13T19:56:48Z
2022-10-03T09:38:35Z
2022-10-03T09:38:35Z
CONTRIBUTOR
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/1551.diff", "html_url": "https://github.com/huggingface/datasets/pull/1551", "merged_at": null, "patch_url": "https://github.com/huggingface/datasets/pull/1551.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/1551" }
Biomedical Romanian dataset :)
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/1551/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/1551/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/3966
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3966/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3966/comments
https://api.github.com/repos/huggingface/datasets/issues/3966/events
https://github.com/huggingface/datasets/pull/3966
1,173,883,084
PR_kwDODunzps40rBNE
3,966
Create metric card for BERTScore
{ "avatar_url": "https://avatars.githubusercontent.com/u/14205986?v=4", "events_url": "https://api.github.com/users/sashavor/events{/privacy}", "followers_url": "https://api.github.com/users/sashavor/followers", "following_url": "https://api.github.com/users/sashavor/following{/other_user}", "gists_url": "https://api.github.com/users/sashavor/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/sashavor", "id": 14205986, "login": "sashavor", "node_id": "MDQ6VXNlcjE0MjA1OTg2", "organizations_url": "https://api.github.com/users/sashavor/orgs", "received_events_url": "https://api.github.com/users/sashavor/received_events", "repos_url": "https://api.github.com/users/sashavor/repos", "site_admin": false, "starred_url": "https://api.github.com/users/sashavor/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sashavor/subscriptions", "type": "User", "url": "https://api.github.com/users/sashavor" }
[]
closed
false
null
[]
null
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
2022-03-18T18:21:56Z
2022-03-22T13:35:28Z
2022-03-22T13:30:56Z
NONE
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/3966.diff", "html_url": "https://github.com/huggingface/datasets/pull/3966", "merged_at": "2022-03-22T13:30:56Z", "patch_url": "https://github.com/huggingface/datasets/pull/3966.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/3966" }
Proposing a metric card for BERTScore
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/3966/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/3966/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/4653
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4653/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4653/comments
https://api.github.com/repos/huggingface/datasets/issues/4653/events
https://github.com/huggingface/datasets/issues/4653
1,296,702,834
I_kwDODunzps5NSh1y
4,653
Add Altlex dataset
{ "avatar_url": "https://avatars.githubusercontent.com/u/4755430?v=4", "events_url": "https://api.github.com/users/omarespejel/events{/privacy}", "followers_url": "https://api.github.com/users/omarespejel/followers", "following_url": "https://api.github.com/users/omarespejel/following{/other_user}", "gists_url": "https://api.github.com/users/omarespejel/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/omarespejel", "id": 4755430, "login": "omarespejel", "node_id": "MDQ6VXNlcjQ3NTU0MzA=", "organizations_url": "https://api.github.com/users/omarespejel/orgs", "received_events_url": "https://api.github.com/users/omarespejel/received_events", "repos_url": "https://api.github.com/users/omarespejel/repos", "site_admin": false, "starred_url": "https://api.github.com/users/omarespejel/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/omarespejel/subscriptions", "type": "User", "url": "https://api.github.com/users/omarespejel" }
[ { "color": "e99695", "default": false, "description": "Requesting to add a new dataset", "id": 2067376369, "name": "dataset request", "node_id": "MDU6TGFiZWwyMDY3Mzc2MzY5", "url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20request" } ]
closed
false
null
[]
null
[ "uploaded dataset [here](https://huggingface.co/datasets/embedding-data/altlex)." ]
2022-07-07T02:23:02Z
2022-07-14T02:12:39Z
2022-07-14T02:12:39Z
NONE
null
null
null
## Adding a Dataset - **Name:** *Altlex* - **Description:** *Git repository for software associated with the 2016 ACL paper "Identifying Causal Relations Using Parallel Wikipedia Articles."* - **Paper:** *https://aclanthology.org/P16-1135.pdf* - **Data:** *https://huggingface.co/datasets/sentence-transformers/embedding-training-data/resolve/main/altlex.jsonl.gz* - **Motivation:** *Dataset for training and evaluating models that identify causal relations*
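For illustration only: while a dedicated dataset script is pending, the linked `altlex.jsonl.gz` file can already be loaded with the generic JSON builder. The URL is taken from the request above; everything else is a sketch.

```python
from datasets import load_dataset

# Load the gzipped JSON Lines file directly; load_dataset resolves the
# remote URL and decompresses the .gz transparently.
ds = load_dataset(
    "json",
    data_files="https://huggingface.co/datasets/sentence-transformers/embedding-training-data/resolve/main/altlex.jsonl.gz",
    split="train",
)
print(ds[0])
```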
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/4653/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/4653/timeline
null
completed
false
https://api.github.com/repos/huggingface/datasets/issues/2631
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2631/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2631/comments
https://api.github.com/repos/huggingface/datasets/issues/2631/events
https://github.com/huggingface/datasets/pull/2631
942,242,271
MDExOlB1bGxSZXF1ZXN0Njg3OTk3MzM2
2,631
Delete extracted files when loading dataset
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" }
[]
closed
false
null
[]
null
[ "Sure @stas00, it is still a draft pull request. :)", "Yes, I noticed it after reviewing - my apologies.", "The problem with this approach is that it also deletes the downloaded files (if they need not be extracted). 😟 ", "> The problem with this approach is that it also deletes the downloaded files (if they...
2021-07-12T16:39:33Z
2021-07-19T09:08:19Z
2021-07-19T09:08:19Z
MEMBER
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/2631.diff", "html_url": "https://github.com/huggingface/datasets/pull/2631", "merged_at": "2021-07-19T09:08:18Z", "patch_url": "https://github.com/huggingface/datasets/pull/2631.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/2631" }
Close #2481, close #2604, close #2591. cc: @stas00, @thomwolf, @BirgerMoell
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/2631/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/2631/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/2271
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2271/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2271/comments
https://api.github.com/repos/huggingface/datasets/issues/2271/events
https://github.com/huggingface/datasets/issues/2271
869,002,141
MDU6SXNzdWU4NjkwMDIxNDE=
2,271
Synchronize table metadata with features
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" }
[ { "color": "a2eeef", "default": true, "description": "New feature or request", "id": 1935892871, "name": "enhancement", "node_id": "MDU6TGFiZWwxOTM1ODkyODcx", "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement" } ]
closed
false
null
[]
null
[ "See PR #2274 " ]
2021-04-27T15:55:13Z
2022-06-01T17:13:21Z
2022-06-01T17:13:21Z
MEMBER
null
null
null
**Is your feature request related to a problem? Please describe.**
As pointed out in this [comment](https://github.com/huggingface/datasets/pull/2145#discussion_r621326767):
> Metadata stored in the schema is just redundant information regarding the feature types. It is used when calling `Dataset.from_file` to know which feature types to use. These metadata are stored in the schema of the pyarrow table by using `update_metadata_with_features`. However, this is something that's almost never tested properly.

**Describe the solution you'd like**
We should find a way to always make sure that the metadata (in `self.data.schema.metadata`) stay synced with the actual feature types (in `self.info.features`).
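A minimal sketch of the invariant this request asks the library to maintain, written as an external check rather than the library's own test. It assumes the `b"huggingface"` metadata key and the `{"info": {"features": ...}}` layout that `update_metadata_with_features` writes at the time of this issue:

```python
import json

from datasets import Features, load_dataset

# External sanity check: the features serialized into the Arrow schema
# metadata should round-trip to the same Features object as dataset.info.features.
dataset = load_dataset("glue", "mrpc", split="train")
raw = dataset.data.schema.metadata[b"huggingface"].decode("utf-8")
stored = Features.from_dict(json.loads(raw)["info"]["features"])
assert stored == dataset.info.features, "schema metadata is out of sync with features"
```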
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/2271/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/2271/timeline
null
completed
false
https://api.github.com/repos/huggingface/datasets/issues/3687
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3687/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3687/comments
https://api.github.com/repos/huggingface/datasets/issues/3687/events
https://github.com/huggingface/datasets/issues/3687
1,127,154,766
I_kwDODunzps5DLwRO
3,687
Can't get the text data when calling to_tf_dataset
{ "avatar_url": "https://avatars.githubusercontent.com/u/82086367?v=4", "events_url": "https://api.github.com/users/phrasenmaeher/events{/privacy}", "followers_url": "https://api.github.com/users/phrasenmaeher/followers", "following_url": "https://api.github.com/users/phrasenmaeher/following{/other_user}", "gists_url": "https://api.github.com/users/phrasenmaeher/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/phrasenmaeher", "id": 82086367, "login": "phrasenmaeher", "node_id": "MDQ6VXNlcjgyMDg2MzY3", "organizations_url": "https://api.github.com/users/phrasenmaeher/orgs", "received_events_url": "https://api.github.com/users/phrasenmaeher/received_events", "repos_url": "https://api.github.com/users/phrasenmaeher/repos", "site_admin": false, "starred_url": "https://api.github.com/users/phrasenmaeher/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/phrasenmaeher/subscriptions", "type": "User", "url": "https://api.github.com/users/phrasenmaeher" }
[]
closed
false
{ "avatar_url": "https://avatars.githubusercontent.com/u/12866554?v=4", "events_url": "https://api.github.com/users/Rocketknight1/events{/privacy}", "followers_url": "https://api.github.com/users/Rocketknight1/followers", "following_url": "https://api.github.com/users/Rocketknight1/following{/other_user}", "gists_url": "https://api.github.com/users/Rocketknight1/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/Rocketknight1", "id": 12866554, "login": "Rocketknight1", "node_id": "MDQ6VXNlcjEyODY2NTU0", "organizations_url": "https://api.github.com/users/Rocketknight1/orgs", "received_events_url": "https://api.github.com/users/Rocketknight1/received_events", "repos_url": "https://api.github.com/users/Rocketknight1/repos", "site_admin": false, "starred_url": "https://api.github.com/users/Rocketknight1/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Rocketknight1/subscriptions", "type": "User", "url": "https://api.github.com/users/Rocketknight1" }
[ { "avatar_url": "https://avatars.githubusercontent.com/u/12866554?v=4", "events_url": "https://api.github.com/users/Rocketknight1/events{/privacy}", "followers_url": "https://api.github.com/users/Rocketknight1/followers", "following_url": "https://api.github.com/users/Rocketknight1/following{/other_...
null
[ "cc @Rocketknight1 ", "You are correct that `to_tf_dataset` only handles numerical columns right now, yes, though this is a limitation we might remove in future! The main reason we do this is that our models mostly do not include the tokenizer as a model layer, because it's very difficult to compile some of them ...
2022-02-08T11:52:10Z
2023-01-19T14:55:18Z
2023-01-19T14:55:18Z
NONE
null
null
null
I am working with the SST2 dataset and am using TensorFlow 2.5. I'd like to convert it to a `tf.data.Dataset` by calling the `to_tf_dataset` method. The following snippet is what I am using to achieve this:

```python
from datasets import load_dataset
from transformers import DefaultDataCollator

data_collator = DefaultDataCollator(return_tensors="tf")
dataset = load_dataset("sst")
train_dataset = dataset["train"].to_tf_dataset(columns=['sentence'], label_cols="label", shuffle=True, batch_size=8, collate_fn=data_collator)
```

However, this only gets me the labels; the text, the most important part, is missing:

```python
for s in train_dataset.take(1):
    print(s)  # prints something like: ({}, <tf.Tensor: shape=(8,), ...>)
```

As you can see, it only returns the label part, not the data, as indicated by the empty dictionary `{}`. So far, I've played with various settings of the method arguments, but to no avail; I do not want to perform any text processing at this time. On my quest to achieve what I want (a `tf.data.Dataset`), I've consulted these resources:
[https://www.philschmid.de/huggingface-transformers-keras-tf](https://www.philschmid.de/huggingface-transformers-keras-tf)
[https://huggingface.co/docs/datasets/use_dataset.html?highlight=tensorflow](https://huggingface.co/docs/datasets/use_dataset.html?highlight=tensorflow)
I was surprised not to find more extensive examples of how to transform a Hugging Face dataset into one compatible with TensorFlow. If you could point me to where I am going wrong, please do so. Thanks in advance for your support.

---
Edit: In the [docs](https://huggingface.co/docs/datasets/package_reference/main_classes.html#datasets.Dataset.to_tf_dataset), I found the following description: _In general, only columns that the model can use as input should be included here (numeric data only)._ Does this imply that no textual, i.e., `string`, data can be loaded?
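For readers hitting the same wall: a sketch of the usual workaround, which tokenizes the text column first so that `to_tf_dataset` only has to forward numerical columns. The DistilBERT checkpoint and the max length are arbitrary choices for illustration, not something prescribed by the issue.

```python
from datasets import load_dataset
from transformers import AutoTokenizer, DefaultDataCollator

# Tokenize the string column up front; to_tf_dataset can then yield the
# resulting numerical columns (input_ids, attention_mask) instead of raw text.
tokenizer = AutoTokenizer.from_pretrained("distilbert-base-uncased")
dataset = load_dataset("sst")

def tokenize(batch):
    return tokenizer(batch["sentence"], truncation=True, padding="max_length", max_length=128)

dataset = dataset.map(tokenize, batched=True)
data_collator = DefaultDataCollator(return_tensors="tf")
train_dataset = dataset["train"].to_tf_dataset(
    columns=["input_ids", "attention_mask"],
    label_cols="label",
    shuffle=True,
    batch_size=8,
    collate_fn=data_collator,
)
```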
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/3687/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/3687/timeline
null
completed
false
https://api.github.com/repos/huggingface/datasets/issues/4252
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4252/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4252/comments
https://api.github.com/repos/huggingface/datasets/issues/4252/events
https://github.com/huggingface/datasets/pull/4252
1,219,151,100
PR_kwDODunzps429--I
4,252
Creating metric card for MAE
{ "avatar_url": "https://avatars.githubusercontent.com/u/14205986?v=4", "events_url": "https://api.github.com/users/sashavor/events{/privacy}", "followers_url": "https://api.github.com/users/sashavor/followers", "following_url": "https://api.github.com/users/sashavor/following{/other_user}", "gists_url": "https://api.github.com/users/sashavor/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/sashavor", "id": 14205986, "login": "sashavor", "node_id": "MDQ6VXNlcjE0MjA1OTg2", "organizations_url": "https://api.github.com/users/sashavor/orgs", "received_events_url": "https://api.github.com/users/sashavor/received_events", "repos_url": "https://api.github.com/users/sashavor/repos", "site_admin": false, "starred_url": "https://api.github.com/users/sashavor/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sashavor/subscriptions", "type": "User", "url": "https://api.github.com/users/sashavor" }
[]
closed
false
null
[]
null
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
2022-04-28T19:04:33Z
2022-04-29T16:59:11Z
2022-04-29T16:52:30Z
NONE
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/4252.diff", "html_url": "https://github.com/huggingface/datasets/pull/4252", "merged_at": "2022-04-29T16:52:30Z", "patch_url": "https://github.com/huggingface/datasets/pull/4252.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/4252" }
Initial proposal for MAE metric card
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/4252/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/4252/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/3554
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3554/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3554/comments
https://api.github.com/repos/huggingface/datasets/issues/3554/events
https://github.com/huggingface/datasets/issues/3554
1,097,711,367
I_kwDODunzps5Bbb8H
3,554
ImportError: cannot import name 'is_valid_waiter_error'
{ "avatar_url": "https://avatars.githubusercontent.com/u/84714841?v=4", "events_url": "https://api.github.com/users/danielbellhv/events{/privacy}", "followers_url": "https://api.github.com/users/danielbellhv/followers", "following_url": "https://api.github.com/users/danielbellhv/following{/other_user}", "gists_url": "https://api.github.com/users/danielbellhv/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/danielbellhv", "id": 84714841, "login": "danielbellhv", "node_id": "MDQ6VXNlcjg0NzE0ODQx", "organizations_url": "https://api.github.com/users/danielbellhv/orgs", "received_events_url": "https://api.github.com/users/danielbellhv/received_events", "repos_url": "https://api.github.com/users/danielbellhv/repos", "site_admin": false, "starred_url": "https://api.github.com/users/danielbellhv/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/danielbellhv/subscriptions", "type": "User", "url": "https://api.github.com/users/danielbellhv" }
[ { "color": "d73a4a", "default": true, "description": "Something isn't working", "id": 1935892857, "name": "bug", "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug" } ]
closed
false
null
[]
null
[ "Hi! I can't reproduce this error in Colab, but I'm assuming you are using Amazon SageMaker Studio Notebooks (you mention the `conda_pytorch_p36` kernel), so maybe @philschmid knows more about what might be causing this issue? ", "Hey @mariosasko. Yes, I am using **Amazon SageMaker Studio Jupyter Labs**. However,...
2022-01-10T10:32:04Z
2022-02-14T09:35:57Z
2022-02-14T09:35:57Z
NONE
null
null
null
Based on [SO post](https://stackoverflow.com/q/70606147/17840900). I'm following along to this [Notebook][1], cell "**Loading the dataset**". Kernel: `conda_pytorch_p36`. I run: ``` ! pip install datasets transformers optimum[intel] ``` Output: ``` Requirement already satisfied: datasets in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (1.17.0) Requirement already satisfied: transformers in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (4.15.0) Requirement already satisfied: optimum[intel] in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (0.1.3) Requirement already satisfied: numpy>=1.17 in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from datasets) (1.19.5) Requirement already satisfied: dill in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from datasets) (0.3.4) Requirement already satisfied: tqdm>=4.62.1 in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from datasets) (4.62.3) Requirement already satisfied: huggingface-hub<1.0.0,>=0.1.0 in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from datasets) (0.2.1) Requirement already satisfied: packaging in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from datasets) (21.3) Requirement already satisfied: pyarrow!=4.0.0,>=3.0.0 in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from datasets) (6.0.1) Requirement already satisfied: pandas in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from datasets) (1.1.5) Requirement already satisfied: xxhash in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from datasets) (2.0.2) Requirement already satisfied: aiohttp in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from datasets) (3.8.1) Requirement already satisfied: fsspec[http]>=2021.05.0 in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from datasets) (2021.11.1) Requirement already satisfied: dataclasses in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from datasets) (0.8) Requirement already satisfied: multiprocess in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from datasets) (0.70.12.2) Requirement already satisfied: importlib-metadata in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from datasets) (4.5.0) Requirement already satisfied: requests>=2.19.0 in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from datasets) (2.25.1) Requirement already satisfied: pyyaml>=5.1 in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from transformers) (5.4.1) Requirement already satisfied: regex!=2019.12.17 in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from transformers) (2021.4.4) Requirement already satisfied: tokenizers<0.11,>=0.10.1 in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from transformers) (0.10.3) Requirement already satisfied: filelock in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from transformers) (3.0.12) Requirement already satisfied: sacremoses in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from transformers) (0.0.46) Requirement already satisfied: torch>=1.9 in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from optimum[intel]) (1.10.1) Requirement already satisfied: sympy in 
/home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from optimum[intel]) (1.8) Requirement already satisfied: coloredlogs in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from optimum[intel]) (15.0.1) Requirement already satisfied: pycocotools in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from optimum[intel]) (2.0.3) Requirement already satisfied: neural-compressor>=1.7 in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from optimum[intel]) (1.9) Requirement already satisfied: typing-extensions>=3.7.4.3 in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from huggingface-hub<1.0.0,>=0.1.0->datasets) (3.10.0.0) Requirement already satisfied: sigopt in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from neural-compressor>=1.7->optimum[intel]) (8.2.0) Requirement already satisfied: opencv-python in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from neural-compressor>=1.7->optimum[intel]) (4.5.1.48) Requirement already satisfied: cryptography in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from neural-compressor>=1.7->optimum[intel]) (3.4.7) Requirement already satisfied: py-cpuinfo in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from neural-compressor>=1.7->optimum[intel]) (8.0.0) Requirement already satisfied: gevent in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from neural-compressor>=1.7->optimum[intel]) (21.1.2) Requirement already satisfied: schema in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from neural-compressor>=1.7->optimum[intel]) (0.7.5) Requirement already satisfied: psutil in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from neural-compressor>=1.7->optimum[intel]) (5.8.0) Requirement already satisfied: gevent-websocket in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from neural-compressor>=1.7->optimum[intel]) (0.10.1) Requirement already satisfied: hyperopt in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from neural-compressor>=1.7->optimum[intel]) (0.2.7) Requirement already satisfied: Flask in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from neural-compressor>=1.7->optimum[intel]) (2.0.1) Requirement already satisfied: prettytable in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from neural-compressor>=1.7->optimum[intel]) (2.5.0) Requirement already satisfied: Flask-SocketIO in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from neural-compressor>=1.7->optimum[intel]) (5.1.1) Requirement already satisfied: scikit-learn in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from neural-compressor>=1.7->optimum[intel]) (0.24.2) Requirement already satisfied: Pillow in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from neural-compressor>=1.7->optimum[intel]) (8.4.0) Requirement already satisfied: Flask-Cors in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from neural-compressor>=1.7->optimum[intel]) (3.0.10) Requirement already satisfied: pyparsing!=3.0.5,>=2.0.2 in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from packaging->datasets) (2.4.7) Requirement already satisfied: chardet<5,>=3.0.2 in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages 
(from requests>=2.19.0->datasets) (4.0.0) Requirement already satisfied: certifi>=2017.4.17 in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from requests>=2.19.0->datasets) (2021.5.30) Requirement already satisfied: urllib3<1.27,>=1.21.1 in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from requests>=2.19.0->datasets) (1.26.5) Requirement already satisfied: idna<3,>=2.5 in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from requests>=2.19.0->datasets) (2.10) Requirement already satisfied: yarl<2.0,>=1.0 in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from aiohttp->datasets) (1.6.3) Requirement already satisfied: charset-normalizer<3.0,>=2.0 in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from aiohttp->datasets) (2.0.9) Requirement already satisfied: attrs>=17.3.0 in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from aiohttp->datasets) (21.2.0) Requirement already satisfied: asynctest==0.13.0 in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from aiohttp->datasets) (0.13.0) Requirement already satisfied: idna-ssl>=1.0 in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from aiohttp->datasets) (1.1.0) Requirement already satisfied: async-timeout<5.0,>=4.0.0a3 in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from aiohttp->datasets) (4.0.1) Requirement already satisfied: aiosignal>=1.1.2 in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from aiohttp->datasets) (1.2.0) Requirement already satisfied: frozenlist>=1.1.1 in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from aiohttp->datasets) (1.2.0) Requirement already satisfied: multidict<7.0,>=4.5 in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from aiohttp->datasets) (5.1.0) Requirement already satisfied: humanfriendly>=9.1 in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from coloredlogs->optimum[intel]) (10.0) Requirement already satisfied: zipp>=0.5 in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from importlib-metadata->datasets) (3.4.1) Requirement already satisfied: python-dateutil>=2.7.3 in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from pandas->datasets) (2.8.1) Requirement already satisfied: pytz>=2017.2 in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from pandas->datasets) (2021.1) Requirement already satisfied: matplotlib>=2.1.0 in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from pycocotools->optimum[intel]) (3.3.4) Requirement already satisfied: cython>=0.27.3 in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from pycocotools->optimum[intel]) (0.29.23) Requirement already satisfied: setuptools>=18.0 in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from pycocotools->optimum[intel]) (52.0.0.post20210125) Requirement already satisfied: joblib in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from sacremoses->transformers) (1.0.1) Requirement already satisfied: click in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from sacremoses->transformers) (8.0.1) Requirement already satisfied: six in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from sacremoses->transformers) (1.16.0) Requirement already 
satisfied: mpmath>=0.19 in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from sympy->optimum[intel]) (1.2.1) Requirement already satisfied: kiwisolver>=1.0.1 in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from matplotlib>=2.1.0->pycocotools->optimum[intel]) (1.3.1) Requirement already satisfied: cycler>=0.10 in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages/cycler-0.10.0-py3.6.egg (from matplotlib>=2.1.0->pycocotools->optimum[intel]) (0.10.0) Requirement already satisfied: cffi>=1.12 in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from cryptography->neural-compressor>=1.7->optimum[intel]) (1.14.5) Requirement already satisfied: Werkzeug>=2.0 in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from Flask->neural-compressor>=1.7->optimum[intel]) (2.0.2) Requirement already satisfied: Jinja2>=3.0 in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from Flask->neural-compressor>=1.7->optimum[intel]) (3.0.1) Requirement already satisfied: itsdangerous>=2.0 in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from Flask->neural-compressor>=1.7->optimum[intel]) (2.0.1) Requirement already satisfied: python-socketio>=5.0.2 in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from Flask-SocketIO->neural-compressor>=1.7->optimum[intel]) (5.5.0) Requirement already satisfied: zope.event in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from gevent->neural-compressor>=1.7->optimum[intel]) (4.5.0) Requirement already satisfied: greenlet<2.0,>=0.4.17 in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from gevent->neural-compressor>=1.7->optimum[intel]) (1.1.0) Requirement already satisfied: zope.interface in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from gevent->neural-compressor>=1.7->optimum[intel]) (5.4.0) Requirement already satisfied: future in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from hyperopt->neural-compressor>=1.7->optimum[intel]) (0.18.2) Requirement already satisfied: cloudpickle in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from hyperopt->neural-compressor>=1.7->optimum[intel]) (1.6.0) Requirement already satisfied: networkx>=2.2 in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from hyperopt->neural-compressor>=1.7->optimum[intel]) (2.5) Requirement already satisfied: scipy in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from hyperopt->neural-compressor>=1.7->optimum[intel]) (1.5.3) Requirement already satisfied: py4j in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from hyperopt->neural-compressor>=1.7->optimum[intel]) (0.10.7) Requirement already satisfied: wcwidth in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from prettytable->neural-compressor>=1.7->optimum[intel]) (0.2.5) Requirement already satisfied: contextlib2>=0.5.5 in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from schema->neural-compressor>=1.7->optimum[intel]) (0.6.0.post1) Requirement already satisfied: threadpoolctl>=2.0.0 in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from scikit-learn->neural-compressor>=1.7->optimum[intel]) (2.1.0) Requirement already satisfied: pyOpenSSL>=20.0.0 in 
/home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from sigopt->neural-compressor>=1.7->optimum[intel]) (20.0.1) Requirement already satisfied: pypng>=0.0.20 in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from sigopt->neural-compressor>=1.7->optimum[intel]) (0.0.21) Requirement already satisfied: kubernetes<13.0.0,>=12.0.1 in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from sigopt->neural-compressor>=1.7->optimum[intel]) (12.0.1) Requirement already satisfied: rsa<5.0.0,>=4.7 in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from sigopt->neural-compressor>=1.7->optimum[intel]) (4.7.2) Requirement already satisfied: boto3<2.0.0,==1.16.34 in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from sigopt->neural-compressor>=1.7->optimum[intel]) (1.16.34) Requirement already satisfied: Pint<0.17.0,>=0.16.0 in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from sigopt->neural-compressor>=1.7->optimum[intel]) (0.16.1) Requirement already satisfied: GitPython>=2.0.0 in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from sigopt->neural-compressor>=1.7->optimum[intel]) (3.1.18) Requirement already satisfied: backoff<2.0.0,>=1.10.0 in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from sigopt->neural-compressor>=1.7->optimum[intel]) (1.11.1) Requirement already satisfied: ipython>=5.0.0 in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from sigopt->neural-compressor>=1.7->optimum[intel]) (7.16.1) Requirement already satisfied: docker<5.0.0,>=4.4.0 in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from sigopt->neural-compressor>=1.7->optimum[intel]) (4.4.4) Requirement already satisfied: jmespath<1.0.0,>=0.7.1 in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from boto3<2.0.0,==1.16.34->sigopt->neural-compressor>=1.7->optimum[intel]) (0.10.0) Requirement already satisfied: s3transfer<0.4.0,>=0.3.0 in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from boto3<2.0.0,==1.16.34->sigopt->neural-compressor>=1.7->optimum[intel]) (0.3.7) Requirement already satisfied: botocore<1.20.0,>=1.19.34 in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from boto3<2.0.0,==1.16.34->sigopt->neural-compressor>=1.7->optimum[intel]) (1.19.63) Requirement already satisfied: pycparser in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from cffi>=1.12->cryptography->neural-compressor>=1.7->optimum[intel]) (2.20) Requirement already satisfied: websocket-client>=0.32.0 in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from docker<5.0.0,>=4.4.0->sigopt->neural-compressor>=1.7->optimum[intel]) (0.58.0) Requirement already satisfied: gitdb<5,>=4.0.1 in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from GitPython>=2.0.0->sigopt->neural-compressor>=1.7->optimum[intel]) (4.0.9) Requirement already satisfied: traitlets>=4.2 in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from ipython>=5.0.0->sigopt->neural-compressor>=1.7->optimum[intel]) (4.3.3) Requirement already satisfied: jedi>=0.10 in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from ipython>=5.0.0->sigopt->neural-compressor>=1.7->optimum[intel]) (0.17.2) Requirement already satisfied: prompt-toolkit!=3.0.0,!=3.0.1,<3.1.0,>=2.0.0 in 
/home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from ipython>=5.0.0->sigopt->neural-compressor>=1.7->optimum[intel]) (3.0.19) Requirement already satisfied: backcall in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from ipython>=5.0.0->sigopt->neural-compressor>=1.7->optimum[intel]) (0.2.0) Requirement already satisfied: pygments in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from ipython>=5.0.0->sigopt->neural-compressor>=1.7->optimum[intel]) (2.9.0) Requirement already satisfied: pexpect in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from ipython>=5.0.0->sigopt->neural-compressor>=1.7->optimum[intel]) (4.8.0) Requirement already satisfied: decorator in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from ipython>=5.0.0->sigopt->neural-compressor>=1.7->optimum[intel]) (5.0.9) Requirement already satisfied: pickleshare in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from ipython>=5.0.0->sigopt->neural-compressor>=1.7->optimum[intel]) (0.7.5) Requirement already satisfied: MarkupSafe>=2.0 in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from Jinja2>=3.0->Flask->neural-compressor>=1.7->optimum[intel]) (2.0.1) Requirement already satisfied: google-auth>=1.0.1 in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from kubernetes<13.0.0,>=12.0.1->sigopt->neural-compressor>=1.7->optimum[intel]) (1.30.2) Requirement already satisfied: requests-oauthlib in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from kubernetes<13.0.0,>=12.0.1->sigopt->neural-compressor>=1.7->optimum[intel]) (1.3.0) Requirement already satisfied: importlib-resources in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from Pint<0.17.0,>=0.16.0->sigopt->neural-compressor>=1.7->optimum[intel]) (5.4.0) Requirement already satisfied: python-engineio>=4.3.0 in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from python-socketio>=5.0.2->Flask-SocketIO->neural-compressor>=1.7->optimum[intel]) (4.3.0) Requirement already satisfied: bidict>=0.21.0 in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from python-socketio>=5.0.2->Flask-SocketIO->neural-compressor>=1.7->optimum[intel]) (0.21.4) Requirement already satisfied: pyasn1>=0.1.3 in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from rsa<5.0.0,>=4.7->sigopt->neural-compressor>=1.7->optimum[intel]) (0.4.8) Requirement already satisfied: smmap<6,>=3.0.1 in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from gitdb<5,>=4.0.1->GitPython>=2.0.0->sigopt->neural-compressor>=1.7->optimum[intel]) (5.0.0) Requirement already satisfied: pyasn1-modules>=0.2.1 in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from google-auth>=1.0.1->kubernetes<13.0.0,>=12.0.1->sigopt->neural-compressor>=1.7->optimum[intel]) (0.2.8) Requirement already satisfied: cachetools<5.0,>=2.0.0 in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from google-auth>=1.0.1->kubernetes<13.0.0,>=12.0.1->sigopt->neural-compressor>=1.7->optimum[intel]) (4.2.2) Requirement already satisfied: parso<0.8.0,>=0.7.0 in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from jedi>=0.10->ipython>=5.0.0->sigopt->neural-compressor>=1.7->optimum[intel]) (0.7.1) Requirement already satisfied: ipython-genutils in 
/home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from traitlets>=4.2->ipython>=5.0.0->sigopt->neural-compressor>=1.7->optimum[intel]) (0.2.0) Requirement already satisfied: ptyprocess>=0.5 in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from pexpect->ipython>=5.0.0->sigopt->neural-compressor>=1.7->optimum[intel]) (0.7.0) Requirement already satisfied: oauthlib>=3.0.0 in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from requests-oauthlib->kubernetes<13.0.0,>=12.0.1->sigopt->neural-compressor>=1.7->optimum[intel]) (3.1.1) ``` --- **Cell:** ```python from datasets import load_dataset, load_metric ``` OR ```python import datasets ``` **Traceback:** ``` --------------------------------------------------------------------------- ImportError Traceback (most recent call last) <ipython-input-7-34fb7ba3338d> in <module> ----> 1 from datasets import load_dataset, load_metric ~/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages/datasets/__init__.py in <module> 32 ) 33 ---> 34 from .arrow_dataset import Dataset, concatenate_datasets 35 from .arrow_reader import ArrowReader, ReadInstruction 36 from .arrow_writer import ArrowWriter ~/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages/datasets/arrow_dataset.py in <module> 59 from . import config, utils 60 from .arrow_reader import ArrowReader ---> 61 from .arrow_writer import ArrowWriter, OptimizedTypedSequence 62 from .features import ClassLabel, Features, FeatureType, Sequence, Value, _ArrayXD, pandas_types_mapper 63 from .filesystems import extract_path_from_uri, is_remote_filesystem ~/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages/datasets/arrow_writer.py in <module> 26 27 from . import config, utils ---> 28 from .features import ( 29 Features, 30 ImageExtensionType, ~/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages/datasets/features/__init__.py in <module> 1 # flake8: noqa ----> 2 from .audio import Audio 3 from .features import * 4 from .features import ( 5 _ArrayXD, ~/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages/datasets/features/audio.py in <module> 5 import pyarrow as pa 6 ----> 7 from ..utils.streaming_download_manager import xopen 8 9 ~/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages/datasets/utils/streaming_download_manager.py in <module> 16 17 from .. 
import config ---> 18 from ..filesystems import COMPRESSION_FILESYSTEMS 19 from .download_manager import DownloadConfig, map_nested 20 from .file_utils import ( ~/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages/datasets/filesystems/__init__.py in <module> 11 12 if _has_s3fs: ---> 13 from .s3filesystem import S3FileSystem # noqa: F401 14 15 COMPRESSION_FILESYSTEMS: List[compression.BaseCompressedFileFileSystem] = [ ~/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages/datasets/filesystems/s3filesystem.py in <module> ----> 1 import s3fs 2 3 4 class S3FileSystem(s3fs.S3FileSystem): 5 """ ~/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages/s3fs/__init__.py in <module> ----> 1 from .core import S3FileSystem, S3File 2 from .mapping import S3Map 3 4 from ._version import get_versions 5 ~/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages/s3fs/core.py in <module> 12 from fsspec.asyn import AsyncFileSystem, sync, sync_wrapper 13 ---> 14 import aiobotocore 15 import botocore 16 import aiobotocore.session ~/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages/aiobotocore/__init__.py in <module> ----> 1 from .session import get_session, AioSession 2 3 __all__ = ['get_session', 'AioSession'] 4 __version__ = '1.3.0' ~/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages/aiobotocore/session.py in <module> 4 from botocore import retryhandler, translate 5 from botocore.exceptions import PartialCredentialsError ----> 6 from .client import AioClientCreator, AioBaseClient 7 from .hooks import AioHierarchicalEmitter 8 from .parsers import AioResponseParserFactory ~/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages/aiobotocore/client.py in <module> 11 from .args import AioClientArgsCreator 12 from .utils import AioS3RegionRedirector ---> 13 from . import waiter 14 15 history_recorder = get_global_history_recorder() ~/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages/aiobotocore/waiter.py in <module> 4 from botocore.exceptions import ClientError 5 from botocore.waiter import WaiterModel # noqa: F401, lgtm[py/unused-import] ----> 6 from botocore.waiter import Waiter, xform_name, logger, WaiterError, \ 7 NormalizedOperationMethod as _NormalizedOperationMethod, is_valid_waiter_error 8 from botocore.docs.docstring import WaiterDocstring ImportError: cannot import name 'is_valid_waiter_error' ``` Please let me know if there's anything else I can add to post. [1]: https://github.com/huggingface/notebooks/blob/master/examples/text_classification_quantization_inc.ipynb
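A hedged diagnosis, not the thread's confirmed resolution: `is_valid_waiter_error` only exists in newer botocore releases, while the pip log above shows `boto3==1.16.34` pinning `botocore<1.20.0`, older than what this `aiobotocore` build tries to import from. One plausible remedy is to upgrade the stack to a mutually compatible set (the exact resulting versions depend on the environment):

```python
# Run in a notebook cell, mirroring the `! pip install` style used above.
! pip install -U s3fs aiobotocore boto3 botocore

# Afterwards this import should succeed:
from botocore.waiter import is_valid_waiter_error
```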
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/3554/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/3554/timeline
null
completed
false
https://api.github.com/repos/huggingface/datasets/issues/5380
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5380/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5380/comments
https://api.github.com/repos/huggingface/datasets/issues/5380/events
https://github.com/huggingface/datasets/issues/5380
1,504,404,043
I_kwDODunzps5Zq2JL
5,380
Improve dataset `.skip()` speed in streaming mode
{ "avatar_url": "https://avatars.githubusercontent.com/u/173537?v=4", "events_url": "https://api.github.com/users/versae/events{/privacy}", "followers_url": "https://api.github.com/users/versae/followers", "following_url": "https://api.github.com/users/versae/following{/other_user}", "gists_url": "https://api.github.com/users/versae/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/versae", "id": 173537, "login": "versae", "node_id": "MDQ6VXNlcjE3MzUzNw==", "organizations_url": "https://api.github.com/users/versae/orgs", "received_events_url": "https://api.github.com/users/versae/received_events", "repos_url": "https://api.github.com/users/versae/repos", "site_admin": false, "starred_url": "https://api.github.com/users/versae/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/versae/subscriptions", "type": "User", "url": "https://api.github.com/users/versae" }
[ { "color": "a2eeef", "default": true, "description": "New feature or request", "id": 1935892871, "name": "enhancement", "node_id": "MDU6TGFiZWwxOTM1ODkyODcx", "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement" }, { "color": "BDE59C", "default": fals...
open
false
null
[]
null
[ "Hi! I agree `skip` can be inefficient to use in the current state.\r\n\r\nTo make it fast, we could use \"statistics\" stored in Parquet metadata and read only the chunks needed to form a dataset. \r\n\r\nAnd thanks to the \"datasets-server\" project, which aims to store the Parquet versions of the Hub datasets (o...
2022-12-20T11:25:23Z
2023-03-08T10:47:12Z
null
CONTRIBUTOR
null
null
null
### Feature request
Add extra information to the `dataset_infos.json` file to include the number of samples/examples in each shard, for example in a new field `num_examples` alongside `num_bytes`. The `.skip()` function could use this information to skip the download of a shard entirely when in streaming mode, which, AFAICT, should speed up the skipping process.

### Motivation
When resuming from a checkpoint after a crashed run, using `dataset.skip()` is very convenient to recover the exact state of the data and avoid training again on the same examples (assuming the same seed and no shuffling). However, I have noticed that for audio datasets in streaming mode this is very costly in terms of time, as shards need to be downloaded every time before skipping the right number of examples.

### Your contribution
I already took a look at the code, but it seems a change like this goes deeper than I am able to manage, as it touches the library in several parts. I could give it a try but might need some guidance on the internals.
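To make the cost concrete, a sketch of the resume pattern this request is about; the repository id and step count are placeholders:

```python
from datasets import load_dataset

# Resuming a crashed run in streaming mode: today, skip() still downloads and
# iterates every shard just to discard its examples. With a per-shard
# num_examples field in dataset_infos.json, whole shards could be dropped
# without being fetched at all.
stream = load_dataset("some_user/some_audio_dataset", split="train", streaming=True)
resumed = stream.skip(10_000)  # currently O(examples skipped), downloads included
for example in resumed.take(2):
    print(example.keys())
```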
{ "+1": 3, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 3, "url": "https://api.github.com/repos/huggingface/datasets/issues/5380/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/5380/timeline
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/2610
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2610/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2610/comments
https://api.github.com/repos/huggingface/datasets/issues/2610/events
https://github.com/huggingface/datasets/pull/2610
939,899,829
MDExOlB1bGxSZXF1ZXN0Njg2MDUwMzI5
2,610
Add missing WikiANN language tags
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" }
[]
closed
false
null
[]
{ "closed_at": "2021-07-21T15:36:49Z", "closed_issues": 29, "created_at": "2021-06-08T18:48:33Z", "creator": { "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" }, "description": "Next minor release", "due_on": "2021-08-05T07:00:00Z", "html_url": "https://github.com/huggingface/datasets/milestone/6", "id": 6836458, "labels_url": "https://api.github.com/repos/huggingface/datasets/milestones/6/labels", "node_id": "MDk6TWlsZXN0b25lNjgzNjQ1OA==", "number": 6, "open_issues": 0, "state": "closed", "title": "1.10", "updated_at": "2021-07-21T15:36:49Z", "url": "https://api.github.com/repos/huggingface/datasets/milestones/6" }
[]
2021-07-08T14:08:01Z
2021-07-12T14:12:16Z
2021-07-08T15:44:04Z
MEMBER
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/2610.diff", "html_url": "https://github.com/huggingface/datasets/pull/2610", "merged_at": "2021-07-08T15:44:04Z", "patch_url": "https://github.com/huggingface/datasets/pull/2610.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/2610" }
Add missing language tags for WikiANN datasets.
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/2610/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/2610/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/6212
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6212/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6212/comments
https://api.github.com/repos/huggingface/datasets/issues/6212/events
https://github.com/huggingface/datasets/issues/6212
1,880,399,516
I_kwDODunzps5wFJ6c
6,212
Tilde (~) is not supported for data_files
{ "avatar_url": "https://avatars.githubusercontent.com/u/128361578?v=4", "events_url": "https://api.github.com/users/exs-avianello/events{/privacy}", "followers_url": "https://api.github.com/users/exs-avianello/followers", "following_url": "https://api.github.com/users/exs-avianello/following{/other_user}", "gists_url": "https://api.github.com/users/exs-avianello/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/exs-avianello", "id": 128361578, "login": "exs-avianello", "node_id": "U_kgDOB6akag", "organizations_url": "https://api.github.com/users/exs-avianello/orgs", "received_events_url": "https://api.github.com/users/exs-avianello/received_events", "repos_url": "https://api.github.com/users/exs-avianello/repos", "site_admin": false, "starred_url": "https://api.github.com/users/exs-avianello/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/exs-avianello/subscriptions", "type": "User", "url": "https://api.github.com/users/exs-avianello" }
[]
open
false
null
[]
null
[ "Hi @exs-avianello, is it really needed? Note you can alternatively use `pathlib.Path` among others as it follows:\r\n\r\n```python\r\nimport datasets\r\nfrom pathlib import Path\r\n\r\n# save a parquet file at ~/path/to/data.parquet\r\n\r\ndata_files = Path.home() / \"path/to/data.parquet\"\r\ndataset = datasets.l...
2023-09-04T14:23:49Z
2023-09-05T08:28:39Z
null
NONE
null
null
null
### Describe the bug
Attempting to `load_dataset` from a path starting with `~` (as a shorthand for the user's home directory) seems not to be fully working, at least as far as the `parquet` dataset builder is concerned. (The same file loads correctly if its absolute path is provided instead.)

I think this is very similar to https://github.com/huggingface/datasets/issues/5757, but for `data_files` rather than `data_dir`.

### Steps to reproduce the bug
```python
import datasets

# save a parquet file at ~/path/to/data.parquet

data_files = "~/path/to/data.parquet"
dataset = datasets.load_dataset("parquet", data_files=data_files)
```

```
Downloading data files: 100%|██████████| 1/1 [00:00<00:00, 12671.61it/s]
Extracting data files: 100%|██████████| 1/1 [00:00<00:00, 22671.91it/s]
Generating train split: 0 examples [00:00, ? examples/s]
Traceback (most recent call last):
  File ".venv/lib/python3.11/site-packages/datasets/builder.py", line 1949, in _prepare_split_single
    num_examples, num_bytes = writer.finalize()
                              ^^^^^^^^^^^^^^^^^
  File ".venv/lib/python3.11/site-packages/datasets/arrow_writer.py", line 598, in finalize
    raise SchemaInferenceError("Please pass `features` or at least one example when writing data")
datasets.arrow_writer.SchemaInferenceError: Please pass `features` or at least one example when writing data

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File ".venv/lib/python3.11/site-packages/datasets/load.py", line 2133, in load_dataset
    builder_instance.download_and_prepare(
  File ".venv/lib/python3.11/site-packages/datasets/builder.py", line 954, in download_and_prepare
    self._download_and_prepare(
  File ".venv/lib/python3.11/site-packages/datasets/builder.py", line 1049, in _download_and_prepare
    self._prepare_split(split_generator, **prepare_split_kwargs)
  File ".venv/lib/python3.11/site-packages/datasets/builder.py", line 1813, in _prepare_split
    for job_id, done, content in self._prepare_split_single(
  File ".venv/lib/python3.11/site-packages/datasets/builder.py", line 1958, in _prepare_split_single
    raise DatasetGenerationError("An error occurred while generating the dataset") from e
datasets.builder.DatasetGenerationError: An error occurred while generating the dataset
```

### Expected behavior
Can use the `~` shorthand in paths when loading local (parquet) datasets.

### Environment info
`datasets 2.14.3`
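Until the tilde is expanded by `datasets` itself, a minimal workaround sketch (the path is the placeholder from the report above):

```python
import os

import datasets

# Expand ~ to the user's home directory before handing the path to load_dataset.
data_files = os.path.expanduser("~/path/to/data.parquet")
dataset = datasets.load_dataset("parquet", data_files=data_files)
```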
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/6212/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/6212/timeline
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/4841
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4841/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4841/comments
https://api.github.com/repos/huggingface/datasets/issues/4841/events
https://github.com/huggingface/datasets/pull/4841
1,337,401,243
PR_kwDODunzps49Gf0I
4,841
Update ted_talks_iwslt license to include ND
{ "avatar_url": "https://avatars.githubusercontent.com/u/3664563?v=4", "events_url": "https://api.github.com/users/cakiki/events{/privacy}", "followers_url": "https://api.github.com/users/cakiki/followers", "following_url": "https://api.github.com/users/cakiki/following{/other_user}", "gists_url": "https://api.github.com/users/cakiki/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/cakiki", "id": 3664563, "login": "cakiki", "node_id": "MDQ6VXNlcjM2NjQ1NjM=", "organizations_url": "https://api.github.com/users/cakiki/orgs", "received_events_url": "https://api.github.com/users/cakiki/received_events", "repos_url": "https://api.github.com/users/cakiki/repos", "site_admin": false, "starred_url": "https://api.github.com/users/cakiki/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/cakiki/subscriptions", "type": "User", "url": "https://api.github.com/users/cakiki" }
[]
closed
false
null
[]
null
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
2022-08-12T16:14:52Z
2022-08-14T11:15:22Z
2022-08-14T11:00:22Z
CONTRIBUTOR
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/4841.diff", "html_url": "https://github.com/huggingface/datasets/pull/4841", "merged_at": "2022-08-14T11:00:22Z", "patch_url": "https://github.com/huggingface/datasets/pull/4841.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/4841" }
Excerpt from the paper's abstract: "Aside from its cultural and social relevance, this content, which is published under the Creative Commons BY-NC-ND license, also represents a precious language resource for the machine translation research community".
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/4841/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/4841/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/5492
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5492/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5492/comments
https://api.github.com/repos/huggingface/datasets/issues/5492/events
https://github.com/huggingface/datasets/issues/5492
1,566,604,216
I_kwDODunzps5dYHu4
5,492
Push_to_hub in a pull request
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq" }
[ { "color": "a2eeef", "default": true, "description": "New feature or request", "id": 1935892871, "name": "enhancement", "node_id": "MDU6TGFiZWwxOTM1ODkyODcx", "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement" }, { "color": "7057ff", "default": true...
closed
false
{ "avatar_url": "https://avatars.githubusercontent.com/u/32437151?v=4", "events_url": "https://api.github.com/users/nateraw/events{/privacy}", "followers_url": "https://api.github.com/users/nateraw/followers", "following_url": "https://api.github.com/users/nateraw/following{/other_user}", "gists_url": "https://api.github.com/users/nateraw/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/nateraw", "id": 32437151, "login": "nateraw", "node_id": "MDQ6VXNlcjMyNDM3MTUx", "organizations_url": "https://api.github.com/users/nateraw/orgs", "received_events_url": "https://api.github.com/users/nateraw/received_events", "repos_url": "https://api.github.com/users/nateraw/repos", "site_admin": false, "starred_url": "https://api.github.com/users/nateraw/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/nateraw/subscriptions", "type": "User", "url": "https://api.github.com/users/nateraw" }
[ { "avatar_url": "https://avatars.githubusercontent.com/u/32437151?v=4", "events_url": "https://api.github.com/users/nateraw/events{/privacy}", "followers_url": "https://api.github.com/users/nateraw/followers", "following_url": "https://api.github.com/users/nateraw/following{/other_user}", "gists...
null
[ "Assigned to myself and will get to it in the next week, but if someone finds this issue annoying and wants to submit a PR before I do, just ping me here and I'll reassign :). ", "I would like to be assigned to this issue, @nateraw . #self-assign" ]
2023-02-01T18:32:14Z
2023-10-16T13:30:48Z
2023-10-16T13:30:48Z
MEMBER
null
null
null
Right now `ds.push_to_hub()` can push a dataset to `main` or to a new branch with `branch=`, but there is no way to open a pull request. Even passing `branch=refs/pr/x` doesn't seem to work: it tries to create a branch with that name. cc @nateraw It should be possible to tweak the use of `huggingface_hub` in `push_to_hub` to make it open a PR or push to an existing PR
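Until `push_to_hub` supports this natively, one way to get files onto a PR is to call `huggingface_hub` directly — a sketch, assuming a dataset repo `username/my-dataset` and a local parquet file (both placeholders):

```python
from huggingface_hub import HfApi

api = HfApi()
# upload_file accepts create_pr=True, which opens a pull request on the
# target repo instead of committing to main.
api.upload_file(
    path_or_fileobj="data/train.parquet",
    path_in_repo="data/train.parquet",
    repo_id="username/my-dataset",
    repo_type="dataset",
    create_pr=True,
)
```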
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/5492/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/5492/timeline
null
completed
false
https://api.github.com/repos/huggingface/datasets/issues/4711
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4711/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4711/comments
https://api.github.com/repos/huggingface/datasets/issues/4711/events
https://github.com/huggingface/datasets/issues/4711
1,309,138,570
I_kwDODunzps5OB96K
4,711
Document how to create a dataset loading script for audio/vision
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" }
[ { "color": "0075ca", "default": true, "description": "Improvements or additions to documentation", "id": 1935892861, "name": "documentation", "node_id": "MDU6TGFiZWwxOTM1ODkyODYx", "url": "https://api.github.com/repos/huggingface/datasets/labels/documentation" } ]
closed
false
null
[]
null
[ "I'm closing this issue as both the Audio and Image sections now have a \"Create dataset\" page that contains the info about writing the loading script version of a dataset." ]
2022-07-19T08:03:40Z
2023-07-25T16:07:52Z
2023-07-25T16:07:52Z
MEMBER
null
null
null
Currently, in our docs for Audio/Vision/Text, we explain how to: - Load data - Process data However, we only explain how to *Create a dataset loading script* for text data. I think it would be useful to add the same for Audio/Vision, as these have some specificities different from Text. See, for example: - #4697 - and the comment there: https://github.com/huggingface/datasets/issues/4697#issuecomment-1191502492 CC: @stevhliu
{ "+1": 4, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 4, "url": "https://api.github.com/repos/huggingface/datasets/issues/4711/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/4711/timeline
null
completed
false
https://api.github.com/repos/huggingface/datasets/issues/1937
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1937/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1937/comments
https://api.github.com/repos/huggingface/datasets/issues/1937/events
https://github.com/huggingface/datasets/issues/1937
815,163,943
MDU6SXNzdWU4MTUxNjM5NDM=
1,937
CommonGen dataset page shows an error OSError: [Errno 28] No space left on device
{ "avatar_url": "https://avatars.githubusercontent.com/u/10104354?v=4", "events_url": "https://api.github.com/users/yuchenlin/events{/privacy}", "followers_url": "https://api.github.com/users/yuchenlin/followers", "following_url": "https://api.github.com/users/yuchenlin/following{/other_user}", "gists_url": "https://api.github.com/users/yuchenlin/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/yuchenlin", "id": 10104354, "login": "yuchenlin", "node_id": "MDQ6VXNlcjEwMTA0MzU0", "organizations_url": "https://api.github.com/users/yuchenlin/orgs", "received_events_url": "https://api.github.com/users/yuchenlin/received_events", "repos_url": "https://api.github.com/users/yuchenlin/repos", "site_admin": false, "starred_url": "https://api.github.com/users/yuchenlin/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/yuchenlin/subscriptions", "type": "User", "url": "https://api.github.com/users/yuchenlin" }
[ { "color": "94203D", "default": false, "description": "", "id": 2107841032, "name": "nlp-viewer", "node_id": "MDU6TGFiZWwyMTA3ODQxMDMy", "url": "https://api.github.com/repos/huggingface/datasets/labels/nlp-viewer" } ]
closed
false
null
[]
null
[ "Facing the same issue for [Squad](https://huggingface.co/datasets/viewer/?dataset=squad) and [TriviaQA](https://huggingface.co/datasets/viewer/?dataset=trivia_qa) datasets as well.", "We just fixed the issue, thanks for reporting !" ]
2021-02-24T06:47:33Z
2021-02-26T11:10:06Z
2021-02-26T11:10:06Z
CONTRIBUTOR
null
null
null
The page of the CommonGen data https://huggingface.co/datasets/viewer/?dataset=common_gen shows ![image](https://user-images.githubusercontent.com/10104354/108959311-1865e600-7629-11eb-868c-cf4cb27034ea.png)
{ "+1": 1, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 1, "url": "https://api.github.com/repos/huggingface/datasets/issues/1937/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/1937/timeline
null
completed
false
https://api.github.com/repos/huggingface/datasets/issues/5624
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5624/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5624/comments
https://api.github.com/repos/huggingface/datasets/issues/5624/events
https://github.com/huggingface/datasets/issues/5624
1,617,400,192
I_kwDODunzps5gZ5GA
5,624
glue datasets returning -1 for test split
{ "avatar_url": "https://avatars.githubusercontent.com/u/8939967?v=4", "events_url": "https://api.github.com/users/lithafnium/events{/privacy}", "followers_url": "https://api.github.com/users/lithafnium/followers", "following_url": "https://api.github.com/users/lithafnium/following{/other_user}", "gists_url": "https://api.github.com/users/lithafnium/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lithafnium", "id": 8939967, "login": "lithafnium", "node_id": "MDQ6VXNlcjg5Mzk5Njc=", "organizations_url": "https://api.github.com/users/lithafnium/orgs", "received_events_url": "https://api.github.com/users/lithafnium/received_events", "repos_url": "https://api.github.com/users/lithafnium/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lithafnium/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lithafnium/subscriptions", "type": "User", "url": "https://api.github.com/users/lithafnium" }
[]
closed
false
null
[]
null
[ "Hi @lithafnium, thanks for reporting.\r\n\r\nPlease note that you can use the \"Community\" tab in the corresponding dataset page to start any discussion: https://huggingface.co/datasets/glue/discussions\r\n\r\nIndeed this issue was already raised there (https://huggingface.co/datasets/glue/discussions/5) and answ...
2023-03-09T14:47:18Z
2023-03-09T16:49:29Z
2023-03-09T16:49:29Z
NONE
null
null
null
### Describe the bug Downloading any dataset from GLUE has -1 as class labels for the test split. Train and validation have regular 0/1 class labels. This is also present in the dataset card online. ### Steps to reproduce the bug ``` dataset = load_dataset("glue", "sst2") for d in dataset["test"]: # prints out -1 print(d["label"]) ``` ### Expected behavior Labels should be 0/1 instead of -1. ### Environment info - `datasets` version: 2.4.0 - Platform: Linux-5.15.0-46-generic-x86_64-with-glibc2.17 - Python version: 3.8.16 - PyArrow version: 8.0.0 - Pandas version: 1.5.3
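For context, -1 is the conventional placeholder for hidden labels: the GLUE test sets are unlabeled and scored through the official GLUE server, so `datasets` cannot ship the gold labels. A quick check of which splits carry real labels — a sketch, assuming the `sst2` config from the report:

```python
from datasets import load_dataset

dataset = load_dataset("glue", "sst2")
for split_name, split in dataset.items():
    # Train/validation contain 0/1; the test split only contains -1
    # because its gold labels are withheld.
    print(split_name, sorted(set(split["label"])))
```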
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/5624/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/5624/timeline
null
completed
false
https://api.github.com/repos/huggingface/datasets/issues/5870
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5870/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5870/comments
https://api.github.com/repos/huggingface/datasets/issues/5870/events
https://github.com/huggingface/datasets/issues/5870
1,712,156,282
I_kwDODunzps5mDW56
5,870
Behaviour difference between datasets.map and IterableDatasets.map
{ "avatar_url": "https://avatars.githubusercontent.com/u/30209072?v=4", "events_url": "https://api.github.com/users/llStringll/events{/privacy}", "followers_url": "https://api.github.com/users/llStringll/followers", "following_url": "https://api.github.com/users/llStringll/following{/other_user}", "gists_url": "https://api.github.com/users/llStringll/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/llStringll", "id": 30209072, "login": "llStringll", "node_id": "MDQ6VXNlcjMwMjA5MDcy", "organizations_url": "https://api.github.com/users/llStringll/orgs", "received_events_url": "https://api.github.com/users/llStringll/received_events", "repos_url": "https://api.github.com/users/llStringll/repos", "site_admin": false, "starred_url": "https://api.github.com/users/llStringll/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/llStringll/subscriptions", "type": "User", "url": "https://api.github.com/users/llStringll" }
[]
open
false
null
[]
null
[ "PS - some work is definitely needed for 'special cases' docs, not explanations, just usages of 'functions' under mixture of special cases, like a combination of custom databuilder + iterable dataset for large size + dynamic .map() application." ]
2023-05-16T14:32:57Z
2023-05-16T14:36:05Z
null
NONE
null
null
null
### Describe the bug All the examples in all the docs mentioned throughout huggingface datasets correspond to the Dataset object, and not the IterableDataset object. At one point in time they might have been in sync, but the code for datasets version >=2.9.0 is very different compared to the docs. I basically need to .map() a transform on images in an iterable dataset, which was made using a custom databuilder config. This works very well in map-style datasets, but .map() fails in IterableDatasets, showing behaviour as such: "pixel_values" key not found, a KeyError on the examples object/dict passed into the transform function for map, which works fine with map style, even as a batch. In iterable style, the object/dict passed into the map() parameter callable function is completely different from what is mentioned in all the examples. Please look into this. Thank you. My databuilder class is inherited as such: ```python def _info(self): print("Config: ", self.config.__dict__.keys()) return datasets.DatasetInfo( description=_DESCRIPTION, features=datasets.Features( { "labels": datasets.Sequence(datasets.Value("uint16")), # "labels_name": datasets.Value("string"), # "pixel_values": datasets.Array3D(shape=(3, 1280, 960), dtype="float32"), "pixel_values": datasets.Array3D(shape=(1280, 960, 3), dtype="uint8"), "image_s3_path": datasets.Value("string"), } ), supervised_keys=None, homepage="none", citation="", ) def _split_generators(self, dl_manager): records_train = list(db.mini_set.find({'split': 'train'}, {'image_s3_path': 1, 'ocwen_template_name': 1}))[:10000] records_val = list(db.mini_set.find({'split': 'val'}, {'image_s3_path': 1, 'ocwen_template_name': 1}))[:1000] # print(len(records), self.config.num_shards) # shard_size_train = len(records_train) // self.config.num_shards # sharded_records_train = [records_train[i:i+shard_size_train] for i in range(0, len(records_train), shard_size_train)] # shard_size_val = len(records_val) // self.config.num_shards # sharded_records_val = [records_val[i:i+shard_size_val] for i in range(0, len(records_val), shard_size_val)] return [ datasets.SplitGenerator( name=datasets.Split.TRAIN, gen_kwargs={"records": records_train} # passing list of records, for sharding to take over ), datasets.SplitGenerator( name=datasets.Split.VALIDATION, gen_kwargs={"records": records_val} # passing list of records, for sharding to take over ), ] def _generate_examples(self, records): # print("Generating examples for [{}] shards".format(len(shards))) # initiate_db_connection() # records = list(db.mini_set.find({'split': split}, {'image_s3_path': 1, 'ocwen_template_name': 1}))[:10] id_ = 0 # for records in shards: for i, rec in enumerate(records): img_local_path = fetch_file(rec['image_s3_path'], self.config.buffer_dir) # t = self.config.processor(Image.open(img_local_path), random_padding=True, return_tensors="np").pixel_values.squeeze() # print(t.shape, type(t), type(t[0][0][0])) # sys.exit() pvs = np.array(Image.open(img_local_path).resize((1280, 960))) # image object is wxh, so resize as per that; numpy array of it is hxwxc # pvs = self.config.processor(Image.open(img_local_path), random_padding=True, return_tensors="np").pixel_values.astype(np.float16).squeeze() # print(type(pvs[0][0][0])) lblids = self.config.processor.tokenizer('<s_class>' + rec['ocwen_template_name'] + '</s_class>' + '</s>', add_special_tokens=False, padding=False, truncation=False, return_tensors="np")["input_ids"].squeeze(0) # take padding later, as per batch collating # print(len(lblids), type(lblids[0])) # print(type(pvs), pvs.shape, type(pvs[0][0][0]), type(lblids)) yield id_, {"labels": lblids, "pixel_values": pvs, "image_s3_path": rec['image_s3_path']} id_ += 1 os.remove(img_local_path) ``` and I load it inside my trainer script as such `ds = load_dataset("/tmp/DonutDS/dataset/", split="train", streaming=True) # iterable dataset, where .map() fails` or also as `ds = load_from_disk('/tmp/DonutDS/dataset/') # map-style dataset` Thank you to the team for having such a great library, and for this bug fix in advance! ### Steps to reproduce the bug The above config allows one to reproduce the said bug. ### Expected behavior .map() should show some consistency between map-style and iterable-style datasets, or at least the docs should address iterable-style dataset behaviour and examples. I honestly do not see the use of such docs otherwise. ### Environment info datasets==2.9.0 transformers==4.26.0
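For readers hitting the same thing, a stripped-down illustration of the two `.map()` paths (not the reporter's pipeline; `to_iterable_dataset` is assumed to be available, as it was added in later `datasets` releases than the 2.9.0 mentioned above):

```python
from datasets import Dataset

ds = Dataset.from_dict({"pixel_values": [[1, 2], [3, 4]], "labels": [0, 1]})

def transform(example):
    # The callable receives a plain dict of column name -> value in both styles.
    example["pixel_values"] = [v * 2 for v in example["pixel_values"]]
    return example

# Map-style: eager, processes all rows up front.
print(ds.map(transform)[0])

# Iterable-style: lazy, the transform only runs while iterating.
print(next(iter(ds.to_iterable_dataset().map(transform))))
```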
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/5870/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/5870/timeline
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/2156
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2156/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2156/comments
https://api.github.com/repos/huggingface/datasets/issues/2156/events
https://github.com/huggingface/datasets/pull/2156
847,198,295
MDExOlB1bGxSZXF1ZXN0NjA2MjI5MTky
2,156
User permissions
{ "avatar_url": "https://avatars.githubusercontent.com/u/19718818?v=4", "events_url": "https://api.github.com/users/bhavitvyamalik/events{/privacy}", "followers_url": "https://api.github.com/users/bhavitvyamalik/followers", "following_url": "https://api.github.com/users/bhavitvyamalik/following{/other_user}", "gists_url": "https://api.github.com/users/bhavitvyamalik/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/bhavitvyamalik", "id": 19718818, "login": "bhavitvyamalik", "node_id": "MDQ6VXNlcjE5NzE4ODE4", "organizations_url": "https://api.github.com/users/bhavitvyamalik/orgs", "received_events_url": "https://api.github.com/users/bhavitvyamalik/received_events", "repos_url": "https://api.github.com/users/bhavitvyamalik/repos", "site_admin": false, "starred_url": "https://api.github.com/users/bhavitvyamalik/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/bhavitvyamalik/subscriptions", "type": "User", "url": "https://api.github.com/users/bhavitvyamalik" }
[]
closed
false
null
[]
null
[]
2021-03-31T19:33:48Z
2021-03-31T19:34:24Z
2021-03-31T19:34:24Z
CONTRIBUTOR
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/2156.diff", "html_url": "https://github.com/huggingface/datasets/pull/2156", "merged_at": null, "patch_url": "https://github.com/huggingface/datasets/pull/2156.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/2156" }
Updated user permissions based on the running user's umask. Let me know if `0o666` looks good, or whether I should change it to `~umask` only (to grant execute permissions as well)
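For reference, a minimal sketch of the umask arithmetic under discussion (probing the process umask means setting it and restoring it, since `os.umask` returns the previous value):

```python
import os

# Probe the current umask without permanently changing it.
umask = os.umask(0o666)
os.umask(umask)

# 0o666 (rw for user/group/other) masked by the umask mirrors what
# file-creation syscalls do; ~umask alone would also grant execute bits.
mode = 0o666 & ~umask
print(oct(mode))
```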
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/2156/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/2156/timeline
null
null
true