| Column | Dtype | Min | Max |
|---|---|---|---|
| `url` | string (lengths) | 58 | 61 |
| `repository_url` | string (1 value) | | |
| `labels_url` | string (lengths) | 72 | 75 |
| `comments_url` | string (lengths) | 67 | 70 |
| `events_url` | string (lengths) | 65 | 68 |
| `html_url` | string (lengths) | 48 | 51 |
| `id` | int64 | 600M | 3.09B |
| `node_id` | string (lengths) | 18 | 24 |
| `number` | int64 | 2 | 7.59k |
| `title` | string (lengths) | 1 | 290 |
| `user` | dict | | |
| `labels` | list (lengths) | 0 | 4 |
| `state` | string (1 value) | | |
| `locked` | bool (1 class) | | |
| `assignee` | dict | | |
| `assignees` | list (lengths) | 0 | 4 |
| `milestone` | dict | | |
| `comments` | list (lengths) | 0 | 30 |
| `created_at` | timestamp[ns, tz=UTC] | 2020-04-14 18:18:51 | 2025-05-27 13:46:05 |
| `updated_at` | timestamp[ns, tz=UTC] | 2020-04-29 09:23:05 | 2025-06-09 22:00:16 |
| `closed_at` | timestamp[ns, tz=UTC] | 2020-04-29 09:23:05 | 2025-06-06 16:12:36 |
| `author_association` | string (4 values) | | |
| `type` | float64 | | |
| `active_lock_reason` | float64 | | |
| `sub_issues_summary` | dict | | |
| `body` | string (lengths) | 0 | 228k |
| `closed_by` | dict | | |
| `reactions` | dict | | |
| `timeline_url` | string (lengths) | 67 | 70 |
| `performed_via_github_app` | float64 | | |
| `state_reason` | string (3 values) | | |
| `draft` | float64 | | |
| `pull_request` | null | | |
| `time_to_close_hours` | float64 | 0.01 | 28.8k |
| `__index_level_0__` | int64 | 18 | 7.53k |

url: https://api.github.com/repos/huggingface/datasets/issues/7588
repository_url: https://api.github.com/repos/huggingface/datasets
labels_url: https://api.github.com/repos/huggingface/datasets/issues/7588/labels{/name}
comments_url: https://api.github.com/repos/huggingface/datasets/issues/7588/comments
events_url: https://api.github.com/repos/huggingface/datasets/issues/7588/events
html_url: https://github.com/huggingface/datasets/issues/7588
id: 3,094,012,025
node_id: I_kwDODunzps64auB5
number: 7,588
title: ValueError: Invalid pattern: '**' can only be an entire path component [Colab]
{ "avatar_url": "https://avatars.githubusercontent.com/u/43061081?v=4", "events_url": "https://api.github.com/users/wkambale/events{/privacy}", "followers_url": "https://api.github.com/users/wkambale/followers", "following_url": "https://api.github.com/users/wkambale/following{/other_user}", "gists_url": "https://api.github.com/users/wkambale/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/wkambale", "id": 43061081, "login": "wkambale", "node_id": "MDQ6VXNlcjQzMDYxMDgx", "organizations_url": "https://api.github.com/users/wkambale/orgs", "received_events_url": "https://api.github.com/users/wkambale/received_events", "repos_url": "https://api.github.com/users/wkambale/repos", "site_admin": false, "starred_url": "https://api.github.com/users/wkambale/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/wkambale/subscriptions", "type": "User", "url": "https://api.github.com/users/wkambale", "user_view_type": "public" }
labels: []
state: closed
locked: false
assignee: null
assignees: []
milestone: null
[ "Could you please run the following code snippet in your environment and share the exact output? This will help check for any compatibility issues within the env itself. \n\n```\nimport datasets\nimport huggingface_hub\nimport fsspec\n\nprint(\"datasets version:\", datasets.__version__)\nprint(\"huggingface_hub version:\", huggingface_hub.__version__)\nprint(\"fsspec version:\", fsspec.__version__)\n```", "```bash\ndatasets version: 2.14.4\nhuggingface_hub version: 0.31.4\nfsspec version: 2025.3.2\n```", "Version 2.14.4 is not the latest version available, in fact it is from August 08, 2023 (you can check here: https://pypi.org/project/datasets/#history)\n\nUse pip install datasets==3.6.0 to install a more recent version (from May 7, 2025)\n\nI also had the same problem with Colab, after updating to the latest version it was solved.\n\nI hope it helps", "thank you @CleitonOERocha. it sure did help.\n\nupdating `datasets` to v3.6.0 and keeping `fsspec` on v2025.3.2 eliminates the issue.", "Very helpful, thank you!" ]
created_at: 2025-05-27T13:46:05Z
updated_at: 2025-05-30T13:22:52Z
closed_at: 2025-05-30T01:26:30Z
author_association: NONE
type: null
active_lock_reason: null
sub_issues_summary: { "completed": 0, "percent_completed": 0, "total": 0 }
### Describe the bug I have a dataset on HF [here](https://huggingface.co/datasets/kambale/luganda-english-parallel-corpus) that i've previously used to train a translation model [here](https://huggingface.co/kambale/pearl-11m-translate). now i changed a few hyperparameters to increase number of tokens for the model, increase Transformer layers, and all however, when i try to load the dataset, this error keeps coming up.. i have tried everything.. i have re-written the code a hundred times, and this keep coming up ### Steps to reproduce the bug Imports: ```bash !pip install datasets huggingface_hub fsspec ``` Python code: ```python from datasets import load_dataset HF_DATASET_NAME = "kambale/luganda-english-parallel-corpus" # Load the dataset try: if not HF_DATASET_NAME or HF_DATASET_NAME == "YOUR_HF_DATASET_NAME": raise ValueError( "Please provide a valid Hugging Face dataset name." ) dataset = load_dataset(HF_DATASET_NAME) # Omitted code as the error happens on the line above except ValueError as ve: print(f"Configuration Error: {ve}") raise except Exception as e: print(f"An error occurred while loading the dataset '{HF_DATASET_NAME}': {e}") raise e ``` now, i have tried going through this [issue](https://github.com/huggingface/datasets/issues/6737) and nothing helps ### Expected behavior loading the dataset successfully and perform splits (train, test, validation) ### Environment info from the imports, i do not install specific versions of these libraries, so the latest or available version is installed * `datasets` version: latest * `Platform`: Google Colab * `Hardware`: NVIDIA A100 GPU * `Python` version: latest * `huggingface_hub` version: latest * `fsspec` version: latest
{ "avatar_url": "https://avatars.githubusercontent.com/u/43061081?v=4", "events_url": "https://api.github.com/users/wkambale/events{/privacy}", "followers_url": "https://api.github.com/users/wkambale/followers", "following_url": "https://api.github.com/users/wkambale/following{/other_user}", "gists_url": "https://api.github.com/users/wkambale/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/wkambale", "id": 43061081, "login": "wkambale", "node_id": "MDQ6VXNlcjQzMDYxMDgx", "organizations_url": "https://api.github.com/users/wkambale/orgs", "received_events_url": "https://api.github.com/users/wkambale/received_events", "repos_url": "https://api.github.com/users/wkambale/repos", "site_admin": false, "starred_url": "https://api.github.com/users/wkambale/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/wkambale/subscriptions", "type": "User", "url": "https://api.github.com/users/wkambale", "user_view_type": "public" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/7588/reactions" }
timeline_url: https://api.github.com/repos/huggingface/datasets/issues/7588/timeline
performed_via_github_app: null
state_reason: completed
draft: null
pull_request: null
time_to_close_hours: 59.673611
__index_level_0__: 18
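
The thread above traces this error to a stale `datasets` (2.14.4, from 2023) paired with a 2025 `fsspec`. A minimal guard against that failure mode, sketched with the version threshold taken from the thread rather than any official compatibility table:

```python
# Fail fast if the environment resolved an old `datasets`, the cause of the
# "Invalid pattern: '**'" error in this issue (threshold from the thread).
import datasets

major, minor = (int(p) for p in datasets.__version__.split(".")[:2])
if (major, minor) < (3, 6):
    raise RuntimeError(
        f"datasets {datasets.__version__} predates the fix discussed above; "
        "run `pip install -U datasets` and restart the runtime."
    )
```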

url: https://api.github.com/repos/huggingface/datasets/issues/7583
repository_url: https://api.github.com/repos/huggingface/datasets
labels_url: https://api.github.com/repos/huggingface/datasets/issues/7583/labels{/name}
comments_url: https://api.github.com/repos/huggingface/datasets/issues/7583/comments
events_url: https://api.github.com/repos/huggingface/datasets/issues/7583/events
html_url: https://github.com/huggingface/datasets/issues/7583
id: 3,088,987,757
node_id: I_kwDODunzps64HjZt
number: 7,583
title: load_dataset type stubs reject List[str] for split parameter, but runtime supports it
{ "avatar_url": "https://avatars.githubusercontent.com/u/25069969?v=4", "events_url": "https://api.github.com/users/hierr/events{/privacy}", "followers_url": "https://api.github.com/users/hierr/followers", "following_url": "https://api.github.com/users/hierr/following{/other_user}", "gists_url": "https://api.github.com/users/hierr/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/hierr", "id": 25069969, "login": "hierr", "node_id": "MDQ6VXNlcjI1MDY5OTY5", "organizations_url": "https://api.github.com/users/hierr/orgs", "received_events_url": "https://api.github.com/users/hierr/received_events", "repos_url": "https://api.github.com/users/hierr/repos", "site_admin": false, "starred_url": "https://api.github.com/users/hierr/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/hierr/subscriptions", "type": "User", "url": "https://api.github.com/users/hierr", "user_view_type": "public" }
labels: []
state: closed
locked: false
assignee: null
assignees: []
milestone: null
comments: []
created_at: 2025-05-25T02:33:18Z
updated_at: 2025-05-26T18:29:58Z
closed_at: 2025-05-26T18:29:58Z
author_association: NONE
type: null
active_lock_reason: null
sub_issues_summary: { "completed": 0, "percent_completed": 0, "total": 0 }
### Describe the bug The [load_dataset](https://huggingface.co/docs/datasets/v3.6.0/en/package_reference/loading_methods#datasets.load_dataset) method accepts a `List[str]` as the split parameter at runtime, however, the current type stubs restrict the split parameter to `Union[str, Split, None]`. This causes type checkers like Pylance to raise `reportArgumentType` errors when passing a list of strings, even though it works as intended at runtime. ### Steps to reproduce the bug 1. Use load_dataset with multiple splits e.g.: ``` from datasets import load_dataset ds_train, ds_val, ds_test = load_dataset( "Silly-Machine/TuPyE-Dataset", "binary", split=["train[:75%]", "train[75%:]", "test"] ) ``` 2. Observe that code executes correctly at runtime and Pylance raises `Argument of type "List[str]" cannot be assigned to parameter "split" of type "str | Split | None"` ### Expected behavior The type stubs for [load_dataset](https://huggingface.co/docs/datasets/v3.6.0/en/package_reference/loading_methods#datasets.load_dataset) should accept `Union[str, Split, List[str], None]` or more specific overloads for the split parameter to correctly represent runtime behavior. ### Environment info - `datasets` version: 3.6.0 - Platform: Linux-5.15.167.4-microsoft-standard-WSL2-x86_64-with-glibc2.39 - Python version: 3.12.7 - `huggingface_hub` version: 0.32.0 - PyArrow version: 20.0.0 - Pandas version: 2.2.3 - `fsspec` version: 2025.3.0
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq", "user_view_type": "public" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/7583/reactions" }
timeline_url: https://api.github.com/repos/huggingface/datasets/issues/7583/timeline
performed_via_github_app: null
state_reason: completed
draft: null
pull_request: null
time_to_close_hours: 39.944444
__index_level_0__: 23
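
For illustration, a hypothetical widening of the signature along the lines the report requests; this is a sketch of the proposed stub change, not the actual `datasets` source:

```python
# Hypothetical stub sketch: `split` accepts a list of split expressions,
# matching runtime behavior. The return annotation is omitted since it varies.
from typing import List, Optional, Union

from datasets import Split

def load_dataset(
    path: str,
    name: Optional[str] = None,
    *,
    split: Union[str, Split, List[str], None] = None,
):
    ...
```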

url: https://api.github.com/repos/huggingface/datasets/issues/7577
repository_url: https://api.github.com/repos/huggingface/datasets
labels_url: https://api.github.com/repos/huggingface/datasets/issues/7577/labels{/name}
comments_url: https://api.github.com/repos/huggingface/datasets/issues/7577/comments
events_url: https://api.github.com/repos/huggingface/datasets/issues/7577/events
html_url: https://github.com/huggingface/datasets/issues/7577
id: 3,080,833,740
node_id: I_kwDODunzps63ocrM
number: 7,577
title: arrow_schema is not compatible with list
{ "avatar_url": "https://avatars.githubusercontent.com/u/164412025?v=4", "events_url": "https://api.github.com/users/jonathanshen-upwork/events{/privacy}", "followers_url": "https://api.github.com/users/jonathanshen-upwork/followers", "following_url": "https://api.github.com/users/jonathanshen-upwork/following{/other_user}", "gists_url": "https://api.github.com/users/jonathanshen-upwork/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/jonathanshen-upwork", "id": 164412025, "login": "jonathanshen-upwork", "node_id": "U_kgDOCcy6eQ", "organizations_url": "https://api.github.com/users/jonathanshen-upwork/orgs", "received_events_url": "https://api.github.com/users/jonathanshen-upwork/received_events", "repos_url": "https://api.github.com/users/jonathanshen-upwork/repos", "site_admin": false, "starred_url": "https://api.github.com/users/jonathanshen-upwork/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/jonathanshen-upwork/subscriptions", "type": "User", "url": "https://api.github.com/users/jonathanshen-upwork", "user_view_type": "public" }
labels: []
state: closed
locked: false
assignee: null
assignees: []
milestone: null
[ "Thanks for reporting, I'll look into it", "Actually it looks like you just forgot parenthesis:\n\n```diff\n- f = datasets.Features({'x': list[datasets.Value(dtype='int32')]})\n+ f = datasets.Features({'x': list([datasets.Value(dtype='int32')])})\n```\n\nor simply using the `[ ]` syntax:\n\n```python\nf = datasets.Features({'x':[datasets.Value(dtype='int32')]})\n```\n\nI'm closing this issue if you don't mind", "Ah is that what the syntax is? I don't think I was able to find an actual example of it so I assumed it was in the same way that you specify types eg. `list[int]`. This is good to know, thanks." ]
created_at: 2025-05-21T16:37:01Z
updated_at: 2025-05-26T18:49:51Z
closed_at: 2025-05-26T18:32:55Z
author_association: NONE
type: null
active_lock_reason: null
sub_issues_summary: { "completed": 0, "percent_completed": 0, "total": 0 }
### Describe the bug ``` import datasets f = datasets.Features({'x': list[datasets.Value(dtype='int32')]}) f.arrow_schema Traceback (most recent call last): File "datasets/features/features.py", line 1826, in arrow_schema return pa.schema(self.type).with_metadata({"huggingface": json.dumps(hf_metadata)}) ^^^^^^^^^ File "datasets/features/features.py", line 1815, in type return get_nested_type(self) ^^^^^^^^^^^^^^^^^^^^^ File "datasets/features/features.py", line 1252, in get_nested_type return pa.struct( ^^^^^^^^^^ File "pyarrow/types.pxi", line 5406, in pyarrow.lib.struct File "pyarrow/types.pxi", line 3890, in pyarrow.lib.field File "pyarrow/types.pxi", line 5918, in pyarrow.lib.ensure_type TypeError: DataType expected, got <class 'list'> ``` The following works ``` f = datasets.Features({'x': datasets.LargeList(datasets.Value(dtype='int32'))}) ``` ### Expected behavior according to https://github.com/huggingface/datasets/blob/458f45a22c3cc9aea5f442f6f519333dcfeae9b9/src/datasets/features/features.py#L1765 python list should be a valid type specification for features ### Environment info - `datasets` version: 3.5.1 - Platform: macOS-15.5-arm64-arm-64bit - Python version: 3.12.9 - `huggingface_hub` version: 0.30.2 - PyArrow version: 19.0.1 - Pandas version: 2.2.3 - `fsspec` version: 2024.12.0
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq", "user_view_type": "public" }
{ "+1": 1, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 1, "url": "https://api.github.com/repos/huggingface/datasets/issues/7577/reactions" }
timeline_url: https://api.github.com/repos/huggingface/datasets/issues/7577/timeline
performed_via_github_app: null
state_reason: completed
draft: null
pull_request: null
time_to_close_hours: 121.931667
__index_level_0__: 28
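
For reference, the two working spellings from the thread side by side; Python's `list[...]` generic is the only form that breaks:

```python
# Both forms produce a valid Arrow schema; `list[datasets.Value(...)]` does not.
import datasets

f_brackets = datasets.Features({"x": [datasets.Value(dtype="int32")]})
f_largelist = datasets.Features({"x": datasets.LargeList(datasets.Value(dtype="int32"))})
print(f_brackets.arrow_schema)
print(f_largelist.arrow_schema)
```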

url: https://api.github.com/repos/huggingface/datasets/issues/7561
repository_url: https://api.github.com/repos/huggingface/datasets
labels_url: https://api.github.com/repos/huggingface/datasets/issues/7561/labels{/name}
comments_url: https://api.github.com/repos/huggingface/datasets/issues/7561/comments
events_url: https://api.github.com/repos/huggingface/datasets/issues/7561/events
html_url: https://github.com/huggingface/datasets/issues/7561
id: 3,046,302,653
node_id: I_kwDODunzps61kuO9
number: 7,561
title: NotImplementedError: <class 'datasets.iterable_dataset.RepeatExamplesIterable'> doesn't implement num_shards yet
{ "avatar_url": "https://avatars.githubusercontent.com/u/32219669?v=4", "events_url": "https://api.github.com/users/cyanic-selkie/events{/privacy}", "followers_url": "https://api.github.com/users/cyanic-selkie/followers", "following_url": "https://api.github.com/users/cyanic-selkie/following{/other_user}", "gists_url": "https://api.github.com/users/cyanic-selkie/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/cyanic-selkie", "id": 32219669, "login": "cyanic-selkie", "node_id": "MDQ6VXNlcjMyMjE5NjY5", "organizations_url": "https://api.github.com/users/cyanic-selkie/orgs", "received_events_url": "https://api.github.com/users/cyanic-selkie/received_events", "repos_url": "https://api.github.com/users/cyanic-selkie/repos", "site_admin": false, "starred_url": "https://api.github.com/users/cyanic-selkie/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/cyanic-selkie/subscriptions", "type": "User", "url": "https://api.github.com/users/cyanic-selkie", "user_view_type": "public" }
labels: []
state: closed
locked: false
assignee: null
assignees: []
milestone: null
comments: []
created_at: 2025-05-07T15:05:42Z
updated_at: 2025-06-05T12:41:30Z
closed_at: 2025-06-05T12:41:30Z
author_association: NONE
type: null
active_lock_reason: null
sub_issues_summary: { "completed": 0, "percent_completed": 0, "total": 0 }
### Describe the bug When using `.repeat()` on an `IterableDataset`, this error gets thrown. There is [this thread](https://discuss.huggingface.co/t/making-an-infinite-iterabledataset/146192/5) that seems to imply the fix is trivial, but I don't know anything about this codebase, so I'm opening this issue rather than attempting to open a PR. ### Steps to reproduce the bug 1. Create an `IterableDataset`. 2. Call `.repeat(None)` on it. 3. Wrap it in a pytorch `DataLoader` 4. Iterate over it. ### Expected behavior This should work normally. ### Environment info datasets: 3.5.0
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq", "user_view_type": "public" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/7561/reactions" }
timeline_url: https://api.github.com/repos/huggingface/datasets/issues/7561/timeline
performed_via_github_app: null
state_reason: completed
draft: null
pull_request: null
time_to_close_hours: 693.596667
__index_level_0__: 44
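
Until `RepeatExamplesIterable` grows `num_shards`, one workaround is to loop at the Python level instead of calling `.repeat(None)`; a sketch that does not preserve the library's shuffling or checkpointing semantics:

```python
# Infinite stream without .repeat(None); `ds` is assumed to be a
# datasets.IterableDataset. Each pass re-opens the underlying shards.
def forever(ds):
    while True:
        yield from ds

# e.g. itertools.islice(forever(ds), n) draws n examples from the endless stream
```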

url: https://api.github.com/repos/huggingface/datasets/issues/7554
repository_url: https://api.github.com/repos/huggingface/datasets
labels_url: https://api.github.com/repos/huggingface/datasets/issues/7554/labels{/name}
comments_url: https://api.github.com/repos/huggingface/datasets/issues/7554/comments
events_url: https://api.github.com/repos/huggingface/datasets/issues/7554/events
html_url: https://github.com/huggingface/datasets/issues/7554
id: 3,043,089,844
node_id: I_kwDODunzps61Yd20
number: 7,554
title: datasets downloads and generates all splits, even though a single split is requested (for dataset with loading script)
{ "avatar_url": "https://avatars.githubusercontent.com/u/50171988?v=4", "events_url": "https://api.github.com/users/sei-eschwartz/events{/privacy}", "followers_url": "https://api.github.com/users/sei-eschwartz/followers", "following_url": "https://api.github.com/users/sei-eschwartz/following{/other_user}", "gists_url": "https://api.github.com/users/sei-eschwartz/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/sei-eschwartz", "id": 50171988, "login": "sei-eschwartz", "node_id": "MDQ6VXNlcjUwMTcxOTg4", "organizations_url": "https://api.github.com/users/sei-eschwartz/orgs", "received_events_url": "https://api.github.com/users/sei-eschwartz/received_events", "repos_url": "https://api.github.com/users/sei-eschwartz/repos", "site_admin": false, "starred_url": "https://api.github.com/users/sei-eschwartz/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sei-eschwartz/subscriptions", "type": "User", "url": "https://api.github.com/users/sei-eschwartz", "user_view_type": "public" }
labels: []
state: closed
locked: false
assignee: null
assignees: []
milestone: null
[ "Hi ! there has been some effort on allowing to download only a subset of splits in https://github.com/huggingface/datasets/pull/6832 but no one has been continuing this work so far. This would be a welcomed contribution though\n\nAlso note that loading script are often unoptimized, and we recommend using datasets in standard formats like Parquet instead.\n\nBtw there is a CLI tool to convert a loading script to parquet:\n\n```\ndatasets-cli convert_to_parquet <dataset-name> --trust_remote_code\n```", "Closing in favor of #6832 " ]
created_at: 2025-05-06T14:43:38Z
updated_at: 2025-05-07T14:53:45Z
closed_at: 2025-05-07T14:53:44Z
author_association: NONE
type: null
active_lock_reason: null
sub_issues_summary: { "completed": 0, "percent_completed": 0, "total": 0 }
### Describe the bug `datasets` downloads and generates all splits, even though a single split is requested. [This](https://huggingface.co/datasets/jordiae/exebench) is the dataset in question. It uses a loading script. I am not 100% sure that this is a bug, because maybe with loading scripts `datasets` must actually process all the splits? But I thought loading scripts were designed to avoid this. ### Steps to reproduce the bug See [this notebook](https://colab.research.google.com/drive/14kcXp_hgcdj-kIzK0bCG6taE-CLZPVvq?usp=sharing) Or: ```python from datasets import load_dataset dataset = load_dataset('jordiae/exebench', split='test_synth', trust_remote_code=True) ``` ### Expected behavior I expected only the `test_synth` split to be downloaded and processed. ### Environment info - `datasets` version: 3.5.1 - Platform: Linux-6.1.123+-x86_64-with-glibc2.35 - Python version: 3.11.12 - `huggingface_hub` version: 0.30.2 - PyArrow version: 18.1.0 - Pandas version: 2.2.2 - `fsspec` version: 2025.3.0
{ "avatar_url": "https://avatars.githubusercontent.com/u/50171988?v=4", "events_url": "https://api.github.com/users/sei-eschwartz/events{/privacy}", "followers_url": "https://api.github.com/users/sei-eschwartz/followers", "following_url": "https://api.github.com/users/sei-eschwartz/following{/other_user}", "gists_url": "https://api.github.com/users/sei-eschwartz/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/sei-eschwartz", "id": 50171988, "login": "sei-eschwartz", "node_id": "MDQ6VXNlcjUwMTcxOTg4", "organizations_url": "https://api.github.com/users/sei-eschwartz/orgs", "received_events_url": "https://api.github.com/users/sei-eschwartz/received_events", "repos_url": "https://api.github.com/users/sei-eschwartz/repos", "site_admin": false, "starred_url": "https://api.github.com/users/sei-eschwartz/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sei-eschwartz/subscriptions", "type": "User", "url": "https://api.github.com/users/sei-eschwartz", "user_view_type": "public" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/7554/reactions" }
timeline_url: https://api.github.com/repos/huggingface/datasets/issues/7554/timeline
performed_via_github_app: null
state_reason: duplicate
draft: null
pull_request: null
time_to_close_hours: 24.168333
__index_level_0__: 50
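
Following the convert-to-Parquet suggestion quoted above, a hedged sketch of what single-split loading looks like once the data is in a standard format (the repo id here is hypothetical; it is wherever the conversion was pushed):

```python
# After `datasets-cli convert_to_parquet`, a split-specific request only has
# to fetch that split's Parquet files instead of running the loading script.
from datasets import load_dataset

ds = load_dataset("your-namespace/exebench", split="test_synth")  # hypothetical repo id
```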

url: https://api.github.com/repos/huggingface/datasets/issues/7546
repository_url: https://api.github.com/repos/huggingface/datasets
labels_url: https://api.github.com/repos/huggingface/datasets/issues/7546/labels{/name}
comments_url: https://api.github.com/repos/huggingface/datasets/issues/7546/comments
events_url: https://api.github.com/repos/huggingface/datasets/issues/7546/events
html_url: https://github.com/huggingface/datasets/issues/7546
id: 3,034,018,298
node_id: I_kwDODunzps6013H6
number: 7,546
title: Large memory use when loading large datasets to a ZFS pool
{ "avatar_url": "https://avatars.githubusercontent.com/u/6875946?v=4", "events_url": "https://api.github.com/users/FredHaa/events{/privacy}", "followers_url": "https://api.github.com/users/FredHaa/followers", "following_url": "https://api.github.com/users/FredHaa/following{/other_user}", "gists_url": "https://api.github.com/users/FredHaa/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/FredHaa", "id": 6875946, "login": "FredHaa", "node_id": "MDQ6VXNlcjY4NzU5NDY=", "organizations_url": "https://api.github.com/users/FredHaa/orgs", "received_events_url": "https://api.github.com/users/FredHaa/received_events", "repos_url": "https://api.github.com/users/FredHaa/repos", "site_admin": false, "starred_url": "https://api.github.com/users/FredHaa/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/FredHaa/subscriptions", "type": "User", "url": "https://api.github.com/users/FredHaa", "user_view_type": "public" }
labels: []
state: closed
locked: false
assignee: null
assignees: []
milestone: null
[ "Hi ! datasets are memory mapped from disk, so they don't fill out your RAM. Not sure what's the source of your memory issue.\n\nWhat kind of system are you using ? and what kind of disk ?", "Well, the fact of the matter is that my RAM is getting filled out by running the given example, as shown in [this video](https://streamable.com/usb0ql).\n\nMy system is a GPU server running Ubuntu. The disk is a SATA SSD attached to the server using a backplane. It is formatted with ZFS, mounted in /cache, and my HF_HOME is set to /cache/hf\n\nI really need this fixed, so I am more than willing to test out various suggestions you might have, or write a PR if we can figure out what is going on.", "I'm not super familiar with ZFS, but it looks like it loads the data in memory when the files are memory mapped, which is an issue.\n\nMaybe it's a caching mechanism ? Since `datasets` accesses every memory mapped file to read a small part (the metadata of the arrow record batches), maybe ZFS brings the whole files in memory for quicker subsequent reads. This is an antipattern when it comes to lazy loading datasets of that size though", "This is the answer.\n\nI tried changing my HF_HOME to an NFS share, and no RAM is then consumed loading the dataset.\n\nI will try to see if I can find a way to configure the ZFS pool to not cache the files (disabling the ARC/primary cache didn't work), and if I do write the solution in this issue. If I can't I guess I have to reformat my cache drive." ]
created_at: 2025-05-01T14:43:47Z
updated_at: 2025-05-13T13:30:09Z
closed_at: 2025-05-13T13:29:53Z
author_association: NONE
type: null
active_lock_reason: null
sub_issues_summary: { "completed": 0, "percent_completed": 0, "total": 0 }
### Describe the bug When I load large parquet based datasets from the hub like `MLCommons/peoples_speech` using `load_dataset`, all my memory (500GB) is used and isn't released after loading, meaning that the process is terminated by the kernel if I try to load an additional dataset. This makes it impossible to train models using multiple large datasets. ### Steps to reproduce the bug `uv run --with datasets==3.5.1 python` ```python from datasets import load_dataset load_dataset('MLCommons/peoples_speech', 'clean') load_dataset('mozilla-foundation/common_voice_17_0', 'en') ``` ### Expected behavior I would expect that a lot less than 500GB of RAM would be required to load the dataset, or at least that the RAM usage would be cleared as soon as the dataset is loaded (and thus reside as a memory mapped file) such that other datasets can be loaded. ### Environment info I am currently using the latest datasets==3.5.1 but I have had the same problem with multiple other versions.
{ "avatar_url": "https://avatars.githubusercontent.com/u/6875946?v=4", "events_url": "https://api.github.com/users/FredHaa/events{/privacy}", "followers_url": "https://api.github.com/users/FredHaa/followers", "following_url": "https://api.github.com/users/FredHaa/following{/other_user}", "gists_url": "https://api.github.com/users/FredHaa/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/FredHaa", "id": 6875946, "login": "FredHaa", "node_id": "MDQ6VXNlcjY4NzU5NDY=", "organizations_url": "https://api.github.com/users/FredHaa/orgs", "received_events_url": "https://api.github.com/users/FredHaa/received_events", "repos_url": "https://api.github.com/users/FredHaa/repos", "site_admin": false, "starred_url": "https://api.github.com/users/FredHaa/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/FredHaa/subscriptions", "type": "User", "url": "https://api.github.com/users/FredHaa", "user_view_type": "public" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/7546/reactions" }
timeline_url: https://api.github.com/repos/huggingface/datasets/issues/7546/timeline
performed_via_github_app: null
state_reason: completed
draft: null
pull_request: null
time_to_close_hours: 286.768333
__index_level_0__: 58
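
A small probe for reproducing the diagnosis above: watch resident memory around the load, assuming `psutil` is installed. On a filesystem that cooperates with lazy memory mapping, RSS should stay far below the on-disk dataset size:

```python
# Compare resident set size before and after loading; a large jump suggests
# the filesystem (here, ZFS caching behavior) pulls mapped files into RAM.
import psutil
from datasets import load_dataset

proc = psutil.Process()
print(f"RSS before load: {proc.memory_info().rss / 1e9:.1f} GB")
load_dataset("MLCommons/peoples_speech", "clean")
print(f"RSS after load:  {proc.memory_info().rss / 1e9:.1f} GB")
```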

url: https://api.github.com/repos/huggingface/datasets/issues/7543
repository_url: https://api.github.com/repos/huggingface/datasets
labels_url: https://api.github.com/repos/huggingface/datasets/issues/7543/labels{/name}
comments_url: https://api.github.com/repos/huggingface/datasets/issues/7543/comments
events_url: https://api.github.com/repos/huggingface/datasets/issues/7543/events
html_url: https://github.com/huggingface/datasets/issues/7543
id: 3,026,867,706
node_id: I_kwDODunzps60alX6
number: 7,543
title: The memory-disk mapping failure issue of the map function(resolved, but there are some suggestions.)
{ "avatar_url": "https://avatars.githubusercontent.com/u/76415358?v=4", "events_url": "https://api.github.com/users/jxma20/events{/privacy}", "followers_url": "https://api.github.com/users/jxma20/followers", "following_url": "https://api.github.com/users/jxma20/following{/other_user}", "gists_url": "https://api.github.com/users/jxma20/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/jxma20", "id": 76415358, "login": "jxma20", "node_id": "MDQ6VXNlcjc2NDE1MzU4", "organizations_url": "https://api.github.com/users/jxma20/orgs", "received_events_url": "https://api.github.com/users/jxma20/received_events", "repos_url": "https://api.github.com/users/jxma20/repos", "site_admin": false, "starred_url": "https://api.github.com/users/jxma20/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/jxma20/subscriptions", "type": "User", "url": "https://api.github.com/users/jxma20", "user_view_type": "public" }
labels: []
state: closed
locked: false
assignee: null
assignees: []
milestone: null
comments: []
created_at: 2025-04-29T03:04:59Z
updated_at: 2025-04-30T02:22:17Z
closed_at: 2025-04-30T02:22:17Z
author_association: NONE
type: null
active_lock_reason: null
sub_issues_summary: { "completed": 0, "percent_completed": 0, "total": 0 }
### Describe the bug ## bug When the map function processes a large dataset, it temporarily stores the data in a cache file on the disk. After the data is stored, the memory occupied by it is released. Therefore, when using the map function to process a large-scale dataset, only a dataset space of the size of `writer_batch_size` will be occupied in memory. However, I found that the map function does not actually reduce memory usage when I used it. At first, I thought there was a bug in the program, causing a memory leak—meaning the memory was not released after the data was stored in the cache. But later, I used a Linux command to check for recently modified files during program execution and found that no new files were created or modified. This indicates that the program did not store the dataset in the disk cache. ## bug solved After modifying the parameters of the map function multiple times, I discovered the `cache_file_name` parameter. By changing it, the cache file can be stored in the specified directory. After making this change, I noticed that the cache file appeared. Initially, I found this quite incredible, but then I wondered if the cache file might have failed to be stored in a certain folder. This could be related to the fact that I don't have root privileges. So, I delved into the source code of the map function to find out where the cache file would be stored by default. Eventually, I found the function `def _get_cache_file_path(self, fingerprint):`, which automatically generates the storage path for the cache file. The output was as follows: `/tmp/hf_datasets-j5qco9ug/cache-f2830487643b9cc2.arrow`. My hypothesis was confirmed: the lack of root privileges indeed prevented the cache file from being stored, which in turn prevented the release of memory. Therefore, changing the storage location to a folder where I have write access resolved the issue. ### Steps to reproduce the bug my code `train_data = train_data.map(process_fun, remove_columns=['image_name', 'question_type', 'concern', 'question', 'candidate_answers', 'answer'])` ### Expected behavior Although my bug has been resolved, it still took me nearly a week to search for relevant information and debug the program. However, if a warning or error message about insufficient cache file write permissions could be provided during program execution, I might have been able to identify the cause more quickly. Therefore, I hope this aspect can be improved. I am documenting this bug here so that friends who encounter similar issues can solve their problems in a timely manner. ### Environment info python: 3.10.15 datasets: 3.5.0
{ "avatar_url": "https://avatars.githubusercontent.com/u/76415358?v=4", "events_url": "https://api.github.com/users/jxma20/events{/privacy}", "followers_url": "https://api.github.com/users/jxma20/followers", "following_url": "https://api.github.com/users/jxma20/following{/other_user}", "gists_url": "https://api.github.com/users/jxma20/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/jxma20", "id": 76415358, "login": "jxma20", "node_id": "MDQ6VXNlcjc2NDE1MzU4", "organizations_url": "https://api.github.com/users/jxma20/orgs", "received_events_url": "https://api.github.com/users/jxma20/received_events", "repos_url": "https://api.github.com/users/jxma20/repos", "site_admin": false, "starred_url": "https://api.github.com/users/jxma20/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/jxma20/subscriptions", "type": "User", "url": "https://api.github.com/users/jxma20", "user_view_type": "public" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/7543/reactions" }
timeline_url: https://api.github.com/repos/huggingface/datasets/issues/7543/timeline
performed_via_github_app: null
state_reason: completed
draft: null
pull_request: null
time_to_close_hours: 23.288333
__index_level_0__: 61
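
The reporter's workaround written out, using `map`'s `cache_file_name` parameter; the path is a placeholder for any directory the process can actually write to:

```python
# Redirect the map cache to a writable location; without write access to the
# default cache directory, the Arrow cache file never materializes and memory
# is not released.
train_data = train_data.map(
    process_fun,
    remove_columns=["image_name", "question_type", "concern", "question",
                    "candidate_answers", "answer"],
    cache_file_name="/path/with/write/access/cache-train.arrow",  # placeholder
)
```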

url: https://api.github.com/repos/huggingface/datasets/issues/7538
repository_url: https://api.github.com/repos/huggingface/datasets
labels_url: https://api.github.com/repos/huggingface/datasets/issues/7538/labels{/name}
comments_url: https://api.github.com/repos/huggingface/datasets/issues/7538/comments
events_url: https://api.github.com/repos/huggingface/datasets/issues/7538/events
html_url: https://github.com/huggingface/datasets/issues/7538
id: 3,023,280,056
node_id: I_kwDODunzps60M5e4
number: 7,538
title: `IterableDataset` drops samples when resuming from a checkpoint
{ "avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4", "events_url": "https://api.github.com/users/mariosasko/events{/privacy}", "followers_url": "https://api.github.com/users/mariosasko/followers", "following_url": "https://api.github.com/users/mariosasko/following{/other_user}", "gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/mariosasko", "id": 47462742, "login": "mariosasko", "node_id": "MDQ6VXNlcjQ3NDYyNzQy", "organizations_url": "https://api.github.com/users/mariosasko/orgs", "received_events_url": "https://api.github.com/users/mariosasko/received_events", "repos_url": "https://api.github.com/users/mariosasko/repos", "site_admin": false, "starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions", "type": "User", "url": "https://api.github.com/users/mariosasko", "user_view_type": "public" }
[ { "color": "d73a4a", "default": true, "description": "Something isn't working", "id": 1935892857, "name": "bug", "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug" } ]
closed
false
null
[]
null
[ "Thanks for reporting ! I fixed the issue using RebatchedArrowExamplesIterable before the formatted iterable" ]
created_at: 2025-04-27T19:34:49Z
updated_at: 2025-05-06T14:04:05Z
closed_at: 2025-05-06T14:03:42Z
author_association: COLLABORATOR
type: null
active_lock_reason: null
sub_issues_summary: { "completed": 0, "percent_completed": 0, "total": 0 }
When resuming from a checkpoint, `IterableDataset` will drop samples if `num_shards % world_size == 0` and the underlying example supports `iter_arrow` and needs to be formatted. In that case, the `FormattedExamplesIterable` fetches a batch of samples from the child iterable's `iter_arrow` and yields them one by one (after formatting). However, the child increments the `shard_example_idx` counter (in its `iter_arrow`) before returning the batch for the whole batch size, which leads to a portion of samples being skipped if the iteration (of the parent iterable) is stopped mid-batch. Perhaps one way to avoid this would be by signalling the child iterable which samples (within the chunk) are processed by the parent and which are not, so that it can adjust the `shard_example_idx` counter accordingly. This would also mean the chunk needs to be sliced when resuming, but this is straightforward to implement. The following is a minimal reproducer of the bug: ```python from datasets import Dataset from datasets.distributed import split_dataset_by_node ds = Dataset.from_dict({"n": list(range(24))}) ds = ds.to_iterable_dataset(num_shards=4) world_size = 4 rank = 0 ds_rank = split_dataset_by_node(ds, rank, world_size) it = iter(ds_rank) examples = [] for idx, example in enumerate(it): examples.append(example) if idx == 2: state_dict = ds_rank.state_dict() break ds_rank.load_state_dict(state_dict) it_resumed = iter(ds_rank) examples_resumed = examples[:] for example in it: examples.append(example) for example in it_resumed: examples_resumed.append(example) print("ORIGINAL ITER EXAMPLES:", examples) print("RESUMED ITER EXAMPLES:", examples_resumed) ```
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq", "user_view_type": "public" }
{ "+1": 1, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 1, "url": "https://api.github.com/repos/huggingface/datasets/issues/7538/reactions" }
timeline_url: https://api.github.com/repos/huggingface/datasets/issues/7538/timeline
performed_via_github_app: null
state_reason: completed
draft: null
pull_request: null
time_to_close_hours: 210.481389
__index_level_0__: 66

url: https://api.github.com/repos/huggingface/datasets/issues/7536
repository_url: https://api.github.com/repos/huggingface/datasets
labels_url: https://api.github.com/repos/huggingface/datasets/issues/7536/labels{/name}
comments_url: https://api.github.com/repos/huggingface/datasets/issues/7536/comments
events_url: https://api.github.com/repos/huggingface/datasets/issues/7536/events
html_url: https://github.com/huggingface/datasets/issues/7536
id: 3,018,425,549
node_id: I_kwDODunzps6z6YTN
number: 7,536
title: [Errno 13] Permission denied: on `.incomplete` file
{ "avatar_url": "https://avatars.githubusercontent.com/u/1282383?v=4", "events_url": "https://api.github.com/users/ryan-clancy/events{/privacy}", "followers_url": "https://api.github.com/users/ryan-clancy/followers", "following_url": "https://api.github.com/users/ryan-clancy/following{/other_user}", "gists_url": "https://api.github.com/users/ryan-clancy/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/ryan-clancy", "id": 1282383, "login": "ryan-clancy", "node_id": "MDQ6VXNlcjEyODIzODM=", "organizations_url": "https://api.github.com/users/ryan-clancy/orgs", "received_events_url": "https://api.github.com/users/ryan-clancy/received_events", "repos_url": "https://api.github.com/users/ryan-clancy/repos", "site_admin": false, "starred_url": "https://api.github.com/users/ryan-clancy/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ryan-clancy/subscriptions", "type": "User", "url": "https://api.github.com/users/ryan-clancy", "user_view_type": "public" }
labels: []
state: closed
locked: false
assignee: null
assignees: []
milestone: null
[ "It must be an issue with umask being used by multiple threads indeed. Maybe we can try to make a thread safe function to apply the umask (using filelock for example)", "> It must be an issue with umask being used by multiple threads indeed. Maybe we can try to make a thread safe function to apply the umask (using filelock for example)\n\n@lhoestq is this something which can go in a 3.5.1 release?", "Yes for sure", "@lhoestq - can you take a look at https://github.com/huggingface/datasets/pull/7547/?" ]
created_at: 2025-04-24T20:52:45Z
updated_at: 2025-05-06T13:05:01Z
closed_at: 2025-05-06T13:05:01Z
author_association: CONTRIBUTOR
type: null
active_lock_reason: null
sub_issues_summary: { "completed": 0, "percent_completed": 0, "total": 0 }
### Describe the bug When downloading a dataset, we frequently hit the below Permission Denied error. This looks to happen (at least) across datasets in HF, S3, and GCS. It looks like the `temp_file` being passed [here](https://github.com/huggingface/datasets/blob/main/src/datasets/utils/file_utils.py#L412) can sometimes be created with `000` permissions leading to the permission denied error (the user running the code is still the owner of the file). Deleting that particular file and re-running the code with 0 changes will usually succeed. Is there some race condition happening with the [umask](https://github.com/huggingface/datasets/blob/main/src/datasets/utils/file_utils.py#L416), which is process global, and the [file creation](https://github.com/huggingface/datasets/blob/main/src/datasets/utils/file_utils.py#L404)? ``` _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ .venv/lib/python3.12/site-packages/datasets/load.py:2084: in load_dataset builder_instance.download_and_prepare( .venv/lib/python3.12/site-packages/datasets/builder.py:925: in download_and_prepare self._download_and_prepare( .venv/lib/python3.12/site-packages/datasets/builder.py:1649: in _download_and_prepare super()._download_and_prepare( .venv/lib/python3.12/site-packages/datasets/builder.py:979: in _download_and_prepare split_generators = self._split_generators(dl_manager, **split_generators_kwargs) .venv/lib/python3.12/site-packages/datasets/packaged_modules/folder_based_builder/folder_based_builder.py:120: in _split_generators downloaded_files = dl_manager.download(files) .venv/lib/python3.12/site-packages/datasets/download/download_manager.py:159: in download downloaded_path_or_paths = map_nested( .venv/lib/python3.12/site-packages/datasets/utils/py_utils.py:514: in map_nested _single_map_nested((function, obj, batched, batch_size, types, None, True, None)) .venv/lib/python3.12/site-packages/datasets/utils/py_utils.py:382: in _single_map_nested return [mapped_item for batch in iter_batched(data_struct, batch_size) for mapped_item in function(batch)] .venv/lib/python3.12/site-packages/datasets/download/download_manager.py:206: in _download_batched return thread_map( .venv/lib/python3.12/site-packages/tqdm/contrib/concurrent.py:69: in thread_map return _executor_map(ThreadPoolExecutor, fn, *iterables, **tqdm_kwargs) .venv/lib/python3.12/site-packages/tqdm/contrib/concurrent.py:51: in _executor_map return list(tqdm_class(ex.map(fn, *iterables, chunksize=chunksize), **kwargs)) .venv/lib/python3.12/site-packages/tqdm/std.py:1181: in __iter__ for obj in iterable: ../../../_tool/Python/3.12.10/x64/lib/python3.12/concurrent/futures/_base.py:619: in result_iterator yield _result_or_cancel(fs.pop()) ../../../_tool/Python/3.12.10/x64/lib/python3.12/concurrent/futures/_base.py:317: in _result_or_cancel return fut.result(timeout) ../../../_tool/Python/3.12.10/x64/lib/python3.12/concurrent/futures/_base.py:449: in result return self.__get_result() ../../../_tool/Python/3.12.10/x64/lib/python3.12/concurrent/futures/_base.py:401: in __get_result raise self._exception ../../../_tool/Python/3.12.10/x64/lib/python3.12/concurrent/futures/thread.py:59: in run result = self.fn(*self.args, **self.kwargs) .venv/lib/python3.12/site-packages/datasets/download/download_manager.py:229: in _download_single out = cached_path(url_or_filename, download_config=download_config) .venv/lib/python3.12/site-packages/datasets/utils/file_utils.py:206: in cached_path output_path = get_from_cache( 
.venv/lib/python3.12/site-packages/datasets/utils/file_utils.py:412: in get_from_cache fsspec_get(url, temp_file, storage_options=storage_options, desc=download_desc, disable_tqdm=disable_tqdm) .venv/lib/python3.12/site-packages/datasets/utils/file_utils.py:331: in fsspec_get fs.get_file(path, temp_file.name, callback=callback) .venv/lib/python3.12/site-packages/fsspec/asyn.py:118: in wrapper return sync(self.loop, func, *args, **kwargs) .venv/lib/python3.12/site-packages/fsspec/asyn.py:103: in sync raise return_result .venv/lib/python3.12/site-packages/fsspec/asyn.py:56: in _runner result[0] = await coro _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ self = <s3fs.core.S3FileSystem object at 0x7f27c18b2e70> rpath = '<my-bucket>/<my-prefix>/img_1.jpg' lpath = '/home/runner/_work/_temp/hf_cache/downloads/6c97983efa4e24e534557724655df8247a0bd04326cdfc4a95b638c11e78222d.incomplete' callback = <datasets.utils.file_utils.TqdmCallback object at 0x7f27c00cdbe0> version_id = None, kwargs = {} _open_file = <function S3FileSystem._get_file.<locals>._open_file at 0x7f27628d1120> body = <StreamingBody at 0x7f276344fa80 for ClientResponse at 0x7f27c015fce0> content_length = 521923, failed_reads = 0, bytes_read = 0 async def _get_file( self, rpath, lpath, callback=_DEFAULT_CALLBACK, version_id=None, **kwargs ): if os.path.isdir(lpath): return bucket, key, vers = self.split_path(rpath) async def _open_file(range: int): kw = self.req_kw.copy() if range: kw["Range"] = f"bytes={range}-" resp = await self._call_s3( "get_object", Bucket=bucket, Key=key, **version_id_kw(version_id or vers), **kw, ) return resp["Body"], resp.get("ContentLength", None) body, content_length = await _open_file(range=0) callback.set_size(content_length) failed_reads = 0 bytes_read = 0 try: > with open(lpath, "wb") as f0: E PermissionError: [Errno 13] Permission denied: '/home/runner/_work/_temp/hf_cache/downloads/6c97983efa4e24e534557724655df8247a0bd04326cdfc4a95b638c11e78222d.incomplete' .venv/lib/python3.12/site-packages/s3fs/core.py:1355: PermissionError ``` ### Steps to reproduce the bug I believe this is a race condition and cannot reliably re-produce it, but it happens fairly frequently in our GitHub Actions tests and can also be re-produced (with lesser frequency) on cloud VMs. ### Expected behavior The dataset loads properly with no permission denied error. ### Environment info - `datasets` version: 3.5.0 - Platform: Linux-5.10.0-34-cloud-amd64-x86_64-with-glibc2.31 - Python version: 3.12.10 - `huggingface_hub` version: 0.30.2 - PyArrow version: 19.0.1 - Pandas version: 2.2.3 - `fsspec` version: 2024.12.0
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq", "user_view_type": "public" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/7536/reactions" }
timeline_url: https://api.github.com/repos/huggingface/datasets/issues/7536/timeline
performed_via_github_app: null
state_reason: completed
draft: null
pull_request: null
time_to_close_hours: 280.204444
__index_level_0__: 68
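
A sketch of the thread-safe umask idea floated in the comments; names and the umask value are illustrative, not the eventual fix in #7547:

```python
# Serialize the process-global umask flip so concurrent download threads
# cannot observe each other's restrictive temporary permissions.
import os
import threading

_UMASK_LOCK = threading.Lock()

def open_with_umask(path: str, umask: int = 0o077):
    with _UMASK_LOCK:
        old = os.umask(umask)
        try:
            fd = os.open(path, os.O_WRONLY | os.O_CREAT, 0o666)
        finally:
            os.umask(old)
    return os.fdopen(fd, "wb")
```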

url: https://api.github.com/repos/huggingface/datasets/issues/7530
repository_url: https://api.github.com/repos/huggingface/datasets
labels_url: https://api.github.com/repos/huggingface/datasets/issues/7530/labels{/name}
comments_url: https://api.github.com/repos/huggingface/datasets/issues/7530/comments
events_url: https://api.github.com/repos/huggingface/datasets/issues/7530/events
html_url: https://github.com/huggingface/datasets/issues/7530
id: 3,007,452,499
node_id: I_kwDODunzps6zQhVT
number: 7,530
title: How to solve "Spaces stuck in Building" problems
{ "avatar_url": "https://avatars.githubusercontent.com/u/10137?v=4", "events_url": "https://api.github.com/users/ghost/events{/privacy}", "followers_url": "https://api.github.com/users/ghost/followers", "following_url": "https://api.github.com/users/ghost/following{/other_user}", "gists_url": "https://api.github.com/users/ghost/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/ghost", "id": 10137, "login": "ghost", "node_id": "MDQ6VXNlcjEwMTM3", "organizations_url": "https://api.github.com/users/ghost/orgs", "received_events_url": "https://api.github.com/users/ghost/received_events", "repos_url": "https://api.github.com/users/ghost/repos", "site_admin": false, "starred_url": "https://api.github.com/users/ghost/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ghost/subscriptions", "type": "User", "url": "https://api.github.com/users/ghost", "user_view_type": "public" }
labels: []
state: closed
locked: false
assignee: null
assignees: []
milestone: null
[ "I'm facing the same issue—Space stuck in \"Building\" even after restart and Factory rebuild. Any fix?\n", "> I'm facing the same issue—Space stuck in \"Building\" even after restart and Factory rebuild. Any fix?\n\nAlso see https://github.com/huggingface/huggingface_hub/issues/3019", "I'm facing the same issue. The build fails with the same error, and restarting won't help. Is there a fix or ETA? " ]
created_at: 2025-04-21T03:08:38Z
updated_at: 2025-04-22T07:49:52Z
closed_at: 2025-04-22T07:49:52Z
author_association: NONE
type: null
active_lock_reason: null
sub_issues_summary: { "completed": 0, "percent_completed": 0, "total": 0 }
### Describe the bug Public spaces may stuck in Building after restarting, error log as follows: build error Unexpected job error ERROR: failed to push spaces-registry.huggingface.tech/spaces/*:cpu-*-*: unexpected status from HEAD request to https://spaces-registry.huggingface.tech/v2/spaces/*/manifests/cpu-*-*: 401 Unauthorized ### Steps to reproduce the bug Restart space / Factory rebuild cannot avoid it ### Expected behavior Fix this problem ### Environment info no requirements.txt can still happen python gradio spaces
closed_by: null
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/7530/reactions" }
timeline_url: https://api.github.com/repos/huggingface/datasets/issues/7530/timeline
performed_via_github_app: null
state_reason: completed
draft: null
pull_request: null
time_to_close_hours: 28.687222
__index_level_0__: 74

url: https://api.github.com/repos/huggingface/datasets/issues/7517
repository_url: https://api.github.com/repos/huggingface/datasets
labels_url: https://api.github.com/repos/huggingface/datasets/issues/7517/labels{/name}
comments_url: https://api.github.com/repos/huggingface/datasets/issues/7517/comments
events_url: https://api.github.com/repos/huggingface/datasets/issues/7517/events
html_url: https://github.com/huggingface/datasets/issues/7517
id: 2,996,106,077
node_id: I_kwDODunzps6ylPNd
number: 7,517
title: Image Feature in Datasets Library Fails to Handle bytearray Objects from Spark DataFrames
{ "avatar_url": "https://avatars.githubusercontent.com/u/73196164?v=4", "events_url": "https://api.github.com/users/giraffacarp/events{/privacy}", "followers_url": "https://api.github.com/users/giraffacarp/followers", "following_url": "https://api.github.com/users/giraffacarp/following{/other_user}", "gists_url": "https://api.github.com/users/giraffacarp/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/giraffacarp", "id": 73196164, "login": "giraffacarp", "node_id": "MDQ6VXNlcjczMTk2MTY0", "organizations_url": "https://api.github.com/users/giraffacarp/orgs", "received_events_url": "https://api.github.com/users/giraffacarp/received_events", "repos_url": "https://api.github.com/users/giraffacarp/repos", "site_admin": false, "starred_url": "https://api.github.com/users/giraffacarp/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/giraffacarp/subscriptions", "type": "User", "url": "https://api.github.com/users/giraffacarp", "user_view_type": "public" }
labels: []
state: closed
locked: false
{ "avatar_url": "https://avatars.githubusercontent.com/u/73196164?v=4", "events_url": "https://api.github.com/users/giraffacarp/events{/privacy}", "followers_url": "https://api.github.com/users/giraffacarp/followers", "following_url": "https://api.github.com/users/giraffacarp/following{/other_user}", "gists_url": "https://api.github.com/users/giraffacarp/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/giraffacarp", "id": 73196164, "login": "giraffacarp", "node_id": "MDQ6VXNlcjczMTk2MTY0", "organizations_url": "https://api.github.com/users/giraffacarp/orgs", "received_events_url": "https://api.github.com/users/giraffacarp/received_events", "repos_url": "https://api.github.com/users/giraffacarp/repos", "site_admin": false, "starred_url": "https://api.github.com/users/giraffacarp/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/giraffacarp/subscriptions", "type": "User", "url": "https://api.github.com/users/giraffacarp", "user_view_type": "public" }
[ { "avatar_url": "https://avatars.githubusercontent.com/u/73196164?v=4", "events_url": "https://api.github.com/users/giraffacarp/events{/privacy}", "followers_url": "https://api.github.com/users/giraffacarp/followers", "following_url": "https://api.github.com/users/giraffacarp/following{/other_user}", "gists_url": "https://api.github.com/users/giraffacarp/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/giraffacarp", "id": 73196164, "login": "giraffacarp", "node_id": "MDQ6VXNlcjczMTk2MTY0", "organizations_url": "https://api.github.com/users/giraffacarp/orgs", "received_events_url": "https://api.github.com/users/giraffacarp/received_events", "repos_url": "https://api.github.com/users/giraffacarp/repos", "site_admin": false, "starred_url": "https://api.github.com/users/giraffacarp/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/giraffacarp/subscriptions", "type": "User", "url": "https://api.github.com/users/giraffacarp", "user_view_type": "public" } ]
milestone: null
[ "Hi ! The `Image()` type accepts either\n- a `bytes` object containing the image bytes\n- a `str` object containing the image path\n- a `PIL.Image` object\n\nbut it doesn't support `bytearray`, maybe you can convert to `bytes` beforehand ?", "Hi @lhoestq, \nconverting to bytes is certainly possible and would work around the error. However, the core issue is that `Dataset` and `IterableDataset` behave differently with the features.\n\nI’d be happy to work on a fix for this issue.", "I see, that's an issue indeed. Feel free to ping me if I can help with reviews or any guidance\n\nIf it can help, the code that takes a Spark DataFrame and iterates on the rows for `IterableDataset` is here: \n\nhttps://github.com/huggingface/datasets/blob/6a96bf313085d7538a999b929a550e14e1d406c9/src/datasets/packaged_modules/spark/spark.py#L49-L53", "#self-assign" ]
created_at: 2025-04-15T11:29:17Z
updated_at: 2025-05-07T14:17:30Z
closed_at: 2025-05-07T14:17:30Z
author_association: CONTRIBUTOR
type: null
active_lock_reason: null
sub_issues_summary: { "completed": 0, "percent_completed": 0, "total": 0 }
### Describe the bug When using `IterableDataset.from_spark()` with a Spark DataFrame containing image data, the `Image` feature class fails to properly process this data type, causing an `AttributeError: 'bytearray' object has no attribute 'get'` ### Steps to reproduce the bug 1. Create a Spark DataFrame with a column containing image data as bytearray objects 2. Define a Feature schema with an Image feature 3. Create an IterableDataset using `IterableDataset.from_spark()` 4. Attempt to iterate through the dataset ```python from pyspark.sql import SparkSession from datasets import Dataset, IterableDataset, Features, Image, Value # initialize spark spark = SparkSession.builder.appName("MinimalRepro").getOrCreate() # create spark dataframe data = [(0, open("image.png", "rb").read())] df = spark.createDataFrame(data, "idx: int, image: binary") # convert to dataset features = Features({"idx": Value("int64"), "image": Image()}) ds = Dataset.from_spark(df, features=features) ds_iter = IterableDataset.from_spark(df, features=features) # iterate print(next(iter(ds))) print(next(iter(ds_iter))) ``` ### Expected behavior The features should work on `IterableDataset` the same way they work on `Dataset` ### Environment info - `datasets` version: 3.5.0 - Platform: macOS-15.3.2-arm64-arm-64bit - Python version: 3.12.7 - `huggingface_hub` version: 0.30.2 - PyArrow version: 18.1.0 - Pandas version: 2.2.3 - `fsspec` version: 2024.12.0
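Until the underlying fix lands, the bytes conversion suggested in the first comment can be applied on the iterable side. A minimal sketch, assuming datasets 3.x where `IterableDataset.cast` is available; the app name and mapping are illustrative, not the library's fix:

```python
from pyspark.sql import SparkSession
from datasets import Features, Image, IterableDataset, Value

spark = SparkSession.builder.appName("BytearrayWorkaround").getOrCreate()

data = [(0, open("image.png", "rb").read())]
df = spark.createDataFrame(data, "idx: int, image: binary")

features = Features({"idx": Value("int64"), "image": Image()})

# Build the iterable dataset without features first, coerce the
# bytearray yielded by the Spark row iterator into bytes, then cast
# to the intended schema so Image() receives a type it supports.
ds_iter = IterableDataset.from_spark(df)
ds_iter = ds_iter.map(lambda ex: {"image": bytes(ex["image"])})
ds_iter = ds_iter.cast(features)

print(next(iter(ds_iter)))
```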
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq", "user_view_type": "public" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/7517/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/7517/timeline
null
completed
null
null
530.803611
87
https://api.github.com/repos/huggingface/datasets/issues/7516
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/7516/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/7516/comments
https://api.github.com/repos/huggingface/datasets/issues/7516/events
https://github.com/huggingface/datasets/issues/7516
2,995,780,283
I_kwDODunzps6yj_q7
7,516
unsloth/DeepSeek-R1-Distill-Qwen-32B server error
{ "avatar_url": "https://avatars.githubusercontent.com/u/164353862?v=4", "events_url": "https://api.github.com/users/Editor-1/events{/privacy}", "followers_url": "https://api.github.com/users/Editor-1/followers", "following_url": "https://api.github.com/users/Editor-1/following{/other_user}", "gists_url": "https://api.github.com/users/Editor-1/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/Editor-1", "id": 164353862, "login": "Editor-1", "node_id": "U_kgDOCcvXRg", "organizations_url": "https://api.github.com/users/Editor-1/orgs", "received_events_url": "https://api.github.com/users/Editor-1/received_events", "repos_url": "https://api.github.com/users/Editor-1/repos", "site_admin": false, "starred_url": "https://api.github.com/users/Editor-1/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Editor-1/subscriptions", "type": "User", "url": "https://api.github.com/users/Editor-1", "user_view_type": "public" }
[]
closed
false
null
[]
null
[]
2025-04-15T09:26:53Z
2025-04-15T09:57:26Z
2025-04-15T09:57:26Z
NONE
null
null
{ "completed": 0, "percent_completed": 0, "total": 0 }
### Describe the bug `HfHubHTTPError`: 500 Server Error: Internal Server Error for url: https://huggingface.co/api/models/unsloth/deepseek-r1-distill-qwen-32b-bnb-4bit/commits/main (request id: root=1-67fe23fa-3a2150eb444c2a823c388579;de3aed68-c397-4da5-94d4-6565efd3b919) internal error - we're working hard to fix this as soon as possible! ### Steps to reproduce the bug Any request involving unsloth/DeepSeek-R1-Distill-Qwen-32B returns the server error above. ### Expected behavior The Hub API should respond normally. ### Environment info The web side is also unavailable, so the problem is not specific to the local environment.
{ "avatar_url": "https://avatars.githubusercontent.com/u/164353862?v=4", "events_url": "https://api.github.com/users/Editor-1/events{/privacy}", "followers_url": "https://api.github.com/users/Editor-1/followers", "following_url": "https://api.github.com/users/Editor-1/following{/other_user}", "gists_url": "https://api.github.com/users/Editor-1/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/Editor-1", "id": 164353862, "login": "Editor-1", "node_id": "U_kgDOCcvXRg", "organizations_url": "https://api.github.com/users/Editor-1/orgs", "received_events_url": "https://api.github.com/users/Editor-1/received_events", "repos_url": "https://api.github.com/users/Editor-1/repos", "site_admin": false, "starred_url": "https://api.github.com/users/Editor-1/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Editor-1/subscriptions", "type": "User", "url": "https://api.github.com/users/Editor-1", "user_view_type": "public" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/7516/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/7516/timeline
null
completed
null
null
0.509167
88
https://api.github.com/repos/huggingface/datasets/issues/7515
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/7515/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/7515/comments
https://api.github.com/repos/huggingface/datasets/issues/7515/events
https://github.com/huggingface/datasets/issues/7515
2,995,082,418
I_kwDODunzps6yhVSy
7,515
`concatenate_datasets` does not preserve Pytorch format for IterableDataset
{ "avatar_url": "https://avatars.githubusercontent.com/u/5140987?v=4", "events_url": "https://api.github.com/users/francescorubbo/events{/privacy}", "followers_url": "https://api.github.com/users/francescorubbo/followers", "following_url": "https://api.github.com/users/francescorubbo/following{/other_user}", "gists_url": "https://api.github.com/users/francescorubbo/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/francescorubbo", "id": 5140987, "login": "francescorubbo", "node_id": "MDQ6VXNlcjUxNDA5ODc=", "organizations_url": "https://api.github.com/users/francescorubbo/orgs", "received_events_url": "https://api.github.com/users/francescorubbo/received_events", "repos_url": "https://api.github.com/users/francescorubbo/repos", "site_admin": false, "starred_url": "https://api.github.com/users/francescorubbo/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/francescorubbo/subscriptions", "type": "User", "url": "https://api.github.com/users/francescorubbo", "user_view_type": "public" }
[]
closed
false
null
[]
null
[ "Hi ! Oh indeed it would be cool to return the same format in that case. Would you like to submit a PR ? The function that does the concatenation is here:\n\nhttps://github.com/huggingface/datasets/blob/90e5bf8a8599b625d6103ee5ac83b98269991141/src/datasets/iterable_dataset.py#L3375-L3380", "Thank you for the pointer, @lhoestq ! See #7522 " ]
2025-04-15T04:36:34Z
2025-05-19T15:07:38Z
2025-05-19T15:07:38Z
CONTRIBUTOR
null
null
{ "completed": 0, "percent_completed": 0, "total": 0 }
### Describe the bug When concatenating datasets with `concatenate_datasets`, I would expect the resulting combined dataset to be in the same format as the inputs (assuming it's consistent). This is indeed the behavior when combining `Dataset`, but not when combining `IterableDataset`. Specifically, when applying `concatenate_datasets` to a list of `IterableDataset` in PyTorch format (i.e. using `.with_format("torch")`), the output `IterableDataset` is not in PyTorch format. ### Steps to reproduce the bug ```python import datasets ds = datasets.Dataset.from_dict({"a": [1,2,3]}) iterable_ds = ds.to_iterable_dataset() datasets.concatenate_datasets([ds.with_format("torch")]) # <- this preserves PyTorch format datasets.concatenate_datasets([iterable_ds.with_format("torch")]) # <- this does NOT preserve PyTorch format ``` ### Expected behavior PyTorch format should be preserved when combining `IterableDataset`s in PyTorch format. ### Environment info datasets==3.5.0, Python 3.11.11, torch==2.2.2
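As an interim workaround (this is not the library fix, which the linked PR #7522 implements), re-applying the format on the concatenated result restores the expected behavior. A minimal sketch, assuming torch is installed:

```python
import datasets

ds = datasets.Dataset.from_dict({"a": [1, 2, 3]})
iterable_ds = ds.to_iterable_dataset().with_format("torch")

# The concatenation currently comes back in the default python format...
combined = datasets.concatenate_datasets([iterable_ds, iterable_ds])

# ...so re-apply the format on the result as a workaround.
combined = combined.with_format("torch")

print(next(iter(combined)))  # yields torch tensors again
```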
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq", "user_view_type": "public" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/7515/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/7515/timeline
null
completed
null
null
826.517778
89
https://api.github.com/repos/huggingface/datasets/issues/7588
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/7588/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/7588/comments
https://api.github.com/repos/huggingface/datasets/issues/7588/events
https://github.com/huggingface/datasets/issues/7588
3,094,012,025
I_kwDODunzps64auB5
7,588
ValueError: Invalid pattern: '**' can only be an entire path component [Colab]
{ "avatar_url": "https://avatars.githubusercontent.com/u/43061081?v=4", "events_url": "https://api.github.com/users/wkambale/events{/privacy}", "followers_url": "https://api.github.com/users/wkambale/followers", "following_url": "https://api.github.com/users/wkambale/following{/other_user}", "gists_url": "https://api.github.com/users/wkambale/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/wkambale", "id": 43061081, "login": "wkambale", "node_id": "MDQ6VXNlcjQzMDYxMDgx", "organizations_url": "https://api.github.com/users/wkambale/orgs", "received_events_url": "https://api.github.com/users/wkambale/received_events", "repos_url": "https://api.github.com/users/wkambale/repos", "site_admin": false, "starred_url": "https://api.github.com/users/wkambale/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/wkambale/subscriptions", "type": "User", "url": "https://api.github.com/users/wkambale", "user_view_type": "public" }
[]
closed
false
null
[]
null
[ "Could you please run the following code snippet in your environment and share the exact output? This will help check for any compatibility issues within the env itself. \n\n```\nimport datasets\nimport huggingface_hub\nimport fsspec\n\nprint(\"datasets version:\", datasets.__version__)\nprint(\"huggingface_hub version:\", huggingface_hub.__version__)\nprint(\"fsspec version:\", fsspec.__version__)\n```", "```bash\ndatasets version: 2.14.4\nhuggingface_hub version: 0.31.4\nfsspec version: 2025.3.2\n```", "Version 2.14.4 is not the latest version available, in fact it is from August 08, 2023 (you can check here: https://pypi.org/project/datasets/#history)\n\nUse pip install datasets==3.6.0 to install a more recent version (from May 7, 2025)\n\nI also had the same problem with Colab, after updating to the latest version it was solved.\n\nI hope it helps", "thank you @CleitonOERocha. it sure did help.\n\nupdating `datasets` to v3.6.0 and keeping `fsspec` on v2025.3.2 eliminates the issue.", "Very helpful, thank you!" ]
2025-05-27T13:46:05Z
2025-05-30T13:22:52Z
2025-05-30T01:26:30Z
NONE
null
null
{ "completed": 0, "percent_completed": 0, "total": 0 }
### Describe the bug I have a dataset on HF [here](https://huggingface.co/datasets/kambale/luganda-english-parallel-corpus) that I've previously used to train a translation model [here](https://huggingface.co/kambale/pearl-11m-translate). I then changed a few hyperparameters to increase the number of tokens for the model and the number of Transformer layers. However, when I try to load the dataset, this error keeps coming up. I have tried everything and re-written the code many times, and it still appears. ### Steps to reproduce the bug Imports: ```bash !pip install datasets huggingface_hub fsspec ``` Python code: ```python from datasets import load_dataset HF_DATASET_NAME = "kambale/luganda-english-parallel-corpus" # Load the dataset try: if not HF_DATASET_NAME or HF_DATASET_NAME == "YOUR_HF_DATASET_NAME": raise ValueError( "Please provide a valid Hugging Face dataset name." ) dataset = load_dataset(HF_DATASET_NAME) # Omitted code as the error happens on the line above except ValueError as ve: print(f"Configuration Error: {ve}") raise except Exception as e: print(f"An error occurred while loading the dataset '{HF_DATASET_NAME}': {e}") raise e ``` I have also gone through this [issue](https://github.com/huggingface/datasets/issues/6737) and nothing helps. ### Expected behavior Loading the dataset successfully and performing splits (train, test, validation). ### Environment info From the imports, I do not install specific versions of these libraries, so the latest available version is installed: * `datasets` version: latest * `Platform`: Google Colab * `Hardware`: NVIDIA A100 GPU * `Python` version: latest * `huggingface_hub` version: latest * `fsspec` version: latest
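Per the thread above, the root cause was a stale `datasets` 2.14.4 resolved by Colab alongside a newer fsspec. Pinning the versions reported to work resolves it; a minimal sketch of the fixed setup:

```python
# In Colab, explicitly install a recent release before importing datasets:
#   !pip install -U "datasets==3.6.0"
# (the old cached 2.14.4 paired with fsspec 2025.3.2 triggers the
# "Invalid pattern: '**'" error; 3.6.0 is the combination reported
# to work in this thread)
from datasets import load_dataset

dataset = load_dataset("kambale/luganda-english-parallel-corpus")
print(dataset)
```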
{ "avatar_url": "https://avatars.githubusercontent.com/u/43061081?v=4", "events_url": "https://api.github.com/users/wkambale/events{/privacy}", "followers_url": "https://api.github.com/users/wkambale/followers", "following_url": "https://api.github.com/users/wkambale/following{/other_user}", "gists_url": "https://api.github.com/users/wkambale/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/wkambale", "id": 43061081, "login": "wkambale", "node_id": "MDQ6VXNlcjQzMDYxMDgx", "organizations_url": "https://api.github.com/users/wkambale/orgs", "received_events_url": "https://api.github.com/users/wkambale/received_events", "repos_url": "https://api.github.com/users/wkambale/repos", "site_admin": false, "starred_url": "https://api.github.com/users/wkambale/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/wkambale/subscriptions", "type": "User", "url": "https://api.github.com/users/wkambale", "user_view_type": "public" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/7588/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/7588/timeline
null
completed
null
null
59.673611
118
https://api.github.com/repos/huggingface/datasets/issues/7583
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/7583/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/7583/comments
https://api.github.com/repos/huggingface/datasets/issues/7583/events
https://github.com/huggingface/datasets/issues/7583
3,088,987,757
I_kwDODunzps64HjZt
7,583
load_dataset type stubs reject List[str] for split parameter, but runtime supports it
{ "avatar_url": "https://avatars.githubusercontent.com/u/25069969?v=4", "events_url": "https://api.github.com/users/hierr/events{/privacy}", "followers_url": "https://api.github.com/users/hierr/followers", "following_url": "https://api.github.com/users/hierr/following{/other_user}", "gists_url": "https://api.github.com/users/hierr/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/hierr", "id": 25069969, "login": "hierr", "node_id": "MDQ6VXNlcjI1MDY5OTY5", "organizations_url": "https://api.github.com/users/hierr/orgs", "received_events_url": "https://api.github.com/users/hierr/received_events", "repos_url": "https://api.github.com/users/hierr/repos", "site_admin": false, "starred_url": "https://api.github.com/users/hierr/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/hierr/subscriptions", "type": "User", "url": "https://api.github.com/users/hierr", "user_view_type": "public" }
[]
closed
false
null
[]
null
[]
2025-05-25T02:33:18Z
2025-05-26T18:29:58Z
2025-05-26T18:29:58Z
NONE
null
null
{ "completed": 0, "percent_completed": 0, "total": 0 }
### Describe the bug The [load_dataset](https://huggingface.co/docs/datasets/v3.6.0/en/package_reference/loading_methods#datasets.load_dataset) method accepts a `List[str]` as the split parameter at runtime, however, the current type stubs restrict the split parameter to `Union[str, Split, None]`. This causes type checkers like Pylance to raise `reportArgumentType` errors when passing a list of strings, even though it works as intended at runtime. ### Steps to reproduce the bug 1. Use load_dataset with multiple splits e.g.: ``` from datasets import load_dataset ds_train, ds_val, ds_test = load_dataset( "Silly-Machine/TuPyE-Dataset", "binary", split=["train[:75%]", "train[75%:]", "test"] ) ``` 2. Observe that code executes correctly at runtime and Pylance raises `Argument of type "List[str]" cannot be assigned to parameter "split" of type "str | Split | None"` ### Expected behavior The type stubs for [load_dataset](https://huggingface.co/docs/datasets/v3.6.0/en/package_reference/loading_methods#datasets.load_dataset) should accept `Union[str, Split, List[str], None]` or more specific overloads for the split parameter to correctly represent runtime behavior. ### Environment info - `datasets` version: 3.6.0 - Platform: Linux-5.15.167.4-microsoft-standard-WSL2-x86_64-with-glibc2.39 - Python version: 3.12.7 - `huggingface_hub` version: 0.32.0 - PyArrow version: 20.0.0 - Pandas version: 2.2.3 - `fsspec` version: 2025.3.0
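Until the stubs gain an overload for lists, the call can be made type-checker-clean without changing runtime behavior. One possible sketch using `typing.cast`, which is a no-op at runtime:

```python
from typing import Union, cast

from datasets import Split, load_dataset

# cast() only tells the type checker to accept the list; at runtime
# load_dataset already handles a list of split expressions correctly.
splits = ["train[:75%]", "train[75%:]", "test"]
ds_train, ds_val, ds_test = load_dataset(
    "Silly-Machine/TuPyE-Dataset",
    "binary",
    split=cast(Union[str, Split], splits),
)
```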
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq", "user_view_type": "public" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/7583/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/7583/timeline
null
completed
null
null
39.944444
123
https://api.github.com/repos/huggingface/datasets/issues/7577
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/7577/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/7577/comments
https://api.github.com/repos/huggingface/datasets/issues/7577/events
https://github.com/huggingface/datasets/issues/7577
3,080,833,740
I_kwDODunzps63ocrM
7,577
arrow_schema is not compatible with list
{ "avatar_url": "https://avatars.githubusercontent.com/u/164412025?v=4", "events_url": "https://api.github.com/users/jonathanshen-upwork/events{/privacy}", "followers_url": "https://api.github.com/users/jonathanshen-upwork/followers", "following_url": "https://api.github.com/users/jonathanshen-upwork/following{/other_user}", "gists_url": "https://api.github.com/users/jonathanshen-upwork/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/jonathanshen-upwork", "id": 164412025, "login": "jonathanshen-upwork", "node_id": "U_kgDOCcy6eQ", "organizations_url": "https://api.github.com/users/jonathanshen-upwork/orgs", "received_events_url": "https://api.github.com/users/jonathanshen-upwork/received_events", "repos_url": "https://api.github.com/users/jonathanshen-upwork/repos", "site_admin": false, "starred_url": "https://api.github.com/users/jonathanshen-upwork/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/jonathanshen-upwork/subscriptions", "type": "User", "url": "https://api.github.com/users/jonathanshen-upwork", "user_view_type": "public" }
[]
closed
false
null
[]
null
[ "Thanks for reporting, I'll look into it", "Actually it looks like you just forgot parenthesis:\n\n```diff\n- f = datasets.Features({'x': list[datasets.Value(dtype='int32')]})\n+ f = datasets.Features({'x': list([datasets.Value(dtype='int32')])})\n```\n\nor simply using the `[ ]` syntax:\n\n```python\nf = datasets.Features({'x':[datasets.Value(dtype='int32')]})\n```\n\nI'm closing this issue if you don't mind", "Ah is that what the syntax is? I don't think I was able to find an actual example of it so I assumed it was in the same way that you specify types eg. `list[int]`. This is good to know, thanks." ]
2025-05-21T16:37:01Z
2025-05-26T18:49:51Z
2025-05-26T18:32:55Z
NONE
null
null
{ "completed": 0, "percent_completed": 0, "total": 0 }
### Describe the bug ``` import datasets f = datasets.Features({'x': list[datasets.Value(dtype='int32')]}) f.arrow_schema Traceback (most recent call last): File "datasets/features/features.py", line 1826, in arrow_schema return pa.schema(self.type).with_metadata({"huggingface": json.dumps(hf_metadata)}) ^^^^^^^^^ File "datasets/features/features.py", line 1815, in type return get_nested_type(self) ^^^^^^^^^^^^^^^^^^^^^ File "datasets/features/features.py", line 1252, in get_nested_type return pa.struct( ^^^^^^^^^^ File "pyarrow/types.pxi", line 5406, in pyarrow.lib.struct File "pyarrow/types.pxi", line 3890, in pyarrow.lib.field File "pyarrow/types.pxi", line 5918, in pyarrow.lib.ensure_type TypeError: DataType expected, got <class 'list'> ``` The following works ``` f = datasets.Features({'x': datasets.LargeList(datasets.Value(dtype='int32'))}) ``` ### Expected behavior According to https://github.com/huggingface/datasets/blob/458f45a22c3cc9aea5f442f6f519333dcfeae9b9/src/datasets/features/features.py#L1765, a Python list should be a valid type specification for features. ### Environment info - `datasets` version: 3.5.1 - Platform: macOS-15.5-arm64-arm-64bit - Python version: 3.12.9 - `huggingface_hub` version: 0.30.2 - PyArrow version: 19.0.1 - Pandas version: 2.2.3 - `fsspec` version: 2024.12.0
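As the maintainer's reply clarifies, the supported syntax is a list instance containing the feature, not the `list[...]` type subscription. A short sketch contrasting the two working forms:

```python
import datasets

# A list *instance* wrapping the feature is the documented syntax;
# list[...] subscription produces a types.GenericAlias, which
# pyarrow cannot turn into a DataType.
f = datasets.Features({"x": [datasets.Value(dtype="int32")]})
print(f.arrow_schema)  # x: list<item: int32>

# LargeList is the variant with 64-bit offsets:
f2 = datasets.Features({"x": datasets.LargeList(datasets.Value(dtype="int32"))})
print(f2.arrow_schema)  # x: large_list<item: int32>
```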
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq", "user_view_type": "public" }
{ "+1": 1, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 1, "url": "https://api.github.com/repos/huggingface/datasets/issues/7577/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/7577/timeline
null
completed
null
null
121.931667
128
https://api.github.com/repos/huggingface/datasets/issues/7561
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/7561/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/7561/comments
https://api.github.com/repos/huggingface/datasets/issues/7561/events
https://github.com/huggingface/datasets/issues/7561
3,046,302,653
I_kwDODunzps61kuO9
7,561
NotImplementedError: <class 'datasets.iterable_dataset.RepeatExamplesIterable'> doesn't implement num_shards yet
{ "avatar_url": "https://avatars.githubusercontent.com/u/32219669?v=4", "events_url": "https://api.github.com/users/cyanic-selkie/events{/privacy}", "followers_url": "https://api.github.com/users/cyanic-selkie/followers", "following_url": "https://api.github.com/users/cyanic-selkie/following{/other_user}", "gists_url": "https://api.github.com/users/cyanic-selkie/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/cyanic-selkie", "id": 32219669, "login": "cyanic-selkie", "node_id": "MDQ6VXNlcjMyMjE5NjY5", "organizations_url": "https://api.github.com/users/cyanic-selkie/orgs", "received_events_url": "https://api.github.com/users/cyanic-selkie/received_events", "repos_url": "https://api.github.com/users/cyanic-selkie/repos", "site_admin": false, "starred_url": "https://api.github.com/users/cyanic-selkie/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/cyanic-selkie/subscriptions", "type": "User", "url": "https://api.github.com/users/cyanic-selkie", "user_view_type": "public" }
[]
closed
false
null
[]
null
[]
2025-05-07T15:05:42Z
2025-06-05T12:41:30Z
2025-06-05T12:41:30Z
NONE
null
null
{ "completed": 0, "percent_completed": 0, "total": 0 }
### Describe the bug When using `.repeat()` on an `IterableDataset`, this error gets thrown. There is [this thread](https://discuss.huggingface.co/t/making-an-infinite-iterabledataset/146192/5) that seems to imply the fix is trivial, but I don't know anything about this codebase, so I'm opening this issue rather than attempting to open a PR. ### Steps to reproduce the bug 1. Create an `IterableDataset`. 2. Call `.repeat(None)` on it. 3. Wrap it in a pytorch `DataLoader` 4. Iterate over it. ### Expected behavior This should work normally. ### Environment info datasets: 3.5.0
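Since the report has no code, here is a minimal reproducer sketch under the stated assumptions (torch installed; datasets 3.5.0, where the thread indicates the error triggers via DataLoader worker sharding):

```python
from datasets import Dataset
from torch.utils.data import DataLoader

ds = Dataset.from_dict({"n": list(range(8))}).to_iterable_dataset(num_shards=2)
repeated = ds.repeat(None)  # None means repeat forever

# Worker sharding in the DataLoader queries num_shards, which
# RepeatExamplesIterable does not implement on affected versions.
loader = DataLoader(repeated, batch_size=4, num_workers=2)

for step, batch in enumerate(loader):
    print(step, batch["n"])
    if step >= 3:  # stop manually, since the stream is infinite
        break
```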
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq", "user_view_type": "public" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/7561/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/7561/timeline
null
completed
null
null
693.596667
144
https://api.github.com/repos/huggingface/datasets/issues/7554
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/7554/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/7554/comments
https://api.github.com/repos/huggingface/datasets/issues/7554/events
https://github.com/huggingface/datasets/issues/7554
3,043,089,844
I_kwDODunzps61Yd20
7,554
datasets downloads and generates all splits, even though a single split is requested (for dataset with loading script)
{ "avatar_url": "https://avatars.githubusercontent.com/u/50171988?v=4", "events_url": "https://api.github.com/users/sei-eschwartz/events{/privacy}", "followers_url": "https://api.github.com/users/sei-eschwartz/followers", "following_url": "https://api.github.com/users/sei-eschwartz/following{/other_user}", "gists_url": "https://api.github.com/users/sei-eschwartz/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/sei-eschwartz", "id": 50171988, "login": "sei-eschwartz", "node_id": "MDQ6VXNlcjUwMTcxOTg4", "organizations_url": "https://api.github.com/users/sei-eschwartz/orgs", "received_events_url": "https://api.github.com/users/sei-eschwartz/received_events", "repos_url": "https://api.github.com/users/sei-eschwartz/repos", "site_admin": false, "starred_url": "https://api.github.com/users/sei-eschwartz/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sei-eschwartz/subscriptions", "type": "User", "url": "https://api.github.com/users/sei-eschwartz", "user_view_type": "public" }
[]
closed
false
null
[]
null
[ "Hi ! there has been some effort on allowing to download only a subset of splits in https://github.com/huggingface/datasets/pull/6832 but no one has been continuing this work so far. This would be a welcomed contribution though\n\nAlso note that loading script are often unoptimized, and we recommend using datasets in standard formats like Parquet instead.\n\nBtw there is a CLI tool to convert a loading script to parquet:\n\n```\ndatasets-cli convert_to_parquet <dataset-name> --trust_remote_code\n```", "Closing in favor of #6832 " ]
2025-05-06T14:43:38Z
2025-05-07T14:53:45Z
2025-05-07T14:53:44Z
NONE
null
null
{ "completed": 0, "percent_completed": 0, "total": 0 }
### Describe the bug `datasets` downloads and generates all splits, even though a single split is requested. [This](https://huggingface.co/datasets/jordiae/exebench) is the dataset in question. It uses a loading script. I am not 100% sure that this is a bug, because maybe with loading scripts `datasets` must actually process all the splits? But I thought loading scripts were designed to avoid this. ### Steps to reproduce the bug See [this notebook](https://colab.research.google.com/drive/14kcXp_hgcdj-kIzK0bCG6taE-CLZPVvq?usp=sharing) Or: ```python from datasets import load_dataset dataset = load_dataset('jordiae/exebench', split='test_synth', trust_remote_code=True) ``` ### Expected behavior I expected only the `test_synth` split to be downloaded and processed. ### Environment info - `datasets` version: 3.5.1 - Platform: Linux-6.1.123+-x86_64-with-glibc2.35 - Python version: 3.11.12 - `huggingface_hub` version: 0.30.2 - PyArrow version: 18.1.0 - Pandas version: 2.2.2 - `fsspec` version: 2025.3.0
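One way to sidestep the full download in the meantime, assuming the loading script supports streaming: stream the single split so examples are fetched lazily instead of generating every split locally. A hedged sketch, not a fix for the underlying behavior:

```python
from datasets import load_dataset

# Streaming reads examples on the fly instead of generating every
# split up front, so only the requested split's data is fetched.
ds = load_dataset(
    "jordiae/exebench",
    split="test_synth",
    streaming=True,
    trust_remote_code=True,
)
print(next(iter(ds)))
```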
{ "avatar_url": "https://avatars.githubusercontent.com/u/50171988?v=4", "events_url": "https://api.github.com/users/sei-eschwartz/events{/privacy}", "followers_url": "https://api.github.com/users/sei-eschwartz/followers", "following_url": "https://api.github.com/users/sei-eschwartz/following{/other_user}", "gists_url": "https://api.github.com/users/sei-eschwartz/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/sei-eschwartz", "id": 50171988, "login": "sei-eschwartz", "node_id": "MDQ6VXNlcjUwMTcxOTg4", "organizations_url": "https://api.github.com/users/sei-eschwartz/orgs", "received_events_url": "https://api.github.com/users/sei-eschwartz/received_events", "repos_url": "https://api.github.com/users/sei-eschwartz/repos", "site_admin": false, "starred_url": "https://api.github.com/users/sei-eschwartz/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sei-eschwartz/subscriptions", "type": "User", "url": "https://api.github.com/users/sei-eschwartz", "user_view_type": "public" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/7554/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/7554/timeline
null
duplicate
null
null
24.168333
150
https://api.github.com/repos/huggingface/datasets/issues/7546
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/7546/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/7546/comments
https://api.github.com/repos/huggingface/datasets/issues/7546/events
https://github.com/huggingface/datasets/issues/7546
3,034,018,298
I_kwDODunzps6013H6
7,546
Large memory use when loading large datasets to a ZFS pool
{ "avatar_url": "https://avatars.githubusercontent.com/u/6875946?v=4", "events_url": "https://api.github.com/users/FredHaa/events{/privacy}", "followers_url": "https://api.github.com/users/FredHaa/followers", "following_url": "https://api.github.com/users/FredHaa/following{/other_user}", "gists_url": "https://api.github.com/users/FredHaa/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/FredHaa", "id": 6875946, "login": "FredHaa", "node_id": "MDQ6VXNlcjY4NzU5NDY=", "organizations_url": "https://api.github.com/users/FredHaa/orgs", "received_events_url": "https://api.github.com/users/FredHaa/received_events", "repos_url": "https://api.github.com/users/FredHaa/repos", "site_admin": false, "starred_url": "https://api.github.com/users/FredHaa/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/FredHaa/subscriptions", "type": "User", "url": "https://api.github.com/users/FredHaa", "user_view_type": "public" }
[]
closed
false
null
[]
null
[ "Hi ! datasets are memory mapped from disk, so they don't fill out your RAM. Not sure what's the source of your memory issue.\n\nWhat kind of system are you using ? and what kind of disk ?", "Well, the fact of the matter is that my RAM is getting filled out by running the given example, as shown in [this video](https://streamable.com/usb0ql).\n\nMy system is a GPU server running Ubuntu. The disk is a SATA SSD attached to the server using a backplane. It is formatted with ZFS, mounted in /cache, and my HF_HOME is set to /cache/hf\n\nI really need this fixed, so I am more than willing to test out various suggestions you might have, or write a PR if we can figure out what is going on.", "I'm not super familiar with ZFS, but it looks like it loads the data in memory when the files are memory mapped, which is an issue.\n\nMaybe it's a caching mechanism ? Since `datasets` accesses every memory mapped file to read a small part (the metadata of the arrow record batches), maybe ZFS brings the whole files in memory for quicker subsequent reads. This is an antipattern when it comes to lazy loading datasets of that size though", "This is the answer.\n\nI tried changing my HF_HOME to an NFS share, and no RAM is then consumed loading the dataset.\n\nI will try to see if I can find a way to configure the ZFS pool to not cache the files (disabling the ARC/primary cache didn't work), and if I do write the solution in this issue. If I can't I guess I have to reformat my cache drive." ]
2025-05-01T14:43:47Z
2025-05-13T13:30:09Z
2025-05-13T13:29:53Z
NONE
null
null
{ "completed": 0, "percent_completed": 0, "total": 0 }
### Describe the bug When I load large parquet based datasets from the hub like `MLCommons/peoples_speech` using `load_dataset`, all my memory (500GB) is used and isn't released after loading, meaning that the process is terminated by the kernel if I try to load an additional dataset. This makes it impossible to train models using multiple large datasets. ### Steps to reproduce the bug `uv run --with datasets==3.5.1 python` ```python from datasets import load_dataset load_dataset('MLCommons/peoples_speech', 'clean') load_dataset('mozilla-foundation/common_voice_17_0', 'en') ``` ### Expected behavior I would expect that a lot less than 500GB of RAM would be required to load the dataset, or at least that the RAM usage would be cleared as soon as the dataset is loaded (and thus reside as a memory mapped file) such that other datasets can be loaded. ### Environment info I am currently using the latest datasets==3.5.1 but I have had the same problem with multiple other versions.
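The resolution below was to move `HF_HOME` off the ZFS pool (the reporter found NFS behaved correctly with memory-mapped reads). A minimal sketch of that mitigation; the mount path is hypothetical:

```python
import os

# HF_HOME must be set before datasets is imported for the cache
# location to take effect; the path below is hypothetical.
os.environ["HF_HOME"] = "/mnt/nfs/hf_cache"

from datasets import load_dataset

ds = load_dataset("MLCommons/peoples_speech", "clean")
print(ds)
```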
{ "avatar_url": "https://avatars.githubusercontent.com/u/6875946?v=4", "events_url": "https://api.github.com/users/FredHaa/events{/privacy}", "followers_url": "https://api.github.com/users/FredHaa/followers", "following_url": "https://api.github.com/users/FredHaa/following{/other_user}", "gists_url": "https://api.github.com/users/FredHaa/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/FredHaa", "id": 6875946, "login": "FredHaa", "node_id": "MDQ6VXNlcjY4NzU5NDY=", "organizations_url": "https://api.github.com/users/FredHaa/orgs", "received_events_url": "https://api.github.com/users/FredHaa/received_events", "repos_url": "https://api.github.com/users/FredHaa/repos", "site_admin": false, "starred_url": "https://api.github.com/users/FredHaa/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/FredHaa/subscriptions", "type": "User", "url": "https://api.github.com/users/FredHaa", "user_view_type": "public" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/7546/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/7546/timeline
null
completed
null
null
286.768333
158
https://api.github.com/repos/huggingface/datasets/issues/7543
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/7543/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/7543/comments
https://api.github.com/repos/huggingface/datasets/issues/7543/events
https://github.com/huggingface/datasets/issues/7543
3,026,867,706
I_kwDODunzps60alX6
7,543
The memory-disk mapping failure issue of the map function(resolved, but there are some suggestions.)
{ "avatar_url": "https://avatars.githubusercontent.com/u/76415358?v=4", "events_url": "https://api.github.com/users/jxma20/events{/privacy}", "followers_url": "https://api.github.com/users/jxma20/followers", "following_url": "https://api.github.com/users/jxma20/following{/other_user}", "gists_url": "https://api.github.com/users/jxma20/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/jxma20", "id": 76415358, "login": "jxma20", "node_id": "MDQ6VXNlcjc2NDE1MzU4", "organizations_url": "https://api.github.com/users/jxma20/orgs", "received_events_url": "https://api.github.com/users/jxma20/received_events", "repos_url": "https://api.github.com/users/jxma20/repos", "site_admin": false, "starred_url": "https://api.github.com/users/jxma20/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/jxma20/subscriptions", "type": "User", "url": "https://api.github.com/users/jxma20", "user_view_type": "public" }
[]
closed
false
null
[]
null
[]
2025-04-29T03:04:59Z
2025-04-30T02:22:17Z
2025-04-30T02:22:17Z
NONE
null
null
{ "completed": 0, "percent_completed": 0, "total": 0 }
### Describe the bug ## bug When the map function processes a large dataset, it temporarily stores the data in a cache file on disk and releases the corresponding memory once the data is written. Therefore, when using the map function on a large-scale dataset, only about `writer_batch_size` examples should occupy memory at a time. However, I found that the map function did not actually reduce memory usage when I used it. At first, I thought there was a bug in the program causing a memory leak, meaning the memory was not released after the data was stored in the cache. But later, I used a Linux command to check for recently modified files during program execution and found that no new files were created or modified. This indicates that the program did not store the dataset in the disk cache at all. ## bug solved After modifying the parameters of the map function several times, I discovered the `cache_file_name` parameter, which lets the cache file be stored in a specified directory. After making this change, the cache file appeared. Initially this seemed puzzling, but I then wondered whether the cache file had simply failed to be written to the default folder, which could be related to the fact that I don't have root privileges. So I delved into the source code of the map function to find out where the cache file is stored by default. Eventually, I found the function `def _get_cache_file_path(self, fingerprint):`, which automatically generates the storage path for the cache file; its output was `/tmp/hf_datasets-j5qco9ug/cache-f2830487643b9cc2.arrow`. My hypothesis was confirmed: the lack of root privileges indeed prevented the cache file from being stored, which in turn prevented the memory from being released. Changing the storage location to a folder where I have write access resolved the issue. ### Steps to reproduce the bug my code `train_data = train_data.map(process_fun, remove_columns=['image_name', 'question_type', 'concern', 'question', 'candidate_answers', 'answer'])` ### Expected behavior Although my bug has been resolved, it still took me nearly a week of searching and debugging to find the cause. If a warning or error message about insufficient cache-file write permissions were emitted during execution, the cause could be identified much more quickly, so I hope this aspect can be improved. I am documenting this bug here so that others who encounter similar issues can solve them in a timely manner. ### Environment info python: 3.10.15 datasets: 3.5.0
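A minimal sketch of the fix the reporter describes: pass an explicit `cache_file_name` in a directory the current user can write to. The paths and mapping function are illustrative:

```python
import os

from datasets import Dataset

ds = Dataset.from_dict({"x": list(range(1_000))})

# A directory the current user can definitely write to; hypothetical path.
cache_dir = os.path.expanduser("~/hf_map_cache")
os.makedirs(cache_dir, exist_ok=True)

# With an explicit cache_file_name, processed batches are flushed to
# this Arrow file as map runs, keeping memory bounded by writer_batch_size.
ds2 = ds.map(
    lambda ex: {"y": ex["x"] * 2},
    cache_file_name=os.path.join(cache_dir, "mapped.arrow"),
    writer_batch_size=100,
)
print(ds2)
```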
{ "avatar_url": "https://avatars.githubusercontent.com/u/76415358?v=4", "events_url": "https://api.github.com/users/jxma20/events{/privacy}", "followers_url": "https://api.github.com/users/jxma20/followers", "following_url": "https://api.github.com/users/jxma20/following{/other_user}", "gists_url": "https://api.github.com/users/jxma20/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/jxma20", "id": 76415358, "login": "jxma20", "node_id": "MDQ6VXNlcjc2NDE1MzU4", "organizations_url": "https://api.github.com/users/jxma20/orgs", "received_events_url": "https://api.github.com/users/jxma20/received_events", "repos_url": "https://api.github.com/users/jxma20/repos", "site_admin": false, "starred_url": "https://api.github.com/users/jxma20/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/jxma20/subscriptions", "type": "User", "url": "https://api.github.com/users/jxma20", "user_view_type": "public" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/7543/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/7543/timeline
null
completed
null
null
23.288333
161
https://api.github.com/repos/huggingface/datasets/issues/7538
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/7538/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/7538/comments
https://api.github.com/repos/huggingface/datasets/issues/7538/events
https://github.com/huggingface/datasets/issues/7538
3,023,280,056
I_kwDODunzps60M5e4
7,538
`IterableDataset` drops samples when resuming from a checkpoint
{ "avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4", "events_url": "https://api.github.com/users/mariosasko/events{/privacy}", "followers_url": "https://api.github.com/users/mariosasko/followers", "following_url": "https://api.github.com/users/mariosasko/following{/other_user}", "gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/mariosasko", "id": 47462742, "login": "mariosasko", "node_id": "MDQ6VXNlcjQ3NDYyNzQy", "organizations_url": "https://api.github.com/users/mariosasko/orgs", "received_events_url": "https://api.github.com/users/mariosasko/received_events", "repos_url": "https://api.github.com/users/mariosasko/repos", "site_admin": false, "starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions", "type": "User", "url": "https://api.github.com/users/mariosasko", "user_view_type": "public" }
[ { "color": "d73a4a", "default": true, "description": "Something isn't working", "id": 1935892857, "name": "bug", "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug" } ]
closed
false
null
[]
null
[ "Thanks for reporting ! I fixed the issue using RebatchedArrowExamplesIterable before the formatted iterable" ]
2025-04-27T19:34:49Z
2025-05-06T14:04:05Z
2025-05-06T14:03:42Z
COLLABORATOR
null
null
{ "completed": 0, "percent_completed": 0, "total": 0 }
When resuming from a checkpoint, `IterableDataset` will drop samples if `num_shards % world_size == 0` and the underlying example supports `iter_arrow` and needs to be formatted. In that case, the `FormattedExamplesIterable` fetches a batch of samples from the child iterable's `iter_arrow` and yields them one by one (after formatting). However, the child increments the `shard_example_idx` counter (in its `iter_arrow`) before returning the batch for the whole batch size, which leads to a portion of samples being skipped if the iteration (of the parent iterable) is stopped mid-batch. Perhaps one way to avoid this would be by signalling the child iterable which samples (within the chunk) are processed by the parent and which are not, so that it can adjust the `shard_example_idx` counter accordingly. This would also mean the chunk needs to be sliced when resuming, but this is straightforward to implement. The following is a minimal reproducer of the bug: ```python from datasets import Dataset from datasets.distributed import split_dataset_by_node ds = Dataset.from_dict({"n": list(range(24))}) ds = ds.to_iterable_dataset(num_shards=4) world_size = 4 rank = 0 ds_rank = split_dataset_by_node(ds, rank, world_size) it = iter(ds_rank) examples = [] for idx, example in enumerate(it): examples.append(example) if idx == 2: state_dict = ds_rank.state_dict() break ds_rank.load_state_dict(state_dict) it_resumed = iter(ds_rank) examples_resumed = examples[:] for example in it: examples.append(example) for example in it_resumed: examples_resumed.append(example) print("ORIGINAL ITER EXAMPLES:", examples) print("RESUMED ITER EXAMPLES:", examples_resumed) ```
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq", "user_view_type": "public" }
{ "+1": 1, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 1, "url": "https://api.github.com/repos/huggingface/datasets/issues/7538/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/7538/timeline
null
completed
null
null
210.481389
166
https://api.github.com/repos/huggingface/datasets/issues/7536
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/7536/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/7536/comments
https://api.github.com/repos/huggingface/datasets/issues/7536/events
https://github.com/huggingface/datasets/issues/7536
3,018,425,549
I_kwDODunzps6z6YTN
7,536
[Errno 13] Permission denied: on `.incomplete` file
{ "avatar_url": "https://avatars.githubusercontent.com/u/1282383?v=4", "events_url": "https://api.github.com/users/ryan-clancy/events{/privacy}", "followers_url": "https://api.github.com/users/ryan-clancy/followers", "following_url": "https://api.github.com/users/ryan-clancy/following{/other_user}", "gists_url": "https://api.github.com/users/ryan-clancy/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/ryan-clancy", "id": 1282383, "login": "ryan-clancy", "node_id": "MDQ6VXNlcjEyODIzODM=", "organizations_url": "https://api.github.com/users/ryan-clancy/orgs", "received_events_url": "https://api.github.com/users/ryan-clancy/received_events", "repos_url": "https://api.github.com/users/ryan-clancy/repos", "site_admin": false, "starred_url": "https://api.github.com/users/ryan-clancy/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ryan-clancy/subscriptions", "type": "User", "url": "https://api.github.com/users/ryan-clancy", "user_view_type": "public" }
[]
closed
false
null
[]
null
[ "It must be an issue with umask being used by multiple threads indeed. Maybe we can try to make a thread safe function to apply the umask (using filelock for example)", "> It must be an issue with umask being used by multiple threads indeed. Maybe we can try to make a thread safe function to apply the umask (using filelock for example)\n\n@lhoestq is this something which can go in a 3.5.1 release?", "Yes for sure", "@lhoestq - can you take a look at https://github.com/huggingface/datasets/pull/7547/?" ]
2025-04-24T20:52:45Z
2025-05-06T13:05:01Z
2025-05-06T13:05:01Z
CONTRIBUTOR
null
null
{ "completed": 0, "percent_completed": 0, "total": 0 }
### Describe the bug When downloading a dataset, we frequently hit the below Permission Denied error. This looks to happen (at least) across datasets in HF, S3, and GCS. It looks like the `temp_file` being passed [here](https://github.com/huggingface/datasets/blob/main/src/datasets/utils/file_utils.py#L412) can sometimes be created with `000` permissions leading to the permission denied error (the user running the code is still the owner of the file). Deleting that particular file and re-running the code with 0 changes will usually succeed. Is there some race condition happening with the [umask](https://github.com/huggingface/datasets/blob/main/src/datasets/utils/file_utils.py#L416), which is process global, and the [file creation](https://github.com/huggingface/datasets/blob/main/src/datasets/utils/file_utils.py#L404)? ``` _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ .venv/lib/python3.12/site-packages/datasets/load.py:2084: in load_dataset builder_instance.download_and_prepare( .venv/lib/python3.12/site-packages/datasets/builder.py:925: in download_and_prepare self._download_and_prepare( .venv/lib/python3.12/site-packages/datasets/builder.py:1649: in _download_and_prepare super()._download_and_prepare( .venv/lib/python3.12/site-packages/datasets/builder.py:979: in _download_and_prepare split_generators = self._split_generators(dl_manager, **split_generators_kwargs) .venv/lib/python3.12/site-packages/datasets/packaged_modules/folder_based_builder/folder_based_builder.py:120: in _split_generators downloaded_files = dl_manager.download(files) .venv/lib/python3.12/site-packages/datasets/download/download_manager.py:159: in download downloaded_path_or_paths = map_nested( .venv/lib/python3.12/site-packages/datasets/utils/py_utils.py:514: in map_nested _single_map_nested((function, obj, batched, batch_size, types, None, True, None)) .venv/lib/python3.12/site-packages/datasets/utils/py_utils.py:382: in _single_map_nested return [mapped_item for batch in iter_batched(data_struct, batch_size) for mapped_item in function(batch)] .venv/lib/python3.12/site-packages/datasets/download/download_manager.py:206: in _download_batched return thread_map( .venv/lib/python3.12/site-packages/tqdm/contrib/concurrent.py:69: in thread_map return _executor_map(ThreadPoolExecutor, fn, *iterables, **tqdm_kwargs) .venv/lib/python3.12/site-packages/tqdm/contrib/concurrent.py:51: in _executor_map return list(tqdm_class(ex.map(fn, *iterables, chunksize=chunksize), **kwargs)) .venv/lib/python3.12/site-packages/tqdm/std.py:1181: in __iter__ for obj in iterable: ../../../_tool/Python/3.12.10/x64/lib/python3.12/concurrent/futures/_base.py:619: in result_iterator yield _result_or_cancel(fs.pop()) ../../../_tool/Python/3.12.10/x64/lib/python3.12/concurrent/futures/_base.py:317: in _result_or_cancel return fut.result(timeout) ../../../_tool/Python/3.12.10/x64/lib/python3.12/concurrent/futures/_base.py:449: in result return self.__get_result() ../../../_tool/Python/3.12.10/x64/lib/python3.12/concurrent/futures/_base.py:401: in __get_result raise self._exception ../../../_tool/Python/3.12.10/x64/lib/python3.12/concurrent/futures/thread.py:59: in run result = self.fn(*self.args, **self.kwargs) .venv/lib/python3.12/site-packages/datasets/download/download_manager.py:229: in _download_single out = cached_path(url_or_filename, download_config=download_config) .venv/lib/python3.12/site-packages/datasets/utils/file_utils.py:206: in cached_path output_path = get_from_cache( 
.venv/lib/python3.12/site-packages/datasets/utils/file_utils.py:412: in get_from_cache fsspec_get(url, temp_file, storage_options=storage_options, desc=download_desc, disable_tqdm=disable_tqdm) .venv/lib/python3.12/site-packages/datasets/utils/file_utils.py:331: in fsspec_get fs.get_file(path, temp_file.name, callback=callback) .venv/lib/python3.12/site-packages/fsspec/asyn.py:118: in wrapper return sync(self.loop, func, *args, **kwargs) .venv/lib/python3.12/site-packages/fsspec/asyn.py:103: in sync raise return_result .venv/lib/python3.12/site-packages/fsspec/asyn.py:56: in _runner result[0] = await coro _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ self = <s3fs.core.S3FileSystem object at 0x7f27c18b2e70> rpath = '<my-bucket>/<my-prefix>/img_1.jpg' lpath = '/home/runner/_work/_temp/hf_cache/downloads/6c97983efa4e24e534557724655df8247a0bd04326cdfc4a95b638c11e78222d.incomplete' callback = <datasets.utils.file_utils.TqdmCallback object at 0x7f27c00cdbe0> version_id = None, kwargs = {} _open_file = <function S3FileSystem._get_file.<locals>._open_file at 0x7f27628d1120> body = <StreamingBody at 0x7f276344fa80 for ClientResponse at 0x7f27c015fce0> content_length = 521923, failed_reads = 0, bytes_read = 0 async def _get_file( self, rpath, lpath, callback=_DEFAULT_CALLBACK, version_id=None, **kwargs ): if os.path.isdir(lpath): return bucket, key, vers = self.split_path(rpath) async def _open_file(range: int): kw = self.req_kw.copy() if range: kw["Range"] = f"bytes={range}-" resp = await self._call_s3( "get_object", Bucket=bucket, Key=key, **version_id_kw(version_id or vers), **kw, ) return resp["Body"], resp.get("ContentLength", None) body, content_length = await _open_file(range=0) callback.set_size(content_length) failed_reads = 0 bytes_read = 0 try: > with open(lpath, "wb") as f0: E PermissionError: [Errno 13] Permission denied: '/home/runner/_work/_temp/hf_cache/downloads/6c97983efa4e24e534557724655df8247a0bd04326cdfc4a95b638c11e78222d.incomplete' .venv/lib/python3.12/site-packages/s3fs/core.py:1355: PermissionError ``` ### Steps to reproduce the bug I believe this is a race condition and cannot reliably re-produce it, but it happens fairly frequently in our GitHub Actions tests and can also be re-produced (with lesser frequency) on cloud VMs. ### Expected behavior The dataset loads properly with no permission denied error. ### Environment info - `datasets` version: 3.5.0 - Platform: Linux-5.10.0-34-cloud-amd64-x86_64-with-glibc2.31 - Python version: 3.12.10 - `huggingface_hub` version: 0.30.2 - PyArrow version: 19.0.1 - Pandas version: 2.2.3 - `fsspec` version: 2024.12.0
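The maintainer's suggestion above is to make the umask application thread-safe with a lock. A hedged sketch of that idea, not the actual patch merged in #7547; the helper name is hypothetical:

```python
import os
import threading

_umask_lock = threading.Lock()

def open_with_umask(path, mode="wb", umask=0o022):
    """Create a file while holding a lock around the process-global umask.

    os.umask() affects the whole process, so concurrent downloads that
    set and restore it can interleave and leave a file created with
    000 permissions; serializing the window avoids the race.
    """
    with _umask_lock:
        previous = os.umask(umask)
        try:
            f = open(path, mode)  # file is created while the umask is known-good
        finally:
            os.umask(previous)
    return f

# Usage: swap for a bare open(lpath, "wb") inside a threaded downloader.
with open_with_umask("/tmp/example.incomplete") as f0:
    f0.write(b"data")
```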
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq", "user_view_type": "public" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/7536/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/7536/timeline
null
completed
null
null
280.204444
168
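A minimal sketch of the race suspected in the issue above. The `read_umask_unsafely` helper is hypothetical: it stands in for any code that reads the process umask via the common set-and-restore idiom; whether `datasets` does exactly this is an assumption, and the race may take many iterations to show up on a given machine.

```python
import os
import tempfile
import threading

# os.umask() is process-global: reading it requires setting it, so a thread
# that briefly installs a restrictive mask can race with another thread
# creating a file, which then ends up with 000 permissions.

def read_umask_unsafely():
    current = os.umask(0o777)  # all permission bits masked while this is set
    os.umask(current)          # restore the previous value

def create_temp_files(tmpdir, bad_files):
    for i in range(1000):
        path = os.path.join(tmpdir, f"file_{threading.get_ident()}_{i}")
        with open(path, "wb") as f:  # mode is 0o666 & ~umask at creation time
            f.write(b"x")
        if os.stat(path).st_mode & 0o777 == 0:
            bad_files.append(path)   # created while the umask was 0o777

with tempfile.TemporaryDirectory() as tmpdir:
    bad_files = []
    writer = threading.Thread(target=create_temp_files, args=(tmpdir, bad_files))
    masker = threading.Thread(
        target=lambda: [read_umask_unsafely() for _ in range(100_000)]
    )
    writer.start(); masker.start()
    writer.join(); masker.join()
    print(f"{len(bad_files)} files created with 000 permissions")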
https://api.github.com/repos/huggingface/datasets/issues/7530
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/7530/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/7530/comments
https://api.github.com/repos/huggingface/datasets/issues/7530/events
https://github.com/huggingface/datasets/issues/7530
3,007,452,499
I_kwDODunzps6zQhVT
7,530
How to solve "Spaces stuck in Building" problems
{ "avatar_url": "https://avatars.githubusercontent.com/u/10137?v=4", "events_url": "https://api.github.com/users/ghost/events{/privacy}", "followers_url": "https://api.github.com/users/ghost/followers", "following_url": "https://api.github.com/users/ghost/following{/other_user}", "gists_url": "https://api.github.com/users/ghost/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/ghost", "id": 10137, "login": "ghost", "node_id": "MDQ6VXNlcjEwMTM3", "organizations_url": "https://api.github.com/users/ghost/orgs", "received_events_url": "https://api.github.com/users/ghost/received_events", "repos_url": "https://api.github.com/users/ghost/repos", "site_admin": false, "starred_url": "https://api.github.com/users/ghost/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ghost/subscriptions", "type": "User", "url": "https://api.github.com/users/ghost", "user_view_type": "public" }
[]
closed
false
null
[]
null
[ "I'm facing the same issue—Space stuck in \"Building\" even after restart and Factory rebuild. Any fix?\n", "> I'm facing the same issue—Space stuck in \"Building\" even after restart and Factory rebuild. Any fix?\n\nAlso see https://github.com/huggingface/huggingface_hub/issues/3019", "I'm facing the same issue. The build fails with the same error, and restarting won't help. Is there a fix or ETA? " ]
2025-04-21T03:08:38Z
2025-04-22T07:49:52Z
2025-04-22T07:49:52Z
NONE
null
null
{ "completed": 0, "percent_completed": 0, "total": 0 }
### Describe the bug

Public Spaces may get stuck in "Building" after restarting, with an error log as follows:

build error
Unexpected job error
ERROR: failed to push spaces-registry.huggingface.tech/spaces/*:cpu-*-*: unexpected status from HEAD request to https://spaces-registry.huggingface.tech/v2/spaces/*/manifests/cpu-*-*: 401 Unauthorized

### Steps to reproduce the bug

Restarting the Space or doing a Factory rebuild does not avoid it.

### Expected behavior

Fix this problem.

### Environment info

Happens even with no requirements.txt; Python Gradio Spaces.
null
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/7530/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/7530/timeline
null
completed
null
null
28.687222
174
https://api.github.com/repos/huggingface/datasets/issues/7517
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/7517/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/7517/comments
https://api.github.com/repos/huggingface/datasets/issues/7517/events
https://github.com/huggingface/datasets/issues/7517
2,996,106,077
I_kwDODunzps6ylPNd
7,517
Image Feature in Datasets Library Fails to Handle bytearray Objects from Spark DataFrames
{ "avatar_url": "https://avatars.githubusercontent.com/u/73196164?v=4", "events_url": "https://api.github.com/users/giraffacarp/events{/privacy}", "followers_url": "https://api.github.com/users/giraffacarp/followers", "following_url": "https://api.github.com/users/giraffacarp/following{/other_user}", "gists_url": "https://api.github.com/users/giraffacarp/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/giraffacarp", "id": 73196164, "login": "giraffacarp", "node_id": "MDQ6VXNlcjczMTk2MTY0", "organizations_url": "https://api.github.com/users/giraffacarp/orgs", "received_events_url": "https://api.github.com/users/giraffacarp/received_events", "repos_url": "https://api.github.com/users/giraffacarp/repos", "site_admin": false, "starred_url": "https://api.github.com/users/giraffacarp/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/giraffacarp/subscriptions", "type": "User", "url": "https://api.github.com/users/giraffacarp", "user_view_type": "public" }
[]
closed
false
{ "avatar_url": "https://avatars.githubusercontent.com/u/73196164?v=4", "events_url": "https://api.github.com/users/giraffacarp/events{/privacy}", "followers_url": "https://api.github.com/users/giraffacarp/followers", "following_url": "https://api.github.com/users/giraffacarp/following{/other_user}", "gists_url": "https://api.github.com/users/giraffacarp/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/giraffacarp", "id": 73196164, "login": "giraffacarp", "node_id": "MDQ6VXNlcjczMTk2MTY0", "organizations_url": "https://api.github.com/users/giraffacarp/orgs", "received_events_url": "https://api.github.com/users/giraffacarp/received_events", "repos_url": "https://api.github.com/users/giraffacarp/repos", "site_admin": false, "starred_url": "https://api.github.com/users/giraffacarp/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/giraffacarp/subscriptions", "type": "User", "url": "https://api.github.com/users/giraffacarp", "user_view_type": "public" }
[ { "avatar_url": "https://avatars.githubusercontent.com/u/73196164?v=4", "events_url": "https://api.github.com/users/giraffacarp/events{/privacy}", "followers_url": "https://api.github.com/users/giraffacarp/followers", "following_url": "https://api.github.com/users/giraffacarp/following{/other_user}", "gists_url": "https://api.github.com/users/giraffacarp/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/giraffacarp", "id": 73196164, "login": "giraffacarp", "node_id": "MDQ6VXNlcjczMTk2MTY0", "organizations_url": "https://api.github.com/users/giraffacarp/orgs", "received_events_url": "https://api.github.com/users/giraffacarp/received_events", "repos_url": "https://api.github.com/users/giraffacarp/repos", "site_admin": false, "starred_url": "https://api.github.com/users/giraffacarp/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/giraffacarp/subscriptions", "type": "User", "url": "https://api.github.com/users/giraffacarp", "user_view_type": "public" } ]
null
[ "Hi ! The `Image()` type accepts either\n- a `bytes` object containing the image bytes\n- a `str` object containing the image path\n- a `PIL.Image` object\n\nbut it doesn't support `bytearray`, maybe you can convert to `bytes` beforehand ?", "Hi @lhoestq, \nconverting to bytes is certainly possible and would work around the error. However, the core issue is that `Dataset` and `IterableDataset` behave differently with the features.\n\nI’d be happy to work on a fix for this issue.", "I see, that's an issue indeed. Feel free to ping me if I can help with reviews or any guidance\n\nIf it can help, the code that takes a Spark DataFrame and iterates on the rows for `IterableDataset` is here: \n\nhttps://github.com/huggingface/datasets/blob/6a96bf313085d7538a999b929a550e14e1d406c9/src/datasets/packaged_modules/spark/spark.py#L49-L53", "#self-assign" ]
2025-04-15T11:29:17Z
2025-05-07T14:17:30Z
2025-05-07T14:17:30Z
CONTRIBUTOR
null
null
{ "completed": 0, "percent_completed": 0, "total": 0 }
### Describe the bug

When using `IterableDataset.from_spark()` with a Spark DataFrame containing image data, the `Image` feature class fails to properly process this data type, causing an `AttributeError: 'bytearray' object has no attribute 'get'`. (A hedged workaround sketch follows this record.)

### Steps to reproduce the bug

1. Create a Spark DataFrame with a column containing image data as bytearray objects
2. Define a Feature schema with an Image feature
3. Create an IterableDataset using `IterableDataset.from_spark()`
4. Attempt to iterate through the dataset

```
from pyspark.sql import SparkSession
from datasets import Dataset, IterableDataset, Features, Image, Value

# initialize spark
spark = SparkSession.builder.appName("MinimalRepro").getOrCreate()

# create spark dataframe
data = [(0, open("image.png", "rb").read())]
df = spark.createDataFrame(data, "idx: int, image: binary")

# convert to dataset
features = Features({"idx": Value("int64"), "image": Image()})
ds = Dataset.from_spark(df, features=features)
ds_iter = IterableDataset.from_spark(df, features=features)

# iterate
print(next(iter(ds)))
print(next(iter(ds_iter)))
```

### Expected behavior

The features should work on `IterableDataset` the same way they work on `Dataset`.

### Environment info

- `datasets` version: 3.5.0
- Platform: macOS-15.3.2-arm64-arm-64bit
- Python version: 3.12.7
- `huggingface_hub` version: 0.30.2
- PyArrow version: 18.1.0
- Pandas version: 2.2.3
- `fsspec` version: 2024.12.0
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq", "user_view_type": "public" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/7517/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/7517/timeline
null
completed
null
null
530.803611
187
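A hedged workaround sketch for the issue above, following the maintainer's suggestion to convert `bytearray` to `bytes` before the `Image()` feature sees it. It reuses `df` and the column names from the reproduction; deferring the features and casting after a `map` is an untested assumption on my part, not the upstream fix that was eventually assigned.

```python
from datasets import Features, Image, IterableDataset, Value

features = Features({"idx": Value("int64"), "image": Image()})

# build the iterable dataset without features first, so decoding is deferred
ds_iter = IterableDataset.from_spark(df)

def bytearray_to_bytes(example):
    # Spark hands binary columns over as bytearray; Image() expects bytes
    if isinstance(example["image"], bytearray):
        example["image"] = bytes(example["image"])
    return example

ds_iter = ds_iter.map(bytearray_to_bytes).cast(features)
print(next(iter(ds_iter)))
```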
https://api.github.com/repos/huggingface/datasets/issues/7516
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/7516/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/7516/comments
https://api.github.com/repos/huggingface/datasets/issues/7516/events
https://github.com/huggingface/datasets/issues/7516
2,995,780,283
I_kwDODunzps6yj_q7
7,516
unsloth/DeepSeek-R1-Distill-Qwen-32B server error
{ "avatar_url": "https://avatars.githubusercontent.com/u/164353862?v=4", "events_url": "https://api.github.com/users/Editor-1/events{/privacy}", "followers_url": "https://api.github.com/users/Editor-1/followers", "following_url": "https://api.github.com/users/Editor-1/following{/other_user}", "gists_url": "https://api.github.com/users/Editor-1/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/Editor-1", "id": 164353862, "login": "Editor-1", "node_id": "U_kgDOCcvXRg", "organizations_url": "https://api.github.com/users/Editor-1/orgs", "received_events_url": "https://api.github.com/users/Editor-1/received_events", "repos_url": "https://api.github.com/users/Editor-1/repos", "site_admin": false, "starred_url": "https://api.github.com/users/Editor-1/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Editor-1/subscriptions", "type": "User", "url": "https://api.github.com/users/Editor-1", "user_view_type": "public" }
[]
closed
false
null
[]
null
[]
2025-04-15T09:26:53Z
2025-04-15T09:57:26Z
2025-04-15T09:57:26Z
NONE
null
null
{ "completed": 0, "percent_completed": 0, "total": 0 }
### Describe the bug

hfhubhttperror: 500 server error: internal server error for url: https://huggingface.co/api/models/unsloth/deepseek-r1-distill-qwen-32b-bnb-4bit/commits/main (request id: root=1-67fe23fa-3a2150eb444c2a823c388579;de3aed68-c397-4da5-94d4-6565efd3b919)
internal error - we're working hard to fix this as soon as possible!

### Steps to reproduce the bug

unsloth/DeepSeek-R1-Distill-Qwen-32B server error

### Expected behavior

Network repair

### Environment info

The web side is also unavailable
{ "avatar_url": "https://avatars.githubusercontent.com/u/164353862?v=4", "events_url": "https://api.github.com/users/Editor-1/events{/privacy}", "followers_url": "https://api.github.com/users/Editor-1/followers", "following_url": "https://api.github.com/users/Editor-1/following{/other_user}", "gists_url": "https://api.github.com/users/Editor-1/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/Editor-1", "id": 164353862, "login": "Editor-1", "node_id": "U_kgDOCcvXRg", "organizations_url": "https://api.github.com/users/Editor-1/orgs", "received_events_url": "https://api.github.com/users/Editor-1/received_events", "repos_url": "https://api.github.com/users/Editor-1/repos", "site_admin": false, "starred_url": "https://api.github.com/users/Editor-1/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Editor-1/subscriptions", "type": "User", "url": "https://api.github.com/users/Editor-1", "user_view_type": "public" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/7516/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/7516/timeline
null
completed
null
null
0.509167
188
https://api.github.com/repos/huggingface/datasets/issues/7515
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/7515/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/7515/comments
https://api.github.com/repos/huggingface/datasets/issues/7515/events
https://github.com/huggingface/datasets/issues/7515
2,995,082,418
I_kwDODunzps6yhVSy
7,515
`concatenate_datasets` does not preserve Pytorch format for IterableDataset
{ "avatar_url": "https://avatars.githubusercontent.com/u/5140987?v=4", "events_url": "https://api.github.com/users/francescorubbo/events{/privacy}", "followers_url": "https://api.github.com/users/francescorubbo/followers", "following_url": "https://api.github.com/users/francescorubbo/following{/other_user}", "gists_url": "https://api.github.com/users/francescorubbo/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/francescorubbo", "id": 5140987, "login": "francescorubbo", "node_id": "MDQ6VXNlcjUxNDA5ODc=", "organizations_url": "https://api.github.com/users/francescorubbo/orgs", "received_events_url": "https://api.github.com/users/francescorubbo/received_events", "repos_url": "https://api.github.com/users/francescorubbo/repos", "site_admin": false, "starred_url": "https://api.github.com/users/francescorubbo/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/francescorubbo/subscriptions", "type": "User", "url": "https://api.github.com/users/francescorubbo", "user_view_type": "public" }
[]
closed
false
null
[]
null
[ "Hi ! Oh indeed it would be cool to return the same format in that case. Would you like to submit a PR ? The function that does the concatenation is here:\n\nhttps://github.com/huggingface/datasets/blob/90e5bf8a8599b625d6103ee5ac83b98269991141/src/datasets/iterable_dataset.py#L3375-L3380", "Thank you for the pointer, @lhoestq ! See #7522 " ]
2025-04-15T04:36:34Z
2025-05-19T15:07:38Z
2025-05-19T15:07:38Z
CONTRIBUTOR
null
null
{ "completed": 0, "percent_completed": 0, "total": 0 }
### Describe the bug

When concatenating datasets with `concatenate_datasets`, I would expect the resulting combined dataset to be in the same format as the inputs (assuming it's consistent). This is indeed the behavior when combining `Dataset`, but not when combining `IterableDataset`. Specifically, when applying `concatenate_datasets` to a list of `IterableDataset` in PyTorch format (i.e. using `.with_format("torch")`), the output `IterableDataset` is not in PyTorch format. (A workaround sketch follows this record.)

### Steps to reproduce the bug

```
import datasets

ds = datasets.Dataset.from_dict({"a": [1,2,3]})
iterable_ds = ds.to_iterable_dataset()

datasets.concatenate_datasets([ds.with_format("torch")])           # <- this preserves the PyTorch format
datasets.concatenate_datasets([iterable_ds.with_format("torch")])  # <- this does NOT preserve the PyTorch format
```

### Expected behavior

The PyTorch format should be preserved when combining `IterableDataset`s in PyTorch format.

### Environment info

datasets==3.5.0, Python 3.11.11, torch==2.2.2
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq", "user_view_type": "public" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/7515/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/7515/timeline
null
completed
null
null
826.517778
189
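A minimal workaround sketch for the issue above: re-apply the format on the concatenated `IterableDataset`. This mirrors the reproduction in the body; re-calling `.with_format("torch")` is a stopgap that predates the fix referenced in the comments (PR #7522), and the tensor output in the final comment is the expected result, not one verified here.

```python
import datasets

ds = datasets.Dataset.from_dict({"a": [1, 2, 3]})
iterable_ds = ds.to_iterable_dataset().with_format("torch")

# concatenation drops the format on IterableDataset (the reported bug) ...
combined = datasets.concatenate_datasets([iterable_ds, iterable_ds])

# ... so restore it explicitly on the result
combined = combined.with_format("torch")

print(next(iter(combined)))  # expected: {'a': tensor(1)}
```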
https://api.github.com/repos/huggingface/datasets/issues/7502
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/7502/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/7502/comments
https://api.github.com/repos/huggingface/datasets/issues/7502/events
https://github.com/huggingface/datasets/issues/7502
2,977,453,814
I_kwDODunzps6xeFb2
7,502
`load_dataset` of size 40GB creates a cache of >720GB
{ "avatar_url": "https://avatars.githubusercontent.com/u/61748653?v=4", "events_url": "https://api.github.com/users/pietrolesci/events{/privacy}", "followers_url": "https://api.github.com/users/pietrolesci/followers", "following_url": "https://api.github.com/users/pietrolesci/following{/other_user}", "gists_url": "https://api.github.com/users/pietrolesci/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/pietrolesci", "id": 61748653, "login": "pietrolesci", "node_id": "MDQ6VXNlcjYxNzQ4NjUz", "organizations_url": "https://api.github.com/users/pietrolesci/orgs", "received_events_url": "https://api.github.com/users/pietrolesci/received_events", "repos_url": "https://api.github.com/users/pietrolesci/repos", "site_admin": false, "starred_url": "https://api.github.com/users/pietrolesci/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/pietrolesci/subscriptions", "type": "User", "url": "https://api.github.com/users/pietrolesci", "user_view_type": "public" }
[]
closed
false
null
[]
null
[ "Hi ! Parquet is a compressed format. When you load a dataset, it uncompresses the Parquet data into Arrow data on your disk. That's why you can indeed end up with 720GB of uncompressed data on disk. The uncompression is needed to enable performant dataset objects (especially for random access).\n\nTo save some storage you can instead load the dataset with `streaming=True`. This way you get an `IterableDataset` that reads the Parquet data iteratively without ever writing to disk.\n\nPS: `ReadInstruction` might not be implemented for `streaming=True`, if it's the case you can use `ds.take()` and `ds.skip()` instead", "Hi @lhoestq, thanks a lot for your answer. This makes perfect sense. I will try using the streaming mode. Closing the issue." ]
2025-04-07T16:52:34Z
2025-04-15T15:22:12Z
2025-04-15T15:22:11Z
NONE
null
null
{ "completed": 0, "percent_completed": 0, "total": 0 }
Hi there,

I am trying to load a dataset from the Hugging Face Hub and split it into train and validation splits. Somehow, when I try to do it with `load_dataset`, it exhausts my disk quota. So, I tried manually downloading the parquet files from the hub and loading them as follows:

```python
ds = DatasetDict(
    {
        "train": load_dataset(
            "parquet",
            data_dir=f"{local_dir}/{tok}",
            cache_dir=cache_dir,
            num_proc=min(12, os.cpu_count()),  # type: ignore
            split=ReadInstruction("train", from_=0, to=NUM_TRAIN, unit="abs"),  # type: ignore
        ),
        "validation": load_dataset(
            "parquet",
            data_dir=f"{local_dir}/{tok}",
            cache_dir=cache_dir,
            num_proc=min(12, os.cpu_count()),  # type: ignore
            split=ReadInstruction("train", from_=NUM_TRAIN, unit="abs"),  # type: ignore
        ),
    }
)
```

which still strangely creates 720GB of cache. In addition, if I remove the raw parquet file folder (`f"{local_dir}/{tok}"` in this example), I am not able to load anything. So, I am left wondering what this cache is doing. Am I missing something? Is there a solution to this problem? (A streaming sketch based on the resolution follows this record.)

Thanks a lot in advance for your help!

A related issue: https://github.com/huggingface/transformers/issues/10204#issue-809007443.

---

Python: 3.11.11
datasets: 3.5.0
{ "avatar_url": "https://avatars.githubusercontent.com/u/61748653?v=4", "events_url": "https://api.github.com/users/pietrolesci/events{/privacy}", "followers_url": "https://api.github.com/users/pietrolesci/followers", "following_url": "https://api.github.com/users/pietrolesci/following{/other_user}", "gists_url": "https://api.github.com/users/pietrolesci/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/pietrolesci", "id": 61748653, "login": "pietrolesci", "node_id": "MDQ6VXNlcjYxNzQ4NjUz", "organizations_url": "https://api.github.com/users/pietrolesci/orgs", "received_events_url": "https://api.github.com/users/pietrolesci/received_events", "repos_url": "https://api.github.com/users/pietrolesci/repos", "site_admin": false, "starred_url": "https://api.github.com/users/pietrolesci/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/pietrolesci/subscriptions", "type": "User", "url": "https://api.github.com/users/pietrolesci", "user_view_type": "public" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/7502/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/7502/timeline
null
completed
null
null
190.493611
201
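A sketch of the streaming approach from the resolution of the issue above, using `skip()`/`take()` in place of `ReadInstruction` as the maintainer suggests. `local_dir`, `tok`, and `NUM_TRAIN` are placeholders carried over from the issue, not real values.

```python
from datasets import IterableDatasetDict, load_dataset

local_dir = "./parquet_data"  # placeholder paths from the issue
tok = "my-tokenizer"
NUM_TRAIN = 1_000_000         # hypothetical split point

# streaming=True reads the Parquet files iteratively: no Arrow cache on disk
full = load_dataset(
    "parquet",
    data_dir=f"{local_dir}/{tok}",
    split="train",
    streaming=True,
)

splits = IterableDatasetDict(
    {
        "train": full.take(NUM_TRAIN),
        "validation": full.skip(NUM_TRAIN),
    }
)
```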
https://api.github.com/repos/huggingface/datasets/issues/7501
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/7501/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/7501/comments
https://api.github.com/repos/huggingface/datasets/issues/7501/events
https://github.com/huggingface/datasets/issues/7501
2,976,721,014
I_kwDODunzps6xbSh2
7,501
Nested Feature raises ArrowNotImplementedError: Unsupported cast using function cast_struct
{ "avatar_url": "https://avatars.githubusercontent.com/u/26623948?v=4", "events_url": "https://api.github.com/users/yaner-here/events{/privacy}", "followers_url": "https://api.github.com/users/yaner-here/followers", "following_url": "https://api.github.com/users/yaner-here/following{/other_user}", "gists_url": "https://api.github.com/users/yaner-here/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/yaner-here", "id": 26623948, "login": "yaner-here", "node_id": "MDQ6VXNlcjI2NjIzOTQ4", "organizations_url": "https://api.github.com/users/yaner-here/orgs", "received_events_url": "https://api.github.com/users/yaner-here/received_events", "repos_url": "https://api.github.com/users/yaner-here/repos", "site_admin": false, "starred_url": "https://api.github.com/users/yaner-here/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/yaner-here/subscriptions", "type": "User", "url": "https://api.github.com/users/yaner-here", "user_view_type": "public" }
[]
closed
false
null
[]
null
[ "Solved by the default `load_dataset(features)` parameters. Do not use `Sequence` for the `list` in `list[any]` json schema, just simply use `[]`. For example, `\"b\": Sequence({...})` fails but `\"b\": [{...}]` works fine." ]
2025-04-07T12:35:39Z
2025-04-07T12:43:04Z
2025-04-07T12:43:03Z
NONE
null
null
{ "completed": 0, "percent_completed": 0, "total": 0 }
### Describe the bug

`datasets.Features` seems to be unable to handle a json file that contains fields of `list[dict]`. (A sketch of the working schema follows this record.)

### Steps to reproduce the bug

```json
// test.json
{"a": 1, "b": [{"c": 2, "d": 3}, {"c": 4, "d": 5}]}
{"a": 5, "b": [{"c": 7, "d": 8}, {"c": 9, "d": 10}]}
```

```python
import json
from datasets import Dataset, Features, Value, Sequence, load_dataset

annotation_feature = Features({
    "a": Value("int32"),
    "b": Sequence({
        "c": Value("int32"),
        "d": Value("int32"),
    }),
})
annotation_dataset = load_dataset(
    "json",
    data_files="test.json",
    features=annotation_feature
)
```

```
ArrowNotImplementedError: Unsupported cast from list<item: struct<c: int32, d: int32>> to struct using function cast_struct

The above exception was the direct cause of the following exception:

DatasetGenerationError                    Traceback (most recent call last)
Cell In[46], line 11
      2 from datasets import Dataset, Features, Value, Sequence, load_dataset
      4 annotation_feature = Features({
      5     "a": Value("int32"),
      6     "b": Sequence({
   (...)
      9     }),
     10 })
---> 11 annotation_dataset = load_dataset(
     12     "json",
     13     data_files="test.json",
     14     features=annotation_feature
     15 )
```

### Expected behavior

A `datasets.Dataset` instance should be initialized.

### Environment info

- `datasets` version: 3.5.0
- Platform: Linux-6.11.0-21-generic-x86_64-with-glibc2.39
- Python version: 3.11.11
- `huggingface_hub` version: 0.30.1
- PyArrow version: 19.0.1
- Pandas version: 2.2.3
- `fsspec` version: 2024.12.0
{ "avatar_url": "https://avatars.githubusercontent.com/u/26623948?v=4", "events_url": "https://api.github.com/users/yaner-here/events{/privacy}", "followers_url": "https://api.github.com/users/yaner-here/followers", "following_url": "https://api.github.com/users/yaner-here/following{/other_user}", "gists_url": "https://api.github.com/users/yaner-here/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/yaner-here", "id": 26623948, "login": "yaner-here", "node_id": "MDQ6VXNlcjI2NjIzOTQ4", "organizations_url": "https://api.github.com/users/yaner-here/orgs", "received_events_url": "https://api.github.com/users/yaner-here/received_events", "repos_url": "https://api.github.com/users/yaner-here/repos", "site_admin": false, "starred_url": "https://api.github.com/users/yaner-here/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/yaner-here/subscriptions", "type": "User", "url": "https://api.github.com/users/yaner-here", "user_view_type": "public" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/7501/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/7501/timeline
null
completed
null
null
0.123333
202
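A sketch of the schema that resolves the issue above, per the author's closing comment: declare a list of structs with a plain Python list (`[{...}]`) instead of `Sequence({...})`, which `datasets` interprets as a struct of lists. Everything else is copied from the reproduction.

```python
from datasets import Features, Value, load_dataset

annotation_feature = Features({
    "a": Value("int32"),
    # a plain list means "list of structs", matching [{"c": ..., "d": ...}]
    "b": [{
        "c": Value("int32"),
        "d": Value("int32"),
    }],
})

annotation_dataset = load_dataset(
    "json",
    data_files="test.json",
    features=annotation_feature,
)
```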
https://api.github.com/repos/huggingface/datasets/issues/7494
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/7494/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/7494/comments
https://api.github.com/repos/huggingface/datasets/issues/7494/events
https://github.com/huggingface/datasets/issues/7494
2,965,347,685
I_kwDODunzps6wv51l
7,494
Broken links in pdf loading documentation
{ "avatar_url": "https://avatars.githubusercontent.com/u/75789232?v=4", "events_url": "https://api.github.com/users/VyoJ/events{/privacy}", "followers_url": "https://api.github.com/users/VyoJ/followers", "following_url": "https://api.github.com/users/VyoJ/following{/other_user}", "gists_url": "https://api.github.com/users/VyoJ/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/VyoJ", "id": 75789232, "login": "VyoJ", "node_id": "MDQ6VXNlcjc1Nzg5MjMy", "organizations_url": "https://api.github.com/users/VyoJ/orgs", "received_events_url": "https://api.github.com/users/VyoJ/received_events", "repos_url": "https://api.github.com/users/VyoJ/repos", "site_admin": false, "starred_url": "https://api.github.com/users/VyoJ/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/VyoJ/subscriptions", "type": "User", "url": "https://api.github.com/users/VyoJ", "user_view_type": "public" }
[]
closed
false
null
[]
null
[ "thanks for reporting ! I fixed the links, the docs will be updated in the next release" ]
2025-04-02T06:45:22Z
2025-04-15T13:36:25Z
2025-04-15T13:36:04Z
NONE
null
null
{ "completed": 0, "percent_completed": 0, "total": 0 }
### Describe the bug

Hi, just a couple of small issues I ran into while reading the docs for [loading pdf data](https://huggingface.co/docs/datasets/main/en/document_load):

1. The link for [`Create a pdf dataset`](https://huggingface.co/docs/datasets/main/en/document_load#pdffolder) points to https://huggingface.co/docs/datasets/main/en/pdf_dataset instead of https://huggingface.co/docs/datasets/main/en/document_dataset and hence gives a 404 error.
2. At the top of the page, it's mentioned that to work with pdf datasets we need to have the `pdfplumber` package installed, but the link to its installation guide points to the `pytorch/vision` [installation instructions](https://github.com/pytorch/vision#installation) instead of `pdfplumber`'s [guide](https://github.com/jsvine/pdfplumber#installation).

I love the work on enabling pdf dataset support and these small tweaks would help everyone navigate the docs better. Thanks!

### Steps to reproduce the bug

The issue is on the [Load Document Data](https://huggingface.co/docs/datasets/main/en/document_load) page of the datasets docs.

### Expected behavior

1. For solving the first issue, I went through the [source .mdx code](https://github.com/huggingface/datasets/blob/main/docs/source/document_load.mdx?plain=1#L188) of the datasets docs and found that the link points to `./pdf_dataset` instead of `./document_dataset`.
2. For the second issue, I went through the [source .mdx code](https://github.com/huggingface/datasets/blob/main/docs/source/document_load.mdx?plain=1#L13) of the datasets docs and found that the link is the `pytorch/vision` [installation instructions](https://github.com/pytorch/vision#installation) instead of `pdfplumber`'s [guide](https://github.com/jsvine/pdfplumber#installation).

Just replacing these two links should fix the bugs.

### Environment info

datasets v3.5.0 (main at the time of writing)
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq", "user_view_type": "public" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/7494/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/7494/timeline
null
completed
null
null
318.845
209
https://api.github.com/repos/huggingface/datasets/issues/7486
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/7486/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/7486/comments
https://api.github.com/repos/huggingface/datasets/issues/7486/events
https://github.com/huggingface/datasets/issues/7486
2,954,042,179
I_kwDODunzps6wExtD
7,486
`shared_datadir` fixture is missing
{ "avatar_url": "https://avatars.githubusercontent.com/u/1289205?v=4", "events_url": "https://api.github.com/users/lahwaacz/events{/privacy}", "followers_url": "https://api.github.com/users/lahwaacz/followers", "following_url": "https://api.github.com/users/lahwaacz/following{/other_user}", "gists_url": "https://api.github.com/users/lahwaacz/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lahwaacz", "id": 1289205, "login": "lahwaacz", "node_id": "MDQ6VXNlcjEyODkyMDU=", "organizations_url": "https://api.github.com/users/lahwaacz/orgs", "received_events_url": "https://api.github.com/users/lahwaacz/received_events", "repos_url": "https://api.github.com/users/lahwaacz/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lahwaacz/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lahwaacz/subscriptions", "type": "User", "url": "https://api.github.com/users/lahwaacz", "user_view_type": "public" }
[]
closed
false
null
[]
null
[ "OK I was missing the `pytest-datadir` package. Sorry for the noise!" ]
2025-03-27T18:17:12Z
2025-03-27T19:49:11Z
2025-03-27T19:49:10Z
NONE
null
null
{ "completed": 0, "percent_completed": 0, "total": 0 }
### Describe the bug

Running the tests for the latest release fails due to missing `shared_datadir` fixture.

### Steps to reproduce the bug

Running `pytest` while building a package for Arch Linux leads to these errors:

```
==================================== ERRORS ====================================
_________ ERROR at setup of test_pdf_feature_encode_example[<lambda>1] _________
[gw44] linux -- Python 3.13.2 /build/python-datasets/src/datasets-3.5.0/test-env/bin/python
file /build/python-datasets/src/datasets-3.5.0/tests/features/test_pdf.py, line 8
  @require_pdfplumber
  @pytest.mark.parametrize(
      "build_example",
      [
          lambda pdf_path: pdf_path,
          lambda pdf_path: open(pdf_path, "rb").read(),
          lambda pdf_path: {"path": pdf_path},
          lambda pdf_path: {"path": pdf_path, "bytes": None},
          lambda pdf_path: {"path": pdf_path, "bytes": open(pdf_path, "rb").read()},
          lambda pdf_path: {"path": None, "bytes": open(pdf_path, "rb").read()},
          lambda pdf_path: {"bytes": open(pdf_path, "rb").read()},
      ],
  )
  def test_pdf_feature_encode_example(shared_datadir, build_example):
E       fixture 'shared_datadir' not found
>       available fixtures: _hf_gated_dataset_repo_txt_data, arrow_file, arrow_path, audio_file, bz2_csv_path, bz2_file, cache, capfd, capfdbinary, caplog, capsys, capsysbinary, ci_hfh_hf_hub_url, ci_hub_config, cleanup_repo, csv2_path, csv_path, data_dir_with_hidden_files, dataset, dataset_dict, disable_implicit_token, disable_tqdm_output, doctest_namespace, geoparquet_path, gz_file, hf_api, hf_gated_dataset_repo_txt_data, hf_private_dataset_repo_txt_data, hf_private_dataset_repo_txt_data_, hf_private_dataset_repo_zipped_img_data, hf_private_dataset_repo_zipped_img_data_, hf_private_dataset_repo_zipped_txt_data, hf_private_dataset_repo_zipped_txt_data_, hf_token, image_file, json_dict_of_lists_path, json_list_of_dicts_path, jsonl2_path, jsonl_312_path, jsonl_gz_path, jsonl_path, jsonl_str_path, lz4_file, mock_fsspec, mockfs, monkeypatch, parquet_path, pytestconfig, record_property, record_testsuite_property, record_xml_attribute, recwarn, set_ci_hub_access_token, set_sqlalchemy_silence_uber_warning, set_test_cache_config, set_update_download_counts_to_false, seven_zip_file, sqlite_path, tar_file, tar_jsonl_path, tar_nested_jsonl_path, temporary_repo, tensor_file, testrun_uid, text2_path, text_dir, text_dir_with_unsupported_extension, text_file, text_file_content, text_gz_path, text_path, text_path_with_unicode_new_lines, tmp_path, tmp_path_factory, tmpdir, tmpdir_factory, tmpfs, worker_id, xml_file, xz_file, zero_time_out_for_remote_code, zip_csv_path, zip_csv_with_dir_path, zip_file, zip_image_path, zip_jsonl_path, zip_jsonl_with_dir_path, zip_nested_jsonl_path, zip_text_path, zip_text_with_dir_path, zip_unsupported_ext_path, zip_uppercase_csv_path, zstd_file
>       use 'pytest --fixtures [testpath]' for help on them.
/build/python-datasets/src/datasets-3.5.0/tests/features/test_pdf.py:8

[... the identical "fixture 'shared_datadir' not found" error repeats for the parametrized cases <lambda>2 through <lambda>6 ...]
_______________ ERROR at setup of test_dataset_with_pdf_feature ________________
[gw44] linux -- Python 3.13.2 /build/python-datasets/src/datasets-3.5.0/test-env/bin/python
file /build/python-datasets/src/datasets-3.5.0/tests/features/test_pdf.py, line 34
  @require_pdfplumber
  def test_dataset_with_pdf_feature(shared_datadir):
E       fixture 'shared_datadir' not found
>       available fixtures: [same fixture list as above]
>       use 'pytest --fixtures [testpath]' for help on them.

/build/python-datasets/src/datasets-3.5.0/tests/features/test_pdf.py:34

[... the same error for test_pdf_feature_encode_example[<lambda>0] on worker gw46 omitted ...]
```

### Expected behavior

All fixtures used in tests should be available. (A minimal usage sketch of the missing fixture follows this record.)

### Environment info

Arch Linux build system, building the [python-datasets](https://gitlab.archlinux.org/archlinux/packaging/packages/python-datasets) package. There are actually [many deselected tests](https://gitlab.archlinux.org/archlinux/packaging/packages/python-datasets/-/blob/6f97957f0c326cc7b3da6b7f12326305bcaef374/PKGBUILD#L66-148) which were failing on previous releases, but these errors popped up in 3.5.0.
{ "avatar_url": "https://avatars.githubusercontent.com/u/1289205?v=4", "events_url": "https://api.github.com/users/lahwaacz/events{/privacy}", "followers_url": "https://api.github.com/users/lahwaacz/followers", "following_url": "https://api.github.com/users/lahwaacz/following{/other_user}", "gists_url": "https://api.github.com/users/lahwaacz/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lahwaacz", "id": 1289205, "login": "lahwaacz", "node_id": "MDQ6VXNlcjEyODkyMDU=", "organizations_url": "https://api.github.com/users/lahwaacz/orgs", "received_events_url": "https://api.github.com/users/lahwaacz/received_events", "repos_url": "https://api.github.com/users/lahwaacz/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lahwaacz/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lahwaacz/subscriptions", "type": "User", "url": "https://api.github.com/users/lahwaacz", "user_view_type": "public" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/7486/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/7486/timeline
null
completed
null
null
1.532778
217
https://api.github.com/repos/huggingface/datasets/issues/7481
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/7481/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/7481/comments
https://api.github.com/repos/huggingface/datasets/issues/7481/events
https://github.com/huggingface/datasets/issues/7481
2,950,692,971
I_kwDODunzps6v4ABr
7,481
deal with python `10_000` legal number in slice syntax
{ "avatar_url": "https://avatars.githubusercontent.com/u/196988264?v=4", "events_url": "https://api.github.com/users/sfc-gh-sbekman/events{/privacy}", "followers_url": "https://api.github.com/users/sfc-gh-sbekman/followers", "following_url": "https://api.github.com/users/sfc-gh-sbekman/following{/other_user}", "gists_url": "https://api.github.com/users/sfc-gh-sbekman/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/sfc-gh-sbekman", "id": 196988264, "login": "sfc-gh-sbekman", "node_id": "U_kgDOC73NaA", "organizations_url": "https://api.github.com/users/sfc-gh-sbekman/orgs", "received_events_url": "https://api.github.com/users/sfc-gh-sbekman/received_events", "repos_url": "https://api.github.com/users/sfc-gh-sbekman/repos", "site_admin": false, "starred_url": "https://api.github.com/users/sfc-gh-sbekman/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sfc-gh-sbekman/subscriptions", "type": "User", "url": "https://api.github.com/users/sfc-gh-sbekman", "user_view_type": "public" }
[ { "color": "a2eeef", "default": true, "description": "New feature or request", "id": 1935892871, "name": "enhancement", "node_id": "MDU6TGFiZWwxOTM1ODkyODcx", "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement" } ]
closed
false
null
[]
null
[ "should be an easy fix, I opened a PR" ]
2025-03-26T20:10:54Z
2025-03-28T16:20:44Z
2025-03-28T16:20:44Z
NONE
null
null
{ "completed": 0, "percent_completed": 0, "total": 0 }
### Feature request ``` In [6]: ds = datasets.load_dataset("HuggingFaceH4/ultrachat_200k", split="train_sft[:1000]") In [7]: ds = datasets.load_dataset("HuggingFaceH4/ultrachat_200k", split="train_sft[:1_000]") [dozens of frames skipped] File /usr/local/lib/python3.10/dist-packages/datasets/arrow_reader.py:444, in _str_to_read_instruction(spec) 442 res = _SUB_SPEC_RE.match(spec) 443 if not res: --> 444 raise ValueError(f"Unrecognized instruction format: {spec}") ValueError: Unrecognized instruction format: train_sft[:1_000] ``` It took me a while to understand what the problem was: the split-instruction parser (`_str_to_read_instruction` in the traceback above) rejects Python numbers that include `_`, as in `1_000`. The `_` aids readability, since `10_000_000` is obviously easier to grasp than `10000000`. Feature request: ideally `datasets`, being a Python module, would do the right thing and normalize Python numeric literals before parsing the split spec - in this case by stripping the `_`s. Second best, it would raise a clear error telling the user that numbers with `_` are not accepted in split slices, so the user won't have to deal with a huge traceback they know nothing about. Thank you!
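Until something like this lands in the library, the spec can be normalized on the caller side. Below is a minimal sketch; `normalize_split_spec` is a hypothetical helper, not a `datasets` API, and it assumes underscores only need stripping inside the slice brackets:

```python
import re


def normalize_split_spec(spec: str) -> str:
    """Rewrite "train_sft[:1_000]" as "train_sft[:1000]".

    Only underscores between digits inside [...] are removed, so split
    names that legitimately contain underscores (e.g. "train_sft") stay
    untouched.
    """
    def fix_brackets(match: re.Match) -> str:
        inner = re.sub(r"(?<=\d)_(?=\d)", "", match.group(1))
        return f"[{inner}]"

    return re.sub(r"\[([^\]]*)\]", fix_brackets, spec)


assert normalize_split_spec("train_sft[:1_000]") == "train_sft[:1000]"
assert normalize_split_spec("train_sft[10_000:1_000_000]") == "train_sft[10000:1000000]"
```

The normalized string can then be passed straight to `load_dataset(..., split=normalize_split_spec(spec))`.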
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq", "user_view_type": "public" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/7481/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/7481/timeline
null
completed
null
null
44.163889
222
https://api.github.com/repos/huggingface/datasets/issues/7475
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/7475/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/7475/comments
https://api.github.com/repos/huggingface/datasets/issues/7475/events
https://github.com/huggingface/datasets/issues/7475
2,946,640,570
I_kwDODunzps6voiq6
7,475
IterableDataset's state_dict shard_example_idx is always equal to the number of samples in a shard
{ "avatar_url": "https://avatars.githubusercontent.com/u/48770768?v=4", "events_url": "https://api.github.com/users/bruno-hays/events{/privacy}", "followers_url": "https://api.github.com/users/bruno-hays/followers", "following_url": "https://api.github.com/users/bruno-hays/following{/other_user}", "gists_url": "https://api.github.com/users/bruno-hays/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/bruno-hays", "id": 48770768, "login": "bruno-hays", "node_id": "MDQ6VXNlcjQ4NzcwNzY4", "organizations_url": "https://api.github.com/users/bruno-hays/orgs", "received_events_url": "https://api.github.com/users/bruno-hays/received_events", "repos_url": "https://api.github.com/users/bruno-hays/repos", "site_admin": false, "starred_url": "https://api.github.com/users/bruno-hays/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/bruno-hays/subscriptions", "type": "User", "url": "https://api.github.com/users/bruno-hays", "user_view_type": "public" }
[]
closed
false
{ "avatar_url": "https://avatars.githubusercontent.com/u/129883215?v=4", "events_url": "https://api.github.com/users/Harry-Yang0518/events{/privacy}", "followers_url": "https://api.github.com/users/Harry-Yang0518/followers", "following_url": "https://api.github.com/users/Harry-Yang0518/following{/other_user}", "gists_url": "https://api.github.com/users/Harry-Yang0518/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/Harry-Yang0518", "id": 129883215, "login": "Harry-Yang0518", "node_id": "U_kgDOB73cTw", "organizations_url": "https://api.github.com/users/Harry-Yang0518/orgs", "received_events_url": "https://api.github.com/users/Harry-Yang0518/received_events", "repos_url": "https://api.github.com/users/Harry-Yang0518/repos", "site_admin": false, "starred_url": "https://api.github.com/users/Harry-Yang0518/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Harry-Yang0518/subscriptions", "type": "User", "url": "https://api.github.com/users/Harry-Yang0518", "user_view_type": "public" }
[ { "avatar_url": "https://avatars.githubusercontent.com/u/129883215?v=4", "events_url": "https://api.github.com/users/Harry-Yang0518/events{/privacy}", "followers_url": "https://api.github.com/users/Harry-Yang0518/followers", "following_url": "https://api.github.com/users/Harry-Yang0518/following{/other_user}", "gists_url": "https://api.github.com/users/Harry-Yang0518/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/Harry-Yang0518", "id": 129883215, "login": "Harry-Yang0518", "node_id": "U_kgDOB73cTw", "organizations_url": "https://api.github.com/users/Harry-Yang0518/orgs", "received_events_url": "https://api.github.com/users/Harry-Yang0518/received_events", "repos_url": "https://api.github.com/users/Harry-Yang0518/repos", "site_admin": false, "starred_url": "https://api.github.com/users/Harry-Yang0518/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Harry-Yang0518/subscriptions", "type": "User", "url": "https://api.github.com/users/Harry-Yang0518", "user_view_type": "public" } ]
null
[ "Hey, I’d love to work on this issue but I am a beginner, can I work it with you?", "Hello. I'm sorry but I don't have much time to get in the details for now.\nHave you managed to reproduce the issue with the code provided ?\nIf you want to work on it, you can self-assign and ask @lhoestq for directions", "Hi Bruno, I am trying to reproduce it this later in this week and let you know what I found.", "#self-assign", "Good catch, I tried and if the dataset is bigger (e.g. `range(9999)`) it returns `\"shard_example_idx\": 1000` with is the `config.DEFAULT_MAX_BATCH_SIZE`\n\nhttps://github.com/huggingface/datasets/blob/94ccd1b4fada8a92cea96dc8df4e915041d695b6/src/datasets/arrow_dataset.py#L5313-L5317\n\nIt looks like the state_dict is incorrect in that case, it should account for this and use the `RebatchedArrowExamplesIterable` which buffers the batch of 1000 rows and counts the iteration within the batch in the state_dict", "\nHello @lhoestq,\n\nI’ve been debugging the `IterableDataset.state_dict()` behavior and applied a patch to `ArrowExamplesIterable._iter_arrow()` in an attempt to fix the issue described in #7475—specifically, that `shard_example_idx` always equals the number of samples in the shard, even if only a few examples have been consumed.\n\n### What I Tried\n\nI updated `_iter_arrow` to slice off already-consumed rows and increment the state only by the number of actual examples yielded, like this:\n\n```python\nclass ArrowExamplesIterable(_BaseExamplesIterable):\n # ... __init__ and _init_state_dict as before ...\n\n def _iter_arrow(self):\n shard_idx_start = self._state_dict[\"shard_idx\"] if self._state_dict else 0\n\n for gen_kwargs in islice(\n _split_gen_kwargs(self.kwargs, max_num_jobs=self.num_shards),\n shard_idx_start, None\n ):\n shard_example_idx_start = self._state_dict[\"shard_example_idx\"] if self._state_dict else 0\n shard_example_idx = 0\n\n for key, pa_table in self.generate_tables_fn(**gen_kwargs):\n num_rows = len(pa_table)\n next_idx = shard_example_idx + num_rows\n\n if next_idx <= shard_example_idx_start:\n shard_example_idx = next_idx\n continue\n\n offset = max(0, shard_example_idx_start - shard_example_idx)\n sliced_table = pa_table.slice(offset)\n\n if self._state_dict:\n self._state_dict[\"shard_example_idx\"] += len(sliced_table)\n\n yield key, sliced_table\n shard_example_idx = next_idx\n\n if self._state_dict:\n self._state_dict[\"shard_idx\"] += 1\n self._state_dict[\"shard_example_idx\"] = 0\n```\n\nI verified that the updated code was being used, and I added debug prints to confirm the table slicing and counter updates.\n\n### The Issue Still Exists\n\nDespite the changes, the behavior remains the same. 
Running this minimal repro:\n\n```python\nds = Dataset.from_dict({\"a\": range(6)}).to_iterable_dataset(num_shards=1)\nfor idx, example in enumerate(ds):\n print(example)\n if idx == 2:\n print(\"checkpoint\")\n print(ds.state_dict())\n break\n```\n\nStill outputs:\n\n```bash\n{'a': 0}\n{'a': 1}\n{'a': 2}\ncheckpoint\n{'examples_iterable': {'shard_idx': 0, 'shard_example_idx': 6, 'type': 'ArrowExamplesIterable'}, 'epoch': 0}\n```\n\nEven though only 3 examples were consumed, `shard_example_idx` jumps to 6.\n\n### Questions\n\n- Could there be another place (e.g., in `__iter__`, `RebatchedArrowExamplesIterable`, or the `IterableDataset` wrapper) that's still using the old logic and overriding the state?\n- Is there a better location to intercept and count yielded examples?\n- Would you recommend tracking a new `true_example_idx` to avoid modifying existing behavior?\n\nLet me know your thoughts—happy to iterate further and submit a PR once we align on the right approach. Thanks again for your help and feedback!", "I found a fix using RebatchedArrowExamplesIterable, let me know if it's all good for you now", "Hi @lhoestq, thanks for the quick fix and for referencing RebatchedArrowExamplesIterable! 🙌\n\nI just tested your patch locally and can confirm that shard_example_idx is now tracking correctly when only a subset of examples is consumed. This resolves the issue I was seeing in #7475.\n\nReally appreciate the guidance earlier on where to look—it was a great learning opportunity. If there are other parts of the IterableDataset internals that could use cleanup or testing, I’d be happy to help." ]
2025-03-25T13:58:07Z
2025-05-06T14:22:19Z
2025-05-06T14:05:07Z
CONTRIBUTOR
null
null
{ "completed": 0, "percent_completed": 0, "total": 0 }
### Describe the bug I've noticed a strange behaviour with the IterableDataset state_dict: the value of shard_example_idx is always equal to the number of samples in a shard. ### Steps to reproduce the bug I am reusing the example from the doc ```python from datasets import Dataset ds = Dataset.from_dict({"a": range(6)}).to_iterable_dataset(num_shards=1) state_dict = None # Iterate through the dataset and print examples for idx, example in enumerate(ds): print(example) if idx == 2: state_dict = ds.state_dict() print("checkpoint") break print(state_dict) ``` Returns: ``` {'a': 0} {'a': 1} checkpoint {'examples_iterable': {'shard_idx': 0, 'shard_example_idx': 6, 'type': 'ArrowExamplesIterable'}, 'epoch': 0} ``` ### Expected behavior shard_example_idx should be 2 instead of 6 If we run with num_shards=2, then shard_example_idx is 3 instead of 2 and so on. ### Environment info - `datasets` version: 3.4.1 - Platform: macOS-14.6.1-arm64-arm-64bit - Python version: 3.12.9 - `huggingface_hub` version: 0.29.3 - PyArrow version: 19.0.1 - Pandas version: 2.2.3 - `fsspec` version: 2024.12.0
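For context on why the counter matters, here is a resume sketch built on the documented `state_dict`/`load_state_dict` API; the final comment states the expected behavior, which is an assumption, not output observed on the buggy version:

```python
from datasets import Dataset


def make_ds():
    return Dataset.from_dict({"a": range(6)}).to_iterable_dataset(num_shards=1)


ds = make_ds()
state_dict = None
for idx, example in enumerate(ds):
    if idx == 2:
        state_dict = ds.state_dict()
        break

# Resuming should continue at example 3, but with shard_example_idx stuck
# at 6 the single shard looks fully consumed and nothing is yielded.
resumed = make_ds()
resumed.load_state_dict(state_dict)
print(list(resumed))  # expected: [{'a': 3}, {'a': 4}, {'a': 5}]
```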
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq", "user_view_type": "public" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/7475/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/7475/timeline
null
completed
null
null
1,008.116667
228
https://api.github.com/repos/huggingface/datasets/issues/7473
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/7473/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/7473/comments
https://api.github.com/repos/huggingface/datasets/issues/7473/events
https://github.com/huggingface/datasets/issues/7473
2,939,034,643
I_kwDODunzps6vLhwT
7,473
Webdataset data format problem
{ "avatar_url": "https://avatars.githubusercontent.com/u/1017189?v=4", "events_url": "https://api.github.com/users/edmcman/events{/privacy}", "followers_url": "https://api.github.com/users/edmcman/followers", "following_url": "https://api.github.com/users/edmcman/following{/other_user}", "gists_url": "https://api.github.com/users/edmcman/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/edmcman", "id": 1017189, "login": "edmcman", "node_id": "MDQ6VXNlcjEwMTcxODk=", "organizations_url": "https://api.github.com/users/edmcman/orgs", "received_events_url": "https://api.github.com/users/edmcman/received_events", "repos_url": "https://api.github.com/users/edmcman/repos", "site_admin": false, "starred_url": "https://api.github.com/users/edmcman/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/edmcman/subscriptions", "type": "User", "url": "https://api.github.com/users/edmcman", "user_view_type": "public" }
[]
closed
false
null
[]
null
[ "I was able to work around it" ]
2025-03-21T17:23:52Z
2025-03-21T19:19:58Z
2025-03-21T19:19:58Z
NONE
null
null
{ "completed": 0, "percent_completed": 0, "total": 0 }
### Describe the bug Please see https://huggingface.co/datasets/ejschwartz/idioms/discussions/1 Error code: FileFormatMismatchBetweenSplitsError All three splits, train, test, and validation, use webdataset. But only the train split has more than one file. How can I force the other two splits to also be interpreted as being the webdataset format? (I don't think there is currently a way, but happy to be told that I am wrong.) ### Steps to reproduce the bug ``` import datasets datasets.load_dataset("ejschwartz/idioms") ``` ### Expected behavior The dataset loads. Alternatively, there is a YAML syntax for manually specifying the format. ### Environment info - `datasets` version: 3.2.0 - Platform: Linux-6.8.0-52-generic-x86_64-with-glibc2.35 - Python version: 3.10.12 - `huggingface_hub` version: 0.28.1 - PyArrow version: 19.0.0 - Pandas version: 2.2.3 - `fsspec` version: 2024.9.0
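Until the repository metadata is fixed, format detection can be bypassed by naming the builder explicitly. This is a sketch only: the tar file patterns are illustrative guesses about the repository layout, not taken from the dataset:

```python
from datasets import load_dataset

# Selecting the "webdataset" packaged builder skips per-split format
# inference, so single-file splits are read the same way as train.
ds = load_dataset(
    "webdataset",
    data_files={
        "train": "hf://datasets/ejschwartz/idioms/data/train-*.tar",
        "test": "hf://datasets/ejschwartz/idioms/data/test-*.tar",
        "validation": "hf://datasets/ejschwartz/idioms/data/validation-*.tar",
    },
)
```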
{ "avatar_url": "https://avatars.githubusercontent.com/u/1017189?v=4", "events_url": "https://api.github.com/users/edmcman/events{/privacy}", "followers_url": "https://api.github.com/users/edmcman/followers", "following_url": "https://api.github.com/users/edmcman/following{/other_user}", "gists_url": "https://api.github.com/users/edmcman/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/edmcman", "id": 1017189, "login": "edmcman", "node_id": "MDQ6VXNlcjEwMTcxODk=", "organizations_url": "https://api.github.com/users/edmcman/orgs", "received_events_url": "https://api.github.com/users/edmcman/received_events", "repos_url": "https://api.github.com/users/edmcman/repos", "site_admin": false, "starred_url": "https://api.github.com/users/edmcman/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/edmcman/subscriptions", "type": "User", "url": "https://api.github.com/users/edmcman", "user_view_type": "public" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/7473/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/7473/timeline
null
completed
null
null
1.935
230
https://api.github.com/repos/huggingface/datasets/issues/7472
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/7472/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/7472/comments
https://api.github.com/repos/huggingface/datasets/issues/7472/events
https://github.com/huggingface/datasets/issues/7472
2,937,607,272
I_kwDODunzps6vGFRo
7,472
Label casting during `map` process is canceled after the `map` process
{ "avatar_url": "https://avatars.githubusercontent.com/u/11156001?v=4", "events_url": "https://api.github.com/users/yoshitomo-matsubara/events{/privacy}", "followers_url": "https://api.github.com/users/yoshitomo-matsubara/followers", "following_url": "https://api.github.com/users/yoshitomo-matsubara/following{/other_user}", "gists_url": "https://api.github.com/users/yoshitomo-matsubara/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/yoshitomo-matsubara", "id": 11156001, "login": "yoshitomo-matsubara", "node_id": "MDQ6VXNlcjExMTU2MDAx", "organizations_url": "https://api.github.com/users/yoshitomo-matsubara/orgs", "received_events_url": "https://api.github.com/users/yoshitomo-matsubara/received_events", "repos_url": "https://api.github.com/users/yoshitomo-matsubara/repos", "site_admin": false, "starred_url": "https://api.github.com/users/yoshitomo-matsubara/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/yoshitomo-matsubara/subscriptions", "type": "User", "url": "https://api.github.com/users/yoshitomo-matsubara", "user_view_type": "public" }
[]
closed
false
null
[]
null
[ "Hi ! By default `map()` tries to keep the types of each column of the dataset, so here it reuses the int type since all your float values can be converted to integers. But I agree it would be nice to store float values as float values and don't try to reuse the same type in this case.\n\nIn the meantime, you can either store the float values in a new column, or pass the output `features=` manually to `map()`", "Hi @lhoestq \n\nThank you for the answer & suggestion!\n\nCan we add some flag to `map()` function like `reuses_original_type=True` and skip reusing the original type when it's False?\n\nLet me know if it sounds like a reasonable solution. I am happy to submit a PR for this.", "In general we try to avoid adding new parameters when it's already possible to achieve the same results with existing parameters (here `features=`). But since it's not always convenient to know in advance the `features=` I'm open to contributions to adding this parameter yes", "Thank you for sharing the context. Good to know that. \n\nI submitted a PR #7483. Could you review the PR?", "Hi @lhoestq \n\nLet me know if there is something that I should add to [the PR](https://github.com/huggingface/datasets/pull/7483)!", "Closing this issue as the PR #7483 was merged" ]
2025-03-21T07:56:22Z
2025-04-10T05:11:15Z
2025-04-10T05:11:14Z
CONTRIBUTOR
null
null
{ "completed": 0, "percent_completed": 0, "total": 0 }
### Describe the bug When preprocessing a multi-label dataset, I introduced a step to convert int labels to float labels, as [BCEWithLogitsLoss](https://pytorch.org/docs/stable/generated/torch.nn.BCEWithLogitsLoss.html) expects float labels and the forward function of models in the transformers package internally uses `BCEWithLogitsLoss`. However, the casting was canceled after the `.map` process and the label values still use int values, which leads to an error ``` File "/home/yoshitomo/anaconda3/envs/torchdistill/lib/python3.10/site-packages/transformers/models/bert/modeling_bert.py", line 1711, in forward loss = loss_fct(logits, labels) File "/home/yoshitomo/anaconda3/envs/torchdistill/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1736, in _wrapped_call_impl return self._call_impl(*args, **kwargs) File "/home/yoshitomo/anaconda3/envs/torchdistill/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1747, in _call_impl return forward_call(*args, **kwargs) File "/home/yoshitomo/anaconda3/envs/torchdistill/lib/python3.10/site-packages/torch/nn/modules/loss.py", line 819, in forward return F.binary_cross_entropy_with_logits( File "/home/yoshitomo/anaconda3/envs/torchdistill/lib/python3.10/site-packages/torch/nn/functional.py", line 3628, in binary_cross_entropy_with_logits return torch.binary_cross_entropy_with_logits( RuntimeError: result type Float can't be cast to the desired output type Long ``` This seems to happen only when the original labels are int values (see examples below). ### Steps to reproduce the bug If the original dataset uses a list of int labels, it will cancel the int->float casting ```python from datasets import Dataset data = { 'text': ['text1', 'text2', 'text3', 'text4'], 'labels': [[0, 1, 2], [3], [3, 4], [3]] } dataset = Dataset.from_dict(data) label_set = set([label for labels in data['labels'] for label in labels]) label2idx = {label: idx for idx, label in enumerate(sorted(label_set))} def multi_labels_to_ids(labels): ids = [0.0] * len(label2idx) for label in labels: ids[label2idx[label]] = 1.0 return ids def preprocess(examples): result = {'sentence': [[0, 3, 4] for _ in range(len(examples['labels']))]} print('"labels" are int', examples['labels']) result['labels'] = [multi_labels_to_ids(l) for l in examples['labels']] print('"labels" were converted to multi-label format with float values', result['labels']) return result preprocessed_dataset = dataset.map(preprocess, batched=True, remove_columns=['labels', 'text']) print(preprocessed_dataset[0]['labels']) # Output: "[1, 1, 1, 0, 0]" # Expected: "[1.0, 1.0, 1.0, 0.0, 0.0]" ``` If the original dataset uses non-int labels, it works as expected. ```python from datasets import Dataset data = { 'text': ['text1', 'text2', 'text3', 'text4'], 'labels': [['label1', 'label2', 'label3'], ['label4'], ['label4', 'label5'], ['label4']] } dataset = Dataset.from_dict(data) label_set = set([label for labels in data['labels'] for label in labels]) label2idx = {label: idx for idx, label in enumerate(sorted(label_set))} def multi_labels_to_ids(labels): ids = [0.0] * len(label2idx) for label in labels: ids[label2idx[label]] = 1.0 return ids def preprocess(examples): result = {'sentence': [[0, 3, 4] for _ in range(len(examples['labels']))]} print('"labels" are strings', examples['labels']) result['labels'] = [multi_labels_to_ids(l) for l in examples['labels']] print('"labels" were converted to multi-label format with float values', result['labels']) return result preprocessed_dataset = dataset.map(preprocess, batched=True, remove_columns=['labels', 'text']) print(preprocessed_dataset[0]['labels']) # Output: "[1.0, 1.0, 1.0, 0.0, 0.0]" # Expected: "[1.0, 1.0, 1.0, 0.0, 0.0]" ``` Note that the only difference between these two examples is > 'labels': [[0, 1, 2], [3], [3, 4], [3]] vs. > 'labels': [['label1', 'label2', 'label3'], ['label4'], ['label4', 'label5'], ['label4']] ### Expected behavior Even if the original dataset uses a list of int labels, the int->float casting during the `.map` process should not be canceled, as shown in the above example ### Environment info OS Ubuntu 22.04 LTS Python 3.10.11 datasets v3.4.1
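For reference, here is a sketch of the `features=` workaround suggested in the comments above; the feature spec mirrors the label shape of the examples and is my assumption of the intended schema:

```python
from datasets import Dataset, Features, Sequence, Value

data = {'text': ['text1', 'text2'], 'labels': [[0, 1], [1]]}
dataset = Dataset.from_dict(data)


def preprocess(examples):
    num_labels = 2
    multi_hot = []
    for labels in examples['labels']:
        ids = [0.0] * num_labels
        for label in labels:
            ids[label] = 1.0
        multi_hot.append(ids)
    return {'labels': multi_hot}


# Declaring the output schema pins labels to float32, so map() does not
# fall back to the original int type when the values look integral.
features = Features({'text': Value('string'), 'labels': Sequence(Value('float32'))})
out = dataset.map(preprocess, batched=True, features=features)
print(out[0]['labels'])  # [1.0, 1.0]
```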
{ "avatar_url": "https://avatars.githubusercontent.com/u/11156001?v=4", "events_url": "https://api.github.com/users/yoshitomo-matsubara/events{/privacy}", "followers_url": "https://api.github.com/users/yoshitomo-matsubara/followers", "following_url": "https://api.github.com/users/yoshitomo-matsubara/following{/other_user}", "gists_url": "https://api.github.com/users/yoshitomo-matsubara/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/yoshitomo-matsubara", "id": 11156001, "login": "yoshitomo-matsubara", "node_id": "MDQ6VXNlcjExMTU2MDAx", "organizations_url": "https://api.github.com/users/yoshitomo-matsubara/orgs", "received_events_url": "https://api.github.com/users/yoshitomo-matsubara/received_events", "repos_url": "https://api.github.com/users/yoshitomo-matsubara/repos", "site_admin": false, "starred_url": "https://api.github.com/users/yoshitomo-matsubara/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/yoshitomo-matsubara/subscriptions", "type": "User", "url": "https://api.github.com/users/yoshitomo-matsubara", "user_view_type": "public" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/7472/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/7472/timeline
null
completed
null
null
477.247778
231
https://api.github.com/repos/huggingface/datasets/issues/7471
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/7471/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/7471/comments
https://api.github.com/repos/huggingface/datasets/issues/7471/events
https://github.com/huggingface/datasets/issues/7471
2,937,530,069
I_kwDODunzps6vFybV
7,471
Adding argument to `_get_data_files_patterns`
{ "avatar_url": "https://avatars.githubusercontent.com/u/34004152?v=4", "events_url": "https://api.github.com/users/SangbumChoi/events{/privacy}", "followers_url": "https://api.github.com/users/SangbumChoi/followers", "following_url": "https://api.github.com/users/SangbumChoi/following{/other_user}", "gists_url": "https://api.github.com/users/SangbumChoi/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/SangbumChoi", "id": 34004152, "login": "SangbumChoi", "node_id": "MDQ6VXNlcjM0MDA0MTUy", "organizations_url": "https://api.github.com/users/SangbumChoi/orgs", "received_events_url": "https://api.github.com/users/SangbumChoi/received_events", "repos_url": "https://api.github.com/users/SangbumChoi/repos", "site_admin": false, "starred_url": "https://api.github.com/users/SangbumChoi/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/SangbumChoi/subscriptions", "type": "User", "url": "https://api.github.com/users/SangbumChoi", "user_view_type": "public" }
[ { "color": "a2eeef", "default": true, "description": "New feature or request", "id": 1935892871, "name": "enhancement", "node_id": "MDU6TGFiZWwxOTM1ODkyODcx", "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement" } ]
closed
false
null
[]
null
[ "Hi ! The pattern can be specified in advance in YAML in the README.md of the dataset :)\n\nFor example\n\n```\n---\nconfigs:\n- config_name: default\n data_files:\n - split: train\n path: \"train/*\"\n - split: test\n path: \"test/*\"\n---\n```\n\nSee the docs at https://huggingface.co/docs/hub/en/datasets-manual-configuration", "@lhoestq How can we choose in this case ? https://huggingface.co/datasets/datasets-examples/doc-image-5\n", "choose what ? sorry I didn't get it ^^'" ]
2025-03-21T07:17:53Z
2025-03-27T12:30:52Z
2025-03-26T07:26:27Z
NONE
null
null
{ "completed": 0, "percent_completed": 0, "total": 0 }
### Feature request How about adding an argument for the case where the user already knows the pattern? https://github.com/huggingface/datasets/blob/a256b85cbc67aa3f0e75d32d6586afc507cf535b/src/datasets/data_files.py#L252 ### Motivation When using load_dataset, people might point it at 10M local image files. However, because fsspec searches for every candidate file pattern, pattern resolution alone takes more than 10 hours (a real use case). ### Your contribution Yeah, I can make this happen if this seems valid. @lhoestq WDYT? Something like: ``` def _get_data_files_patterns(pattern_resolver: Callable[[str], list[str]], patterns: PATTERNS) -> dict[str, list[str]]: ```
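As the YAML answer in the comments above implies, the expensive pattern search is skipped whenever the data files are given explicitly; a minimal Python-side sketch (the glob paths are illustrative):

```python
from datasets import load_dataset

# With explicit data_files, only these patterns are resolved; the library
# does not probe its full list of default split patterns over the tree.
ds = load_dataset(
    "imagefolder",
    data_files={"train": "train/**/*.jpg", "test": "test/**/*.jpg"},
)
```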
{ "avatar_url": "https://avatars.githubusercontent.com/u/34004152?v=4", "events_url": "https://api.github.com/users/SangbumChoi/events{/privacy}", "followers_url": "https://api.github.com/users/SangbumChoi/followers", "following_url": "https://api.github.com/users/SangbumChoi/following{/other_user}", "gists_url": "https://api.github.com/users/SangbumChoi/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/SangbumChoi", "id": 34004152, "login": "SangbumChoi", "node_id": "MDQ6VXNlcjM0MDA0MTUy", "organizations_url": "https://api.github.com/users/SangbumChoi/orgs", "received_events_url": "https://api.github.com/users/SangbumChoi/received_events", "repos_url": "https://api.github.com/users/SangbumChoi/repos", "site_admin": false, "starred_url": "https://api.github.com/users/SangbumChoi/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/SangbumChoi/subscriptions", "type": "User", "url": "https://api.github.com/users/SangbumChoi", "user_view_type": "public" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/7471/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/7471/timeline
null
completed
null
null
120.142778
232
https://api.github.com/repos/huggingface/datasets/issues/7470
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/7470/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/7470/comments
https://api.github.com/repos/huggingface/datasets/issues/7470/events
https://github.com/huggingface/datasets/issues/7470
2,937,236,323
I_kwDODunzps6vEqtj
7,470
Is it possible to shard a single-sharded IterableDataset?
{ "avatar_url": "https://avatars.githubusercontent.com/u/511073?v=4", "events_url": "https://api.github.com/users/jonathanasdf/events{/privacy}", "followers_url": "https://api.github.com/users/jonathanasdf/followers", "following_url": "https://api.github.com/users/jonathanasdf/following{/other_user}", "gists_url": "https://api.github.com/users/jonathanasdf/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/jonathanasdf", "id": 511073, "login": "jonathanasdf", "node_id": "MDQ6VXNlcjUxMTA3Mw==", "organizations_url": "https://api.github.com/users/jonathanasdf/orgs", "received_events_url": "https://api.github.com/users/jonathanasdf/received_events", "repos_url": "https://api.github.com/users/jonathanasdf/repos", "site_admin": false, "starred_url": "https://api.github.com/users/jonathanasdf/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/jonathanasdf/subscriptions", "type": "User", "url": "https://api.github.com/users/jonathanasdf", "user_view_type": "public" }
[]
closed
false
null
[]
null
[ "Hi ! Maybe you can look for an option in your dataset to partition your data based on a deterministic filter ? For example each worker could stream the data based on `row.id % num_shards` or something like that ?", "So the recommendation is to start out with multiple shards initially and re-sharding after is not expected to work? :(\n\nWould something like the following work? Some DiskCachingIterableDataset, where worker 0 streams from the datasource, but also writes to disk, and all of the other workers read from what worker 0 wrote? Then that would produce a stream with a deterministic order and we can subsample.", "To be honest it would be cool to support native multiprocessing in `IterableDataset.map` so you can parallelize any specific processing step without having to rely on a torch Dataloader. What do you think ?\n\nrelated: https://github.com/huggingface/datasets/issues/7193 https://github.com/huggingface/datasets/issues/3444 \noriginal issue: https://github.com/huggingface/datasets/issues/2642\n\nAlternatively the DiskCachingIterableDataset idea works, just note that to make it work with a torch Dataloader with num_workers>0 you'll need:\n1. to make your own `torch.utils.data.IterableDataset` and have rank=0 stream the data and share them with the other workers (either via disk as suggested or IPC)\n2. take into account that`datasets.IterableDataset` will yield 0 examples for ranks with id>0 if there is only one shard, but in your case it's ok since you'd only stream from rank=0", "Ohh that would be pretty cool!\n\nThanks for the suggestions, as there's no actionable items for this repo I'm going to close this issue now.", "Another usecase for this resharding:\n\nIf we have a bunch of jsonl files, and we load it as an IterableDataset with multiple dataloader workers, each file gets naively assigned to a worker.\n\nIf the files were not carefully produced to be equally sized, eg if the very last file is significantly shorter, containing just a few examples, and it gets assigned onto a dataloader worker by itself, then the examples in that file will be significantly oversampled.\n\nIt would be nice if datasets had an internal way to rebalance this without requiring offline reprocessing of the data files" ]
2025-03-21T04:33:37Z
2025-05-09T22:51:46Z
2025-03-26T06:49:28Z
NONE
null
null
{ "completed": 0, "percent_completed": 0, "total": 0 }
I thought https://github.com/huggingface/datasets/pull/7252 might be applicable but looking at it maybe not. Say we have a process, e.g. a database query, that can return data in a slightly different order each time. So the initial query needs to be run by a single thread (not to mention that running it multiple times incurs more cost). But the results are also big enough that we don't want to materialize them entirely and instead stream them with an IterableDataset. But after we have the results we want to split them up across workers to parallelize processing. Is something like this possible to do? Here's a failed attempt. The end result should be that each of the shards has unique data, but unfortunately with this attempt the generator gets run once in each shard and the results end up with duplicates... ``` import random import datasets def gen(): print('RUNNING GENERATOR!') items = list(range(10)) random.shuffle(items) yield from items ds = datasets.IterableDataset.from_generator(gen) print('dataset contents:') for item in ds: print(item) print() print('dataset contents (2):') for item in ds: print(item) print() num_shards = 3 def sharded(shard_id): for i, example in enumerate(ds): if i % num_shards in shard_id: yield example ds1 = datasets.IterableDataset.from_generator( sharded, gen_kwargs={'shard_id': list(range(num_shards))} ) for shard in range(num_shards): print('shard', shard) for item in ds1.shard(num_shards, shard): print(item) ```
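The deterministic-filter idea from the comments above can be sketched as follows, mirroring the API used in the failed attempt; each shard re-runs the source, but an example lands in exactly one shard because assignment depends on a stable key (here the value itself stands in for a stable row id), not on arrival order:

```python
import datasets

NUM_SHARDS = 3


def gen(shard_ids):
    for shard_id in shard_ids:
        # Stand-in for the database query; rerun once per shard.
        for item in range(10):
            if item % NUM_SHARDS == shard_id:
                yield {"value": item}


ds = datasets.IterableDataset.from_generator(
    gen, gen_kwargs={"shard_ids": list(range(NUM_SHARDS))}
)

for shard in range(NUM_SHARDS):
    print("shard", shard, [ex["value"] for ex in ds.shard(NUM_SHARDS, shard)])
```

Note this still runs the query once per shard, which is the cost the report wants to avoid; the disk-caching idea discussed above trades that for extra I/O.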
{ "avatar_url": "https://avatars.githubusercontent.com/u/511073?v=4", "events_url": "https://api.github.com/users/jonathanasdf/events{/privacy}", "followers_url": "https://api.github.com/users/jonathanasdf/followers", "following_url": "https://api.github.com/users/jonathanasdf/following{/other_user}", "gists_url": "https://api.github.com/users/jonathanasdf/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/jonathanasdf", "id": 511073, "login": "jonathanasdf", "node_id": "MDQ6VXNlcjUxMTA3Mw==", "organizations_url": "https://api.github.com/users/jonathanasdf/orgs", "received_events_url": "https://api.github.com/users/jonathanasdf/received_events", "repos_url": "https://api.github.com/users/jonathanasdf/repos", "site_admin": false, "starred_url": "https://api.github.com/users/jonathanasdf/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/jonathanasdf/subscriptions", "type": "User", "url": "https://api.github.com/users/jonathanasdf", "user_view_type": "public" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/7470/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/7470/timeline
null
completed
null
null
122.264167
233
https://api.github.com/repos/huggingface/datasets/issues/7469
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/7469/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/7469/comments
https://api.github.com/repos/huggingface/datasets/issues/7469/events
https://github.com/huggingface/datasets/issues/7469
2,936,606,080
I_kwDODunzps6vCQ2A
7,469
Custom split name with the web interface
{ "avatar_url": "https://avatars.githubusercontent.com/u/15141326?v=4", "events_url": "https://api.github.com/users/vince62s/events{/privacy}", "followers_url": "https://api.github.com/users/vince62s/followers", "following_url": "https://api.github.com/users/vince62s/following{/other_user}", "gists_url": "https://api.github.com/users/vince62s/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/vince62s", "id": 15141326, "login": "vince62s", "node_id": "MDQ6VXNlcjE1MTQxMzI2", "organizations_url": "https://api.github.com/users/vince62s/orgs", "received_events_url": "https://api.github.com/users/vince62s/received_events", "repos_url": "https://api.github.com/users/vince62s/repos", "site_admin": false, "starred_url": "https://api.github.com/users/vince62s/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/vince62s/subscriptions", "type": "User", "url": "https://api.github.com/users/vince62s", "user_view_type": "public" }
[]
closed
false
null
[]
null
[]
2025-03-20T20:45:59Z
2025-03-21T07:20:37Z
2025-03-21T07:20:37Z
NONE
null
null
{ "completed": 0, "percent_completed": 0, "total": 0 }
### Describe the bug According to the doc here: https://huggingface.co/docs/hub/datasets-file-names-and-splits#custom-split-name it should infer the split name from the subdirectory under data/ or from the beginning of the file names in data/. When doing this manually through the web upload interface it does not work: it uses "train" as the only split. example: https://huggingface.co/datasets/eole-nlp/estimator_chatml ### Steps to reproduce the bug Follow the link above ### Expected behavior There should be two splits, "mlqe" and "1720_da" ### Environment info website
{ "avatar_url": "https://avatars.githubusercontent.com/u/15141326?v=4", "events_url": "https://api.github.com/users/vince62s/events{/privacy}", "followers_url": "https://api.github.com/users/vince62s/followers", "following_url": "https://api.github.com/users/vince62s/following{/other_user}", "gists_url": "https://api.github.com/users/vince62s/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/vince62s", "id": 15141326, "login": "vince62s", "node_id": "MDQ6VXNlcjE1MTQxMzI2", "organizations_url": "https://api.github.com/users/vince62s/orgs", "received_events_url": "https://api.github.com/users/vince62s/received_events", "repos_url": "https://api.github.com/users/vince62s/repos", "site_admin": false, "starred_url": "https://api.github.com/users/vince62s/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/vince62s/subscriptions", "type": "User", "url": "https://api.github.com/users/vince62s", "user_view_type": "public" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/7469/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/7469/timeline
null
completed
null
null
10.577222
234
https://api.github.com/repos/huggingface/datasets/issues/7461
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/7461/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/7461/comments
https://api.github.com/repos/huggingface/datasets/issues/7461/events
https://github.com/huggingface/datasets/issues/7461
2,925,608,123
I_kwDODunzps6uYTy7
7,461
List of images behave differently on IterableDataset and Dataset
{ "avatar_url": "https://avatars.githubusercontent.com/u/1288009?v=4", "events_url": "https://api.github.com/users/FredrikNoren/events{/privacy}", "followers_url": "https://api.github.com/users/FredrikNoren/followers", "following_url": "https://api.github.com/users/FredrikNoren/following{/other_user}", "gists_url": "https://api.github.com/users/FredrikNoren/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/FredrikNoren", "id": 1288009, "login": "FredrikNoren", "node_id": "MDQ6VXNlcjEyODgwMDk=", "organizations_url": "https://api.github.com/users/FredrikNoren/orgs", "received_events_url": "https://api.github.com/users/FredrikNoren/received_events", "repos_url": "https://api.github.com/users/FredrikNoren/repos", "site_admin": false, "starred_url": "https://api.github.com/users/FredrikNoren/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/FredrikNoren/subscriptions", "type": "User", "url": "https://api.github.com/users/FredrikNoren", "user_view_type": "public" }
[]
closed
false
null
[]
null
[ "Hi ! Can you try with `datasets` ^3.4 released recently ? on my side it works with IterableDataset on the recent version :)\n\n```python\nIn [20]: def train_iterable_gen():\n ...: images = np.array(load_image(\"https://cdn.britannica.com/61/93061-050-99147DCE/Statue-of-Liberty-Island-New-York-Bay.jpg\").resize((128, 128)))\n ...: yield {\n ...: \"images\": np.expand_dims(images, axis=0),\n ...: \"messages\": [\n ...: {\n ...: \"role\": \"user\",\n ...: \"content\": [{\"type\": \"image\", \"url\": \"https://cdn.britannica.com/61/93061-050-99147DCE/Statue-of-Liberty-Island-New-York-Bay.jpg\" }]\n ...: },\n ...: {\n ...: \"role\": \"assistant\",\n ...: \"content\": [{\"type\": \"text\", \"text\": \"duck\" }]\n ...: }\n ...: ]\n ...: }\n ...: \n ...: train_ds = IterableDataset.from_generator(train_iterable_gen,\n ...: features=Features({\n ...: 'images': [datasets.Image(mode=None, decode=True, id=None)],\n ...: 'messages': [{'content': [{'text': datasets.Value(dtype='string', id=None), 'type': datasets.Value(dtype='string', id=None) }],\n ...: 'role': datasets.Value(dtype='string', id=None)}]\n ...: } )\n ...: )\n\n\nIn [21]: \n\nIn [21]: next(iter(train_ds))\n/Users/quentinlhoest/hf/datasets/src/datasets/features/image.py:338: UserWarning: Downcasting array dtype int64 to uint8 to be compatible with 'Pillow'\n warnings.warn(f\"Downcasting array dtype {dtype} to {dest_dtype} to be compatible with 'Pillow'\")\nOut[21]: \n{'images': [<PIL.PngImagePlugin.PngImageFile image mode=RGB size=128x128>],\n 'messages': [{'content': [{'text': None, 'type': 'image'}], 'role': 'user'},\n {'content': [{'type': 'text', 'text': 'duck'}], 'role': 'assistant'}]}\n```", "Hm I tried it here and it works as expected, even on datasets 3.3.2. I guess maybe something in the SFTTrainer is doing additional processing on the dataset, I'll have a look there.\n\nThanks @lhoestq!" ]
2025-03-17T15:59:23Z
2025-03-18T08:57:17Z
2025-03-18T08:57:16Z
NONE
null
null
{ "completed": 0, "percent_completed": 0, "total": 0 }
### Describe the bug This code: ```python def train_iterable_gen(): images = np.array(load_image("https://cdn.britannica.com/61/93061-050-99147DCE/Statue-of-Liberty-Island-New-York-Bay.jpg").resize((128, 128))) yield { "images": np.expand_dims(images, axis=0), "messages": [ { "role": "user", "content": [{"type": "image", "url": "https://cdn.britannica.com/61/93061-050-99147DCE/Statue-of-Liberty-Island-New-York-Bay.jpg" }] }, { "role": "assistant", "content": [{"type": "text", "text": "duck" }] } ] } train_ds = Dataset.from_generator(train_iterable_gen, features=Features({ 'images': [datasets.Image(mode=None, decode=True, id=None)], 'messages': [{'content': [{'text': datasets.Value(dtype='string', id=None), 'type': datasets.Value(dtype='string', id=None) }], 'role': datasets.Value(dtype='string', id=None)}] } ) ) ``` works as I'd expect; if I iterate the dataset then the `images` column returns a `List[PIL.Image.Image]`, i.e. `'images': [<PIL.PngImagePlugin.PngImageFile image mode=RGB size=128x128 at 0x77EFB7EF4680>]`. But if I change `Dataset` to `IterableDataset`, the `images` column changes into `'images': [{'path': None, 'bytes': ..]` ### Steps to reproduce the bug The code above + ```python def load_image(url): response = requests.get(url) image = Image.open(io.BytesIO(response.content)) return image ``` I'm feeding it to SFTTrainer ### Expected behavior Dataset and IterableDataset would behave the same ### Environment info ```yaml requires-python = ">=3.12" dependencies = [ "av>=14.1.0", "boto3>=1.36.7", "datasets>=3.3.2", "docker>=7.1.0", "google-cloud-storage>=2.19.0", "grpcio>=1.70.0", "grpcio-tools>=1.70.0", "moviepy>=2.1.2", "open-clip-torch>=2.31.0", "opencv-python>=4.11.0.86; sys_platform == 'darwin'", "opencv-python-headless>=4.11.0.86; sys_platform == 'linux'", "pandas>=2.2.3", "pillow>=10.4.0", "plotly>=6.0.0", "py-spy>=0.4.0", "pydantic>=2.10.6", "pydantic-settings>=2.7.1", "pymysql>=1.1.1", "ray[data,default,serve,train,tune]>=2.43.0", "torch>=2.6.0", "torchmetrics>=1.6.1", "torchvision>=0.21.0", "transformers[torch]@git+https://github.com/huggingface/transformers", "wandb>=0.19.4", # https://github.com/Dao-AILab/flash-attention/issues/833 "flash-attn @ https://github.com/Dao-AILab/flash-attention/releases/download/v2.7.3/flash_attn-2.7.3+cu12torch2.6cxx11abiFALSE-cp312-cp312-linux_x86_64.whl; sys_platform == 'linux'", "trl@https://github.com/huggingface/trl.git", "peft>=0.14.0", ] ```
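On the affected version, a stopgap is to decode the encoded dicts manually before handing examples to the trainer; a sketch, with `ensure_pil` as a hypothetical helper:

```python
import io

from PIL import Image


def ensure_pil(images):
    # IterableDataset may yield encoded {'path': ..., 'bytes': ...} dicts
    # instead of decoded PIL images; normalize both forms to PIL.Image.
    out = []
    for img in images:
        if isinstance(img, dict):
            img = Image.open(io.BytesIO(img["bytes"]))
        out.append(img)
    return out


# Usage inside an iteration loop:
# for ex in train_ds:
#     pil_images = ensure_pil(ex["images"])
```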
{ "avatar_url": "https://avatars.githubusercontent.com/u/1288009?v=4", "events_url": "https://api.github.com/users/FredrikNoren/events{/privacy}", "followers_url": "https://api.github.com/users/FredrikNoren/followers", "following_url": "https://api.github.com/users/FredrikNoren/following{/other_user}", "gists_url": "https://api.github.com/users/FredrikNoren/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/FredrikNoren", "id": 1288009, "login": "FredrikNoren", "node_id": "MDQ6VXNlcjEyODgwMDk=", "organizations_url": "https://api.github.com/users/FredrikNoren/orgs", "received_events_url": "https://api.github.com/users/FredrikNoren/received_events", "repos_url": "https://api.github.com/users/FredrikNoren/repos", "site_admin": false, "starred_url": "https://api.github.com/users/FredrikNoren/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/FredrikNoren/subscriptions", "type": "User", "url": "https://api.github.com/users/FredrikNoren", "user_view_type": "public" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/7461/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/7461/timeline
null
completed
null
null
16.964722
241
https://api.github.com/repos/huggingface/datasets/issues/7458
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/7458/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/7458/comments
https://api.github.com/repos/huggingface/datasets/issues/7458/events
https://github.com/huggingface/datasets/issues/7458
2,925,403,528
I_kwDODunzps6uXh2I
7,458
Loading the `laion/filtered-wit` dataset in streaming mode fails on v3.4.0
{ "avatar_url": "https://avatars.githubusercontent.com/u/23343961?v=4", "events_url": "https://api.github.com/users/nikita-savelyevv/events{/privacy}", "followers_url": "https://api.github.com/users/nikita-savelyevv/followers", "following_url": "https://api.github.com/users/nikita-savelyevv/following{/other_user}", "gists_url": "https://api.github.com/users/nikita-savelyevv/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/nikita-savelyevv", "id": 23343961, "login": "nikita-savelyevv", "node_id": "MDQ6VXNlcjIzMzQzOTYx", "organizations_url": "https://api.github.com/users/nikita-savelyevv/orgs", "received_events_url": "https://api.github.com/users/nikita-savelyevv/received_events", "repos_url": "https://api.github.com/users/nikita-savelyevv/repos", "site_admin": false, "starred_url": "https://api.github.com/users/nikita-savelyevv/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/nikita-savelyevv/subscriptions", "type": "User", "url": "https://api.github.com/users/nikita-savelyevv", "user_view_type": "public" }
[]
closed
false
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq", "user_view_type": "public" }
[ { "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq", "user_view_type": "public" } ]
null
[ "thanks for reporting, I released 3.4.1 with a fix" ]
2025-03-17T14:54:02Z
2025-03-17T16:02:04Z
2025-03-17T15:25:55Z
NONE
null
null
{ "completed": 0, "percent_completed": 0, "total": 0 }
### Describe the bug Loading https://huggingface.co/datasets/laion/filtered-wit in streaming mode fails after update to `datasets==3.4.0`. The dataset loads fine on v3.3.2. ### Steps to reproduce the bug Steps to reproduce: ``` pip install datasets==3.4.0 python -c "from datasets import load_dataset; load_dataset('laion/filtered-wit', split='train', streaming=True)" ``` Results in: ``` $ python -c "from datasets import load_dataset; load_dataset('laion/filtered-wit', split='train', streaming=True)" Repo card metadata block was not found. Setting CardData to empty. Resolving data files: 100%|█████████████████████████████████████████████████████████████████████████████████████████████| 560/560 [00:00<00:00, 2280.24it/s] Traceback (most recent call last): File "<string>", line 1, in <module> File "/home/nsavel/venvs/tmp/lib/python3.9/site-packages/datasets/load.py", line 2080, in load_dataset return builder_instance.as_streaming_dataset(split=split) File "/home/nsavel/venvs/tmp/lib/python3.9/site-packages/datasets/builder.py", line 1265, in as_streaming_dataset splits_generators = {sg.name: sg for sg in self._split_generators(dl_manager)} File "/home/nsavel/venvs/tmp/lib/python3.9/site-packages/datasets/packaged_modules/parquet/parquet.py", line 49, in _split_generators data_files = dl_manager.download_and_extract(self.config.data_files) File "/home/nsavel/venvs/tmp/lib/python3.9/site-packages/datasets/download/streaming_download_manager.py", line 169, in download_and_extract return self.extract(self.download(url_or_urls)) File "/home/nsavel/venvs/tmp/lib/python3.9/site-packages/datasets/download/streaming_download_manager.py", line 121, in extract urlpaths = map_nested(self._extract, url_or_urls, map_tuple=True) File "/home/nsavel/venvs/tmp/lib/python3.9/site-packages/datasets/utils/py_utils.py", line 496, in map_nested mapped = [ File "/home/nsavel/venvs/tmp/lib/python3.9/site-packages/datasets/utils/py_utils.py", line 497, in <listcomp> map_nested( File "/home/nsavel/venvs/tmp/lib/python3.9/site-packages/datasets/utils/py_utils.py", line 513, in map_nested mapped = [ File "/home/nsavel/venvs/tmp/lib/python3.9/site-packages/datasets/utils/py_utils.py", line 514, in <listcomp> _single_map_nested((function, obj, batched, batch_size, types, None, True, None)) File "/home/nsavel/venvs/tmp/lib/python3.9/site-packages/datasets/utils/py_utils.py", line 375, in _single_map_nested return function(data_struct) File "/home/nsavel/venvs/tmp/lib/python3.9/site-packages/datasets/download/streaming_download_manager.py", line 131, in _extract raise NotImplementedError( NotImplementedError: Extraction protocol for TAR archives like 'hf://datasets/laion/filtered-wit@c38ca7464e9934d9a49f88b3f60f5ad63b245465/data/00000.tar' is not implemented in streaming mode. Please use `dl_manager.iter_archive` instead. Example usage: url = dl_manager.download(url) tar_archive_iterator = dl_manager.iter_archive(url) for filename, file in tar_archive_iterator: ... ``` ### Expected behavior Dataset loads successfully. ### Environment info Ubuntu 20.04.6. Python 3.9. Datasets 3.4.0. pip freeze: ``` aiohappyeyeballs==2.6.1 aiohttp==3.11.14 aiosignal==1.3.2 async-timeout==5.0.1 attrs==25.3.0 certifi==2025.1.31 charset-normalizer==3.4.1 datasets==3.4.0 dill==0.3.8 filelock==3.18.0 frozenlist==1.5.0 fsspec==2024.12.0 huggingface-hub==0.29.3 idna==3.10 multidict==6.1.0 multiprocess==0.70.16 numpy==2.0.2 packaging==24.2 pandas==2.2.3 propcache==0.3.0 pyarrow==19.0.1 python-dateutil==2.9.0.post0 pytz==2025.1 PyYAML==6.0.2 requests==2.32.3 six==1.17.0 tqdm==4.67.1 typing_extensions==4.12.2 tzdata==2025.1 urllib3==2.3.0 xxhash==3.5.0 yarl==1.18.3 ```
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq", "user_view_type": "public" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/7458/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/7458/timeline
null
completed
null
null
0.531389
244
https://api.github.com/repos/huggingface/datasets/issues/7457
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/7457/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/7457/comments
https://api.github.com/repos/huggingface/datasets/issues/7457/events
https://github.com/huggingface/datasets/issues/7457
2,924,886,467
I_kwDODunzps6uVjnD
7,457
Document the HF_DATASETS_CACHE env variable
{ "avatar_url": "https://avatars.githubusercontent.com/u/92166725?v=4", "events_url": "https://api.github.com/users/LSerranoPEReN/events{/privacy}", "followers_url": "https://api.github.com/users/LSerranoPEReN/followers", "following_url": "https://api.github.com/users/LSerranoPEReN/following{/other_user}", "gists_url": "https://api.github.com/users/LSerranoPEReN/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/LSerranoPEReN", "id": 92166725, "login": "LSerranoPEReN", "node_id": "U_kgDOBX5aRQ", "organizations_url": "https://api.github.com/users/LSerranoPEReN/orgs", "received_events_url": "https://api.github.com/users/LSerranoPEReN/received_events", "repos_url": "https://api.github.com/users/LSerranoPEReN/repos", "site_admin": false, "starred_url": "https://api.github.com/users/LSerranoPEReN/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/LSerranoPEReN/subscriptions", "type": "User", "url": "https://api.github.com/users/LSerranoPEReN", "user_view_type": "public" }
[ { "color": "a2eeef", "default": true, "description": "New feature or request", "id": 1935892871, "name": "enhancement", "node_id": "MDU6TGFiZWwxOTM1ODkyODcx", "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement" } ]
closed
false
{ "avatar_url": "https://avatars.githubusercontent.com/u/129883215?v=4", "events_url": "https://api.github.com/users/Harry-Yang0518/events{/privacy}", "followers_url": "https://api.github.com/users/Harry-Yang0518/followers", "following_url": "https://api.github.com/users/Harry-Yang0518/following{/other_user}", "gists_url": "https://api.github.com/users/Harry-Yang0518/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/Harry-Yang0518", "id": 129883215, "login": "Harry-Yang0518", "node_id": "U_kgDOB73cTw", "organizations_url": "https://api.github.com/users/Harry-Yang0518/orgs", "received_events_url": "https://api.github.com/users/Harry-Yang0518/received_events", "repos_url": "https://api.github.com/users/Harry-Yang0518/repos", "site_admin": false, "starred_url": "https://api.github.com/users/Harry-Yang0518/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Harry-Yang0518/subscriptions", "type": "User", "url": "https://api.github.com/users/Harry-Yang0518", "user_view_type": "public" }
[ { "avatar_url": "https://avatars.githubusercontent.com/u/129883215?v=4", "events_url": "https://api.github.com/users/Harry-Yang0518/events{/privacy}", "followers_url": "https://api.github.com/users/Harry-Yang0518/followers", "following_url": "https://api.github.com/users/Harry-Yang0518/following{/other_user}", "gists_url": "https://api.github.com/users/Harry-Yang0518/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/Harry-Yang0518", "id": 129883215, "login": "Harry-Yang0518", "node_id": "U_kgDOB73cTw", "organizations_url": "https://api.github.com/users/Harry-Yang0518/orgs", "received_events_url": "https://api.github.com/users/Harry-Yang0518/received_events", "repos_url": "https://api.github.com/users/Harry-Yang0518/repos", "site_admin": false, "starred_url": "https://api.github.com/users/Harry-Yang0518/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Harry-Yang0518/subscriptions", "type": "User", "url": "https://api.github.com/users/Harry-Yang0518", "user_view_type": "public" } ]
null
[ "Strongly agree to this, in addition, I am also suffering to change the cache location similar to other issues (since I changed the environmental variables).\nhttps://github.com/huggingface/datasets/issues/6886", "`HF_DATASETS_CACHE` should be documented there indeed, feel free to open a PR :) ", "Hey, I’d love to work on this issue! Could you assign it to me?", "sure ! you can also comment #self-assign in an issue and a bot assigns you automatically :)" ]
2025-03-17T12:24:50Z
2025-05-06T15:54:39Z
2025-05-06T15:54:39Z
NONE
null
null
{ "completed": 0, "percent_completed": 0, "total": 0 }
### Feature request Hello, I have a use case where my team shares models and datasets in a shared directory to avoid duplication. I noticed that the [cache documentation for datasets](https://huggingface.co/docs/datasets/main/en/cache) only mentions the `HF_HOME` environment variable but never `HF_DATASETS_CACHE`. It would be nice to add `HF_DATASETS_CACHE` to the datasets documentation if it's an intended feature. If it's not, I think a deprecation warning would be appreciated. ### Motivation This variable is fully working and is to datasets what `HF_HUB_CACHE` is to models, so it's useful to know that it exists. This seems to be a quick change to implement. ### Your contribution I could contribute, since this only affects a small portion of the documentation.
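For illustration, a minimal sketch of how the variable behaves today, assuming it keeps its current semantics (the shared path is a placeholder):

```python
import os

# Point the datasets cache at a shared directory *before* importing
# datasets, since the value is read when the library is imported.
os.environ["HF_DATASETS_CACHE"] = "/shared/hf/datasets_cache"

import datasets

# The resolved cache location should echo the shared path above.
print(datasets.config.HF_DATASETS_CACHE)
```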
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq", "user_view_type": "public" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/7457/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/7457/timeline
null
completed
null
null
1,203.496944
245
https://api.github.com/repos/huggingface/datasets/issues/7449
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/7449/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/7449/comments
https://api.github.com/repos/huggingface/datasets/issues/7449/events
https://github.com/huggingface/datasets/issues/7449
2,916,235,092
I_kwDODunzps6t0jdU
7,449
Cannot load data with different schemas from different parquet files
{ "avatar_url": "https://avatars.githubusercontent.com/u/39846316?v=4", "events_url": "https://api.github.com/users/li-plus/events{/privacy}", "followers_url": "https://api.github.com/users/li-plus/followers", "following_url": "https://api.github.com/users/li-plus/following{/other_user}", "gists_url": "https://api.github.com/users/li-plus/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/li-plus", "id": 39846316, "login": "li-plus", "node_id": "MDQ6VXNlcjM5ODQ2MzE2", "organizations_url": "https://api.github.com/users/li-plus/orgs", "received_events_url": "https://api.github.com/users/li-plus/received_events", "repos_url": "https://api.github.com/users/li-plus/repos", "site_admin": false, "starred_url": "https://api.github.com/users/li-plus/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/li-plus/subscriptions", "type": "User", "url": "https://api.github.com/users/li-plus", "user_view_type": "public" }
[]
closed
false
null
[]
null
[ "Hi ! `load_dataset` expects all the data_files to have the same schema.\n\nMaybe you can try enforcing certain `features` using:\n\n```python\nfeatures = Features({\"conversations\": {'content': Value('string'), 'role': Value('string',)}})\nds = load_dataset(..., features=features)\n```", "Thanks! It works if I explicitly specify all nested fields of the data." ]
2025-03-13T08:14:49Z
2025-03-17T07:27:48Z
2025-03-17T07:27:46Z
NONE
null
null
{ "completed": 0, "percent_completed": 0, "total": 0 }
### Describe the bug Cannot load samples with optional fields from different files. The schema cannot be correctly derived. ### Steps to reproduce the bug When I place two samples with an optional field `some_extra_field` within a single parquet file, it can be loaded via `load_dataset`. ```python import pandas as pd from datasets import load_dataset data = [ {'conversations': {'role': 'user', 'content': 'hello'}}, {'conversations': {'role': 'user', 'content': 'hi', 'some_extra_field': 'some_value'}} ] df = pd.DataFrame(data) df.to_parquet('data.parquet') dataset = load_dataset('parquet', data_files='data.parquet', split='train') print(dataset.features) ``` The schema can be derived. `some_extra_field` is set to None for the first row where it is absent. ``` {'conversations': {'content': Value(dtype='string', id=None), 'role': Value(dtype='string', id=None), 'some_extra_field': Value(dtype='string', id=None)}} ``` However, when I separate the samples into different files, it cannot be loaded. ```python import pandas as pd from datasets import load_dataset data1 = [{'conversations': {'role': 'user', 'content': 'hello'}}] pd.DataFrame(data1).to_parquet('data1.parquet') data2 = [{'conversations': {'role': 'user', 'content': 'hi', 'some_extra_field': 'some_value'}}] pd.DataFrame(data2).to_parquet('data2.parquet') dataset = load_dataset('parquet', data_files=['data1.parquet', 'data2.parquet'], split='train') print(dataset.features) ``` Traceback: ``` Traceback (most recent call last): File "/home/tiger/.local/lib/python3.9/site-packages/datasets/builder.py", line 1854, in _prepare_split_single for _, table in generator: File "/home/tiger/.local/lib/python3.9/site-packages/datasets/packaged_modules/parquet/parquet.py", line 106, in _generate_tables yield f"{file_idx}_{batch_idx}", self._cast_table(pa_table) File "/home/tiger/.local/lib/python3.9/site-packages/datasets/packaged_modules/parquet/parquet.py", line 73, in _cast_table pa_table = table_cast(pa_table, self.info.features.arrow_schema) File "/home/tiger/.local/lib/python3.9/site-packages/datasets/table.py", line 2292, in table_cast return cast_table_to_schema(table, schema) File "/home/tiger/.local/lib/python3.9/site-packages/datasets/table.py", line 2245, in cast_table_to_schema arrays = [ File "/home/tiger/.local/lib/python3.9/site-packages/datasets/table.py", line 2246, in <listcomp> cast_array_to_feature( File "/home/tiger/.local/lib/python3.9/site-packages/datasets/table.py", line 1795, in wrapper return pa.chunked_array([func(chunk, *args, **kwargs) for chunk in array.chunks]) File "/home/tiger/.local/lib/python3.9/site-packages/datasets/table.py", line 1795, in <listcomp> return pa.chunked_array([func(chunk, *args, **kwargs) for chunk in array.chunks]) File "/home/tiger/.local/lib/python3.9/site-packages/datasets/table.py", line 2108, in cast_array_to_feature raise TypeError(f"Couldn't cast array of type\n{_short_str(array.type)}\nto\n{_short_str(feature)}") TypeError: Couldn't cast array of type struct<content: string, role: string, some_extra_field: string> to {'content': Value(dtype='string', id=None), 'role': Value(dtype='string', id=None)} ``` ### Expected behavior Correctly load data with optional fields from different parquet files. ### Environment info - `datasets` version: 3.3.2 - Platform: Linux-5.10.135.bsk.4-amd64-x86_64-with-glibc2.31 - Python version: 3.9.2 - `huggingface_hub` version: 0.28.1 - PyArrow version: 17.0.0 - Pandas version: 2.2.2 - `fsspec` version: 2024.3.1
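Following the maintainer's suggestion in the comments, a sketch of the working version: declare every nested field explicitly, including the optional one, so both files are cast to a single schema (rows missing the field get None).

```python
from datasets import Features, Value, load_dataset

# Explicit schema that covers the optional nested field.
features = Features(
    {
        "conversations": {
            "content": Value("string"),
            "role": Value("string"),
            "some_extra_field": Value("string"),
        }
    }
)
ds = load_dataset(
    "parquet",
    data_files=["data1.parquet", "data2.parquet"],
    split="train",
    features=features,
)
print(ds.features)
```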
{ "avatar_url": "https://avatars.githubusercontent.com/u/39846316?v=4", "events_url": "https://api.github.com/users/li-plus/events{/privacy}", "followers_url": "https://api.github.com/users/li-plus/followers", "following_url": "https://api.github.com/users/li-plus/following{/other_user}", "gists_url": "https://api.github.com/users/li-plus/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/li-plus", "id": 39846316, "login": "li-plus", "node_id": "MDQ6VXNlcjM5ODQ2MzE2", "organizations_url": "https://api.github.com/users/li-plus/orgs", "received_events_url": "https://api.github.com/users/li-plus/received_events", "repos_url": "https://api.github.com/users/li-plus/repos", "site_admin": false, "starred_url": "https://api.github.com/users/li-plus/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/li-plus/subscriptions", "type": "User", "url": "https://api.github.com/users/li-plus", "user_view_type": "public" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/7449/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/7449/timeline
null
completed
null
null
95.215833
253
https://api.github.com/repos/huggingface/datasets/issues/7447
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/7447/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/7447/comments
https://api.github.com/repos/huggingface/datasets/issues/7447/events
https://github.com/huggingface/datasets/issues/7447
2,915,233,248
I_kwDODunzps6twu3g
7,447
Epochs shortened after resuming mid-epoch with Iterable dataset+StatefulDataloader(persistent_workers=True)
{ "avatar_url": "https://avatars.githubusercontent.com/u/4356534?v=4", "events_url": "https://api.github.com/users/dhruvdcoder/events{/privacy}", "followers_url": "https://api.github.com/users/dhruvdcoder/followers", "following_url": "https://api.github.com/users/dhruvdcoder/following{/other_user}", "gists_url": "https://api.github.com/users/dhruvdcoder/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/dhruvdcoder", "id": 4356534, "login": "dhruvdcoder", "node_id": "MDQ6VXNlcjQzNTY1MzQ=", "organizations_url": "https://api.github.com/users/dhruvdcoder/orgs", "received_events_url": "https://api.github.com/users/dhruvdcoder/received_events", "repos_url": "https://api.github.com/users/dhruvdcoder/repos", "site_admin": false, "starred_url": "https://api.github.com/users/dhruvdcoder/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/dhruvdcoder/subscriptions", "type": "User", "url": "https://api.github.com/users/dhruvdcoder", "user_view_type": "public" }
[]
closed
false
null
[]
null
[ "Thanks for reporting ! Maybe we should store the epoch in the state_dict, and then when the dataset is iterated on again after setting a new epoch it should restart from scratch instead of resuming ? wdyt ?", "But why does this only happen when `persistent_workers=True`? I would expect it to work correctly even without storing the epoch number in the state_dict of the iterable dataset. ", "I think persistent_workers=False simply ignores the dataset state_dict when it starts a new epoch, that's why the issue doesn't appear in that case", "I opened https://github.com/huggingface/datasets/pull/7451 to fix the issue, let me know if it works for you", "I just released `datasets` 3.4 that includes the fix :)\n\nPS: in your script you probably want to set the epoch like this, otherwise it's still set to 0 after the first epoch:\n\n```diff\n if state_dict is None:\n- ds.set_epoch(epoch)\n epoch += 1\n+ ds.set_epoch(epoch)\n```" ]
2025-03-12T21:41:05Z
2025-03-14T17:26:59Z
2025-03-14T10:50:10Z
NONE
null
null
{ "completed": 0, "percent_completed": 0, "total": 0 }
### Describe the bug When `torchdata.stateful_dataloader.StatefulDataLoader(persistent_workers=True)` is used, the epochs after resuming only iterate through the examples that were left in the epoch when the training was interrupted. For example, in the script below training is interrupted on step 124 (epoch 1) when 3 batches are left. Then after resuming, the rest of the epochs (2 and 3) only iterate through these 3 batches. ### Steps to reproduce the bug Run the following script with and without PERSISTENT_WORKERS=true. ```python # !/usr/bin/env python3 # torch==2.5.1 # datasets==3.3.2 # torchdata>=0.9.0 import datasets import pprint from torchdata.stateful_dataloader import StatefulDataLoader import os PERSISTENT_WORKERS = ( os.environ.get("PERSISTENT_WORKERS", "False").lower() == "true" ) # PERSISTENT_WORKERS = True # Incorrect resume # ds = datasets.load_from_disk("dataset").to_iterable_dataset(num_shards=4) def generator(): for i in range(128): yield {"x": i} ds = datasets.Dataset.from_generator( generator, features=datasets.Features({"x": datasets.Value("int32")}) ).to_iterable_dataset(num_shards=4) dl = StatefulDataLoader( ds, batch_size=2, num_workers=2, persistent_workers=PERSISTENT_WORKERS ) global_step = 0 epoch = 0 ds_state_dict = None state_dict = None resumed = False while True: if epoch >= 3: break if state_dict is not None: dl.load_state_dict(state_dict) state_dict = None ds_state_dict = None resumed = True print("resumed") for i, batch in enumerate(dl): print(f"epoch: {epoch}, global_step: {global_step}, batch: {batch}") global_step += 1 # consume datapoint # simulate error if global_step == 124 and not resumed: ds_state_dict = ds.state_dict() state_dict = dl.state_dict() print("checkpoint") print("ds_state_dict") pprint.pprint(ds_state_dict) print("dl_state_dict") pprint.pprint(state_dict) break if state_dict is None: ds.set_epoch(epoch) epoch += 1 ``` The script checkpoints when there are three batches left in the second epoch. After resuming, only the last three batches are repeated in the rest of the epochs. If it helps, the following are the two state_dicts for the dataloader saved at the same step with the two settings. The left one is for `PERSISTENT_WORKERS=False` ![Image](https://github.com/user-attachments/assets/c97d6502-d7bd-4ef4-ae2d-66fe1a9732b1) ### Expected behavior All the elements in the dataset should be iterated through in the epochs following the one where we resumed. The expected behavior can be seen by setting `PERSISTENT_WORKERS=False`. ### Environment info torch==2.5.1 datasets==3.3.2 torchdata>=0.9.0
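Applying the PS from the maintainer's comment, the tail of the loop in the script above would become (a fragment of the same script, not standalone code):

```python
    if state_dict is None:
        epoch += 1
        ds.set_epoch(epoch)  # now reflects the epoch about to start
```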
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq", "user_view_type": "public" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/7447/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/7447/timeline
null
completed
null
null
37.151389
255
https://api.github.com/repos/huggingface/datasets/issues/7433
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/7433/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/7433/comments
https://api.github.com/repos/huggingface/datasets/issues/7433/events
https://github.com/huggingface/datasets/issues/7433
2,890,240,400
I_kwDODunzps6sRZGQ
7,433
`Dataset.map` ignores existing caches and remaps when run with different `num_proc`
{ "avatar_url": "https://avatars.githubusercontent.com/u/27844407?v=4", "events_url": "https://api.github.com/users/ringohoffman/events{/privacy}", "followers_url": "https://api.github.com/users/ringohoffman/followers", "following_url": "https://api.github.com/users/ringohoffman/following{/other_user}", "gists_url": "https://api.github.com/users/ringohoffman/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/ringohoffman", "id": 27844407, "login": "ringohoffman", "node_id": "MDQ6VXNlcjI3ODQ0NDA3", "organizations_url": "https://api.github.com/users/ringohoffman/orgs", "received_events_url": "https://api.github.com/users/ringohoffman/received_events", "repos_url": "https://api.github.com/users/ringohoffman/repos", "site_admin": false, "starred_url": "https://api.github.com/users/ringohoffman/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ringohoffman/subscriptions", "type": "User", "url": "https://api.github.com/users/ringohoffman", "user_view_type": "public" }
[]
closed
false
null
[]
null
[ "This feels related: https://github.com/huggingface/datasets/issues/3044", "@lhoestq This comment specifically, I agree:\n\n* https://github.com/huggingface/datasets/issues/3044#issuecomment-1239877570\n\n> Almost a year later and I'm in a similar boat. Using custom fingerprints and when using multiprocessing the cached datasets are saved with a template at the end of the filename (something like \"000001_of_000008\" for every process of num_proc). So if in the next time you run the script you set num_proc to a different number, the cache cannot be used.\n> \n> Is there any way to get around this? I am processing a huge dataset so I do the processing on one machine and then transfer the processed data to another in its cache dir but currently that's not possible due to num_proc mismatch.\n\n" ]
2025-03-03T05:51:26Z
2025-05-12T15:14:09Z
2025-05-12T15:14:09Z
NONE
null
null
{ "completed": 0, "percent_completed": 0, "total": 0 }
### Describe the bug If you `map` a dataset and save it to a specific `cache_file_name` with a specific `num_proc`, and then call map again with that same existing `cache_file_name` but a different `num_proc`, the dataset will be re-mapped. ### Steps to reproduce the bug 1. Download a dataset ```python import datasets dataset = datasets.load_dataset("ylecun/mnist") ``` ``` Generating train split: 100%|██████████| 60000/60000 [00:00<00:00, 116429.85 examples/s] Generating test split: 100%|██████████| 10000/10000 [00:00<00:00, 103310.27 examples/s] ``` 2. `map` and cache it with a specific `num_proc` ```python cache_file_name="./cache/train.map" dataset["train"].map(lambda x: x, cache_file_name=cache_file_name, num_proc=2) ``` ``` Map (num_proc=2): 100%|██████████| 60000/60000 [00:01<00:00, 53764.03 examples/s] ``` 3. `map` it with a different `num_proc` and the same `cache_file_name` as before ```python dataset["train"].map(lambda x: x, cache_file_name=cache_file_name, num_proc=3) ``` ``` Map (num_proc=3): 100%|██████████| 60000/60000 [00:00<00:00, 65377.12 examples/s] ``` ### Expected behavior If I specify an existing `cache_file_name`, I don't expect using a different `num_proc` than the one that was used to generate it to cause the dataset to be re-mapped. ### Environment info ```console $ datasets-cli env - `datasets` version: 3.3.2 - Platform: Linux-5.15.0-131-generic-x86_64-with-glibc2.35 - Python version: 3.10.16 - `huggingface_hub` version: 0.29.1 - PyArrow version: 19.0.1 - Pandas version: 2.2.3 - `fsspec` version: 2024.12.0 ```
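A minimal mitigation sketch given the behavior above: treat `num_proc` as part of the cache identity and keep it fixed across runs, since (per the linked discussion) the per-process cache shards are written with a per-worker suffix such as "_00000_of_00002".

```python
# Hypothetical constant shared by every run that touches this cache; with
# a matching num_proc the existing shards are reused instead of remapped.
NUM_PROC = 2

mapped = dataset["train"].map(
    lambda x: x,
    cache_file_name="./cache/train.map",
    num_proc=NUM_PROC,
)
```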
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq", "user_view_type": "public" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/7433/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/7433/timeline
null
completed
null
null
1,689.378611
269
https://api.github.com/repos/huggingface/datasets/issues/7430
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/7430/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/7430/comments
https://api.github.com/repos/huggingface/datasets/issues/7430/events
https://github.com/huggingface/datasets/issues/7430
2,886,922,573
I_kwDODunzps6sEvFN
7,430
Error in code "Time to slice and dice" from course "NLP Course"
{ "avatar_url": "https://avatars.githubusercontent.com/u/122965300?v=4", "events_url": "https://api.github.com/users/Yurkmez/events{/privacy}", "followers_url": "https://api.github.com/users/Yurkmez/followers", "following_url": "https://api.github.com/users/Yurkmez/following{/other_user}", "gists_url": "https://api.github.com/users/Yurkmez/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/Yurkmez", "id": 122965300, "login": "Yurkmez", "node_id": "U_kgDOB1RNNA", "organizations_url": "https://api.github.com/users/Yurkmez/orgs", "received_events_url": "https://api.github.com/users/Yurkmez/received_events", "repos_url": "https://api.github.com/users/Yurkmez/repos", "site_admin": false, "starred_url": "https://api.github.com/users/Yurkmez/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Yurkmez/subscriptions", "type": "User", "url": "https://api.github.com/users/Yurkmez", "user_view_type": "public" }
[]
closed
false
null
[]
null
[ "You should open an issue in the NLP course website / github page. I'm closing this issue if you don't mind", "ok, i don't mind, i'll mark the error there" ]
2025-02-28T11:36:10Z
2025-03-05T11:32:47Z
2025-03-03T17:52:15Z
NONE
null
null
{ "completed": 0, "percent_completed": 0, "total": 0 }
### Describe the bug When we execute the code ``` frequencies = ( train_df["condition"] .value_counts() .to_frame() .reset_index() .rename(columns={"index": "condition", "condition": "frequency"}) ) frequencies.head() ``` the expected output is a table with columns condition | frequency: birth control | 27655; depression | 8023; acne | 5209; anxiety | 4991; pain | 4744. But the actual output is different, with columns frequency | count: birth control | 27655; depression | 8023; acne | 5209; anxiety | 4991; pain | 4744. This is not correct; the correct code is ``` frequencies = ( train_df["condition"] .value_counts() .to_frame() .reset_index() .rename(columns={"index": "condition", "count": "frequency"}) ) ``` ### Steps to reproduce the bug ``` frequencies = ( train_df["condition"] .value_counts() .to_frame() .reset_index() .rename(columns={"index": "condition", "condition": "frequency"}) ) frequencies.head() ``` ### Expected behavior A table with columns condition | frequency: birth control | 27655; depression | 8023; acne | 5209; anxiety | 4991; pain | 4744. ### Environment info Google Colab
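For context, a tiny standalone check of the behavior being hit here: on recent pandas (2.0 and later), `value_counts` names its output column `count`, which is why the rename key must be `count` rather than `condition`.

```python
import pandas as pd

# Minimal stand-in for the course's train_df["condition"] column.
s = pd.Series(["birth control", "birth control", "depression"], name="condition")
df = s.value_counts().to_frame().reset_index()
print(df.columns.tolist())  # ['condition', 'count'] on pandas >= 2.0
```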
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq", "user_view_type": "public" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/7430/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/7430/timeline
null
completed
null
null
78.268056
272
https://api.github.com/repos/huggingface/datasets/issues/7406
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/7406/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/7406/comments
https://api.github.com/repos/huggingface/datasets/issues/7406/events
https://github.com/huggingface/datasets/issues/7406
2,856,441,206
I_kwDODunzps6qQdV2
7,406
Adding Core Maintainer List to CONTRIBUTING.md
{ "avatar_url": "https://avatars.githubusercontent.com/u/93233241?v=4", "events_url": "https://api.github.com/users/jp1924/events{/privacy}", "followers_url": "https://api.github.com/users/jp1924/followers", "following_url": "https://api.github.com/users/jp1924/following{/other_user}", "gists_url": "https://api.github.com/users/jp1924/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/jp1924", "id": 93233241, "login": "jp1924", "node_id": "U_kgDOBY6gWQ", "organizations_url": "https://api.github.com/users/jp1924/orgs", "received_events_url": "https://api.github.com/users/jp1924/received_events", "repos_url": "https://api.github.com/users/jp1924/repos", "site_admin": false, "starred_url": "https://api.github.com/users/jp1924/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/jp1924/subscriptions", "type": "User", "url": "https://api.github.com/users/jp1924", "user_view_type": "public" }
[ { "color": "a2eeef", "default": true, "description": "New feature or request", "id": 1935892871, "name": "enhancement", "node_id": "MDU6TGFiZWwxOTM1ODkyODcx", "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement" } ]
closed
false
null
[]
null
[ "@lhoestq", "there is no per-module maintainer and the list is me alone nowadays ^^'", "@lhoestq \nOh... I feel for you. \nWhat are your criteria for choosing a core maintainer? \nIt seems like it's too much work for you to manage all this code by yourself.\n\nAlso, if you don't mind, can you check this PR for me?\n#7368 I'd like this to be added as soon as possible because I need it." ]
2025-02-17T00:32:40Z
2025-03-24T10:57:54Z
2025-03-24T10:57:54Z
CONTRIBUTOR
null
null
{ "completed": 0, "percent_completed": 0, "total": 0 }
### Feature request I propose adding a core maintainer list to the `CONTRIBUTING.md` file. ### Motivation The Transformers and Liger-Kernel projects maintain lists of core maintainers for each module. However, the Datasets project doesn't have such a list. ### Your contribution I have nothing to add here.
{ "avatar_url": "https://avatars.githubusercontent.com/u/93233241?v=4", "events_url": "https://api.github.com/users/jp1924/events{/privacy}", "followers_url": "https://api.github.com/users/jp1924/followers", "following_url": "https://api.github.com/users/jp1924/following{/other_user}", "gists_url": "https://api.github.com/users/jp1924/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/jp1924", "id": 93233241, "login": "jp1924", "node_id": "U_kgDOBY6gWQ", "organizations_url": "https://api.github.com/users/jp1924/orgs", "received_events_url": "https://api.github.com/users/jp1924/received_events", "repos_url": "https://api.github.com/users/jp1924/repos", "site_admin": false, "starred_url": "https://api.github.com/users/jp1924/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/jp1924/subscriptions", "type": "User", "url": "https://api.github.com/users/jp1924", "user_view_type": "public" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/7406/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/7406/timeline
null
completed
null
null
850.420556
295
https://api.github.com/repos/huggingface/datasets/issues/7404
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/7404/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/7404/comments
https://api.github.com/repos/huggingface/datasets/issues/7404/events
https://github.com/huggingface/datasets/issues/7404
2,856,366,207
I_kwDODunzps6qQLB_
7,404
Performance regression in `dataset.filter`
{ "avatar_url": "https://avatars.githubusercontent.com/u/82200?v=4", "events_url": "https://api.github.com/users/ttim/events{/privacy}", "followers_url": "https://api.github.com/users/ttim/followers", "following_url": "https://api.github.com/users/ttim/following{/other_user}", "gists_url": "https://api.github.com/users/ttim/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/ttim", "id": 82200, "login": "ttim", "node_id": "MDQ6VXNlcjgyMjAw", "organizations_url": "https://api.github.com/users/ttim/orgs", "received_events_url": "https://api.github.com/users/ttim/received_events", "repos_url": "https://api.github.com/users/ttim/repos", "site_admin": false, "starred_url": "https://api.github.com/users/ttim/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ttim/subscriptions", "type": "User", "url": "https://api.github.com/users/ttim", "user_view_type": "public" }
[]
closed
false
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq", "user_view_type": "public" }
[ { "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq", "user_view_type": "public" } ]
null
[ "Thanks for reporting, I'll fix the regression today", "I just released `datasets` 3.3.1 with a fix, let me know if it's good now :)", "@lhoestq it fixed the issue.\n\nThis was (very) fast, thank you very much!" ]
2025-02-16T22:19:14Z
2025-02-17T17:46:06Z
2025-02-17T14:28:48Z
NONE
null
null
{ "completed": 0, "percent_completed": 0, "total": 0 }
### Describe the bug We're filtering a dataset of ~1M (small-ish) records. At some point in the code we call `dataset.filter`; before (up to and including 3.2.0) it took a couple of seconds, and now it takes 4 hours. We use 16 threads/workers, and the stack traces for them look as follows: ``` Traceback (most recent call last): File "/python/lib/python3.12/site-packages/multiprocess/process.py", line 314, in _bootstrap self.run() File "/python/lib/python3.12/site-packages/multiprocess/process.py", line 108, in run self._target(*self._args, **self._kwargs) File "/python/lib/python3.12/site-packages/multiprocess/pool.py", line 125, in worker result = (True, func(*args, **kwds)) ^^^^^^^^^^^^^^^^^^^ File "/python/lib/python3.12/site-packages/datasets/utils/py_utils.py", line 678, in _write_generator_to_queue for i, result in enumerate(func(**kwargs)): File "/python/lib/python3.12/site-packages/datasets/arrow_dataset.py", line 3511, in _map_single for i, batch in iter_outputs(shard_iterable): File "/python/lib/python3.12/site-packages/datasets/arrow_dataset.py", line 3461, in iter_outputs yield i, apply_function(example, i, offset=offset) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/python/lib/python3.12/site-packages/datasets/arrow_dataset.py", line 3390, in apply_function processed_inputs = function(*fn_args, *additional_args, **fn_kwargs) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/python/lib/python3.12/site-packages/datasets/arrow_dataset.py", line 6416, in get_indices_from_mask_function indices_array = indices_mapping.column(0).take(indices_array) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "pyarrow/table.pxi", line 1079, in pyarrow.lib.ChunkedArray.take File "/python/lib/python3.12/site-packages/pyarrow/compute.py", line 458, in take def take(data, indices, *, boundscheck=True, memory_pool=None): ``` ### Steps to reproduce the bug 1. Save a dataset of 1M records in Arrow format 2. Filter it with 16 threads 3. Observe that filtering takes hours ### Expected behavior Filtering finishes in seconds, as in datasets <= 3.2.0 ### Environment info datasets 3.3.0, python 3.12
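A minimal repro sketch of the setup described above (sizes and predicate are illustrative; per the comments, the regression was fixed in datasets 3.3.1):

```python
import datasets

# ~1M small records, filtered with multiple workers as in the report.
ds = datasets.Dataset.from_dict({"x": list(range(1_000_000))})
filtered = ds.filter(lambda ex: ex["x"] % 2 == 0, num_proc=16)
print(len(filtered))
```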
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq", "user_view_type": "public" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/7404/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/7404/timeline
null
completed
null
null
16.159444
297
https://api.github.com/repos/huggingface/datasets/issues/7389
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/7389/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/7389/comments
https://api.github.com/repos/huggingface/datasets/issues/7389/events
https://github.com/huggingface/datasets/issues/7389
2,843,592,606
I_kwDODunzps6pfcee
7,389
Getting statistics about filtered examples
{ "avatar_url": "https://avatars.githubusercontent.com/u/511073?v=4", "events_url": "https://api.github.com/users/jonathanasdf/events{/privacy}", "followers_url": "https://api.github.com/users/jonathanasdf/followers", "following_url": "https://api.github.com/users/jonathanasdf/following{/other_user}", "gists_url": "https://api.github.com/users/jonathanasdf/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/jonathanasdf", "id": 511073, "login": "jonathanasdf", "node_id": "MDQ6VXNlcjUxMTA3Mw==", "organizations_url": "https://api.github.com/users/jonathanasdf/orgs", "received_events_url": "https://api.github.com/users/jonathanasdf/received_events", "repos_url": "https://api.github.com/users/jonathanasdf/repos", "site_admin": false, "starred_url": "https://api.github.com/users/jonathanasdf/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/jonathanasdf/subscriptions", "type": "User", "url": "https://api.github.com/users/jonathanasdf", "user_view_type": "public" }
[]
closed
false
null
[]
null
[ "You can actually track a running sum in map() or filter() :)\n\n```python\nnum_filtered = 0\n\ndef f(x):\n global num_filtered\n condition = len(x[\"text\"]) < 1000\n if not condition:\n num_filtered += 1\n return condition\n\nds = ds.filter(f)\nprint(num_filtered)\n```\n\nand if you want to use multiprocessing, make sure to use a variable that is shared across processes\n\n\n```python\nfrom multiprocess import Manager\n\nmanager = Manager()\nnum_filtered = manager.Value('i', 0)\n\ndef f(x):\n global num_filtered\n condition = len(x[\"text\"]) < 1000\n if not condition:\n num_filtered.value += 1\n return condition\n\nds = ds.filter(f, num_proc=4)\nprint(num_filtered.value)\n```\n\nPS: `datasets` uses `multiprocess` instead of the `multiprocessing` package to support lambda functions in map() and filter()", "Oh that's great to know!\n\nI guess this value would not be exactly synced with the batch in cases of pre-fetch and shuffle buffers and so on, but that's probably fine. Thanks!" ]
2025-02-10T20:48:29Z
2025-02-11T20:44:15Z
2025-02-11T20:44:13Z
NONE
null
null
{ "completed": 0, "percent_completed": 0, "total": 0 }
@lhoestq wondering if the team has thought about this and if there are any recommendations? Currently when processing datasets some examples are bound to get filtered out, whether it's due to bad format, or length is too long, or any other custom filters that might be getting applied. Let's just focus on the filter by length for now, since that would be something that gets applied dynamically for each training run. Say we want to show a graph in W&B with the running total of the number of filtered examples so far. What would be a good way to go about hooking this up? Because the map/filter operations happen before the DataLoader batches are created, at training time if we're just grabbing batches from the DataLoader then we won't know how many things have been filtered already. But there's not really a good way to include a 'num_filtered' key into the dataset itself either because dataset map/filter process examples independently and don't have a way to track a running sum. The only approach I can kind of think of is having a 'is_filtered' key in the dataset, and then creating a custom batcher/collator that reads that and tracks the metric?
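A sketch of the 'is_filtered' plus custom collator idea floated above (the length rule, batch size, and column names are illustrative, not a recommended API):

```python
from datasets import Dataset
from torch.utils.data import DataLoader

ds = Dataset.from_dict({"text": ["short", "x" * 2000]})

# Keep every example, but mark the ones a filter would have dropped.
ds = ds.map(lambda ex: {"is_filtered": len(ex["text"]) >= 1000})

num_filtered = 0

def collate(batch):
    # Drop marked examples at batching time and keep a running total
    # that a logger such as W&B can read between steps.
    global num_filtered
    kept = [ex for ex in batch if not ex["is_filtered"]]
    num_filtered += len(batch) - len(kept)
    return kept

loader = DataLoader(ds, batch_size=2, collate_fn=collate)
for _ in loader:
    pass
print(num_filtered)  # 1
```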
{ "avatar_url": "https://avatars.githubusercontent.com/u/511073?v=4", "events_url": "https://api.github.com/users/jonathanasdf/events{/privacy}", "followers_url": "https://api.github.com/users/jonathanasdf/followers", "following_url": "https://api.github.com/users/jonathanasdf/following{/other_user}", "gists_url": "https://api.github.com/users/jonathanasdf/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/jonathanasdf", "id": 511073, "login": "jonathanasdf", "node_id": "MDQ6VXNlcjUxMTA3Mw==", "organizations_url": "https://api.github.com/users/jonathanasdf/orgs", "received_events_url": "https://api.github.com/users/jonathanasdf/received_events", "repos_url": "https://api.github.com/users/jonathanasdf/repos", "site_admin": false, "starred_url": "https://api.github.com/users/jonathanasdf/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/jonathanasdf/subscriptions", "type": "User", "url": "https://api.github.com/users/jonathanasdf", "user_view_type": "public" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/7389/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/7389/timeline
null
completed
null
null
23.928889
311
https://api.github.com/repos/huggingface/datasets/issues/7388
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/7388/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/7388/comments
https://api.github.com/repos/huggingface/datasets/issues/7388/events
https://github.com/huggingface/datasets/issues/7388
2,843,188,499
I_kwDODunzps6pd50T
7,388
OSError: [Errno 22] Invalid argument forbidden character
{ "avatar_url": "https://avatars.githubusercontent.com/u/124634542?v=4", "events_url": "https://api.github.com/users/langflogit/events{/privacy}", "followers_url": "https://api.github.com/users/langflogit/followers", "following_url": "https://api.github.com/users/langflogit/following{/other_user}", "gists_url": "https://api.github.com/users/langflogit/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/langflogit", "id": 124634542, "login": "langflogit", "node_id": "U_kgDOB23Frg", "organizations_url": "https://api.github.com/users/langflogit/orgs", "received_events_url": "https://api.github.com/users/langflogit/received_events", "repos_url": "https://api.github.com/users/langflogit/repos", "site_admin": false, "starred_url": "https://api.github.com/users/langflogit/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/langflogit/subscriptions", "type": "User", "url": "https://api.github.com/users/langflogit", "user_view_type": "public" }
[]
closed
false
null
[]
null
[ "You can probably copy the dataset in your HF account and rename the files (without having to download them to your disk). Or alternatively feel free to open a Pull Request to this dataset with the renamed file", "Thank you, that will help me work around this problem" ]
2025-02-10T17:46:31Z
2025-02-11T13:42:32Z
2025-02-11T13:42:30Z
NONE
null
null
{ "completed": 0, "percent_completed": 0, "total": 0 }
### Describe the bug I'm on Windows and I'm trying to load a dataset, but I'm getting the error in the title because files in the repository are named with characters like < and >, which are not allowed in Windows file names. Would it be possible to load this dataset while removing those characters? ### Steps to reproduce the bug load_dataset("CATMuS/medieval") on Windows ### Expected behavior The loading function should strip or replace the forbidden characters so that datasets containing them can be loaded on Windows. ### Environment info - `datasets` version: 3.2.0 - Platform: Windows-10-10.0.19045-SP0 - Python version: 3.12.2 - `huggingface_hub` version: 0.28.1 - PyArrow version: 19.0.0 - Pandas version: 2.2.3 - `fsspec` version: 2024.9.0
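A hedged sketch of the rename-in-your-own-copy workaround suggested in the comments; the repo id and file names below are placeholders, and note that `CommitOperationCopy` only supports LFS-tracked files:

```python
from huggingface_hub import CommitOperationCopy, CommitOperationDelete, HfApi

api = HfApi()
repo_id = "your-username/medieval-windows-safe"  # hypothetical copy you can write to

bad_name = "data/page<1>.jpg"   # placeholder for an offending file name
safe_name = "data/page_1_.jpg"  # same file under a Windows-safe name

# Rename server-side: copy to the new path, then delete the old one,
# so nothing has to be created on a Windows filesystem first.
api.create_commit(
    repo_id=repo_id,
    repo_type="dataset",
    operations=[
        CommitOperationCopy(src_path_in_repo=bad_name, path_in_repo=safe_name),
        CommitOperationDelete(path_in_repo=bad_name),
    ],
    commit_message="Rename files with characters forbidden on Windows",
)
```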
{ "avatar_url": "https://avatars.githubusercontent.com/u/124634542?v=4", "events_url": "https://api.github.com/users/langflogit/events{/privacy}", "followers_url": "https://api.github.com/users/langflogit/followers", "following_url": "https://api.github.com/users/langflogit/following{/other_user}", "gists_url": "https://api.github.com/users/langflogit/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/langflogit", "id": 124634542, "login": "langflogit", "node_id": "U_kgDOB23Frg", "organizations_url": "https://api.github.com/users/langflogit/orgs", "received_events_url": "https://api.github.com/users/langflogit/received_events", "repos_url": "https://api.github.com/users/langflogit/repos", "site_admin": false, "starred_url": "https://api.github.com/users/langflogit/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/langflogit/subscriptions", "type": "User", "url": "https://api.github.com/users/langflogit", "user_view_type": "public" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/7388/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/7388/timeline
null
completed
null
null
19.933056
312
https://api.github.com/repos/huggingface/datasets/issues/7386
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/7386/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/7386/comments
https://api.github.com/repos/huggingface/datasets/issues/7386/events
https://github.com/huggingface/datasets/issues/7386
2,840,032,524
I_kwDODunzps6pR3UM
7,386
Add bookfolder Dataset Builder for Digital Book Formats
{ "avatar_url": "https://avatars.githubusercontent.com/u/22115108?v=4", "events_url": "https://api.github.com/users/shikanime/events{/privacy}", "followers_url": "https://api.github.com/users/shikanime/followers", "following_url": "https://api.github.com/users/shikanime/following{/other_user}", "gists_url": "https://api.github.com/users/shikanime/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/shikanime", "id": 22115108, "login": "shikanime", "node_id": "MDQ6VXNlcjIyMTE1MTA4", "organizations_url": "https://api.github.com/users/shikanime/orgs", "received_events_url": "https://api.github.com/users/shikanime/received_events", "repos_url": "https://api.github.com/users/shikanime/repos", "site_admin": false, "starred_url": "https://api.github.com/users/shikanime/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/shikanime/subscriptions", "type": "User", "url": "https://api.github.com/users/shikanime", "user_view_type": "public" }
[ { "color": "a2eeef", "default": true, "description": "New feature or request", "id": 1935892871, "name": "enhancement", "node_id": "MDU6TGFiZWwxOTM1ODkyODcx", "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement" } ]
closed
false
null
[]
null
[ "On second thought, probably not a good idea." ]
2025-02-08T14:27:55Z
2025-02-08T14:30:10Z
2025-02-08T14:30:09Z
NONE
null
null
{ "completed": 0, "percent_completed": 0, "total": 0 }
### Feature request This feature proposes adding a new dataset builder called bookfolder to the datasets library. This builder would let users easily load datasets consisting of various digital book formats, including AZW, AZW3, CB7, CBR, CBT, CBZ, EPUB, MOBI, and PDF. ### Motivation Currently, loading datasets made up of these digital book files requires manual effort. A builder would lower the barrier to entry for working with these formats, enabling more diverse and interesting datasets to be used within the Hugging Face ecosystem. ### Your contribution This feature is rather simple to implement, as it would be based on the folder-based builder, similar to imagefolder. I'm willing to contribute to this feature by submitting a PR.
{ "avatar_url": "https://avatars.githubusercontent.com/u/22115108?v=4", "events_url": "https://api.github.com/users/shikanime/events{/privacy}", "followers_url": "https://api.github.com/users/shikanime/followers", "following_url": "https://api.github.com/users/shikanime/following{/other_user}", "gists_url": "https://api.github.com/users/shikanime/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/shikanime", "id": 22115108, "login": "shikanime", "node_id": "MDQ6VXNlcjIyMTE1MTA4", "organizations_url": "https://api.github.com/users/shikanime/orgs", "received_events_url": "https://api.github.com/users/shikanime/received_events", "repos_url": "https://api.github.com/users/shikanime/repos", "site_admin": false, "starred_url": "https://api.github.com/users/shikanime/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/shikanime/subscriptions", "type": "User", "url": "https://api.github.com/users/shikanime", "user_view_type": "public" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/7386/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/7386/timeline
null
completed
null
null
0.037222
314
https://api.github.com/repos/huggingface/datasets/issues/7381
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/7381/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/7381/comments
https://api.github.com/repos/huggingface/datasets/issues/7381/events
https://github.com/huggingface/datasets/issues/7381
2,815,649,092
I_kwDODunzps6n02VE
7,381
Iterating over values of a column in the IterableDataset
{ "avatar_url": "https://avatars.githubusercontent.com/u/47208659?v=4", "events_url": "https://api.github.com/users/TopCoder2K/events{/privacy}", "followers_url": "https://api.github.com/users/TopCoder2K/followers", "following_url": "https://api.github.com/users/TopCoder2K/following{/other_user}", "gists_url": "https://api.github.com/users/TopCoder2K/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/TopCoder2K", "id": 47208659, "login": "TopCoder2K", "node_id": "MDQ6VXNlcjQ3MjA4NjU5", "organizations_url": "https://api.github.com/users/TopCoder2K/orgs", "received_events_url": "https://api.github.com/users/TopCoder2K/received_events", "repos_url": "https://api.github.com/users/TopCoder2K/repos", "site_admin": false, "starred_url": "https://api.github.com/users/TopCoder2K/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/TopCoder2K/subscriptions", "type": "User", "url": "https://api.github.com/users/TopCoder2K", "user_view_type": "public" }
[ { "color": "a2eeef", "default": true, "description": "New feature or request", "id": 1935892871, "name": "enhancement", "node_id": "MDU6TGFiZWwxOTM1ODkyODcx", "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement" } ]
closed
false
{ "avatar_url": "https://avatars.githubusercontent.com/u/47208659?v=4", "events_url": "https://api.github.com/users/TopCoder2K/events{/privacy}", "followers_url": "https://api.github.com/users/TopCoder2K/followers", "following_url": "https://api.github.com/users/TopCoder2K/following{/other_user}", "gists_url": "https://api.github.com/users/TopCoder2K/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/TopCoder2K", "id": 47208659, "login": "TopCoder2K", "node_id": "MDQ6VXNlcjQ3MjA4NjU5", "organizations_url": "https://api.github.com/users/TopCoder2K/orgs", "received_events_url": "https://api.github.com/users/TopCoder2K/received_events", "repos_url": "https://api.github.com/users/TopCoder2K/repos", "site_admin": false, "starred_url": "https://api.github.com/users/TopCoder2K/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/TopCoder2K/subscriptions", "type": "User", "url": "https://api.github.com/users/TopCoder2K", "user_view_type": "public" }
[ { "avatar_url": "https://avatars.githubusercontent.com/u/47208659?v=4", "events_url": "https://api.github.com/users/TopCoder2K/events{/privacy}", "followers_url": "https://api.github.com/users/TopCoder2K/followers", "following_url": "https://api.github.com/users/TopCoder2K/following{/other_user}", "gists_url": "https://api.github.com/users/TopCoder2K/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/TopCoder2K", "id": 47208659, "login": "TopCoder2K", "node_id": "MDQ6VXNlcjQ3MjA4NjU5", "organizations_url": "https://api.github.com/users/TopCoder2K/orgs", "received_events_url": "https://api.github.com/users/TopCoder2K/received_events", "repos_url": "https://api.github.com/users/TopCoder2K/repos", "site_admin": false, "starred_url": "https://api.github.com/users/TopCoder2K/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/TopCoder2K/subscriptions", "type": "User", "url": "https://api.github.com/users/TopCoder2K", "user_view_type": "public" } ]
null
[ "I'd be in favor of that ! I saw many people implementing their own iterables that wrap a dataset just to iterate on a single column, that would make things more practical.\n\nKinda related: https://github.com/huggingface/datasets/issues/5847", "(For anyone's information, I'm going on vacation for the next 3 weeks, so the work is postponed. If anyone can implement this feature within the next 4 weeks, go ahead :) )\n\nUPD from 04/06/25:\nI'm planning to start work on the feature in early May.", "#self-assign", "# Preliminary discussion\n\nIdeally, I would like to be able to operate on a column with [map](https://huggingface.co/docs/datasets/package_reference/main_classes#datasets.IterableDataset.map), [filter](https://huggingface.co/docs/datasets/package_reference/main_classes#datasets.IterableDataset.filter), [batch](https://huggingface.co/docs/datasets/package_reference/main_classes#datasets.IterableDataset.batch) and probably some other `IterableDataset`'s methods, however, the same results can be achieved by using the methods on an `IterableDataset` object and utilizing `__getitem__()` afterwards. Thus, one may not support these methods at first and try to make the implementation as simple as possible.\n\n# Implementation\n\nBased on the preliminary discussion, one can do the following:\n```python\nclass IterableColumn:\n def __init__(self, dataset: \"IterableDataset\", column_name: str):\n self.dataset = dataset\n self.column_name = column_name\n\n def __iter__(self) -> Iterator[Any]:\n for example in self.dataset:\n yield example[self.column_name]\n\n\nclass IterableDataset(DatasetInfoMixin):\n ...\n def __getitem__(self, column_name: str) -> IterableColumn:\n return IterableColumn(self, column_name)\n ...\n```\n\n# Testing\n\nIt works as expected in our simple test:\n```python\ndef gen():\n yield {\"text\": \"Good\", \"label\": 0}\n yield {\"text\": \"Bad\", \"label\": 1}\n\nds = IterableDataset.from_generator(gen)\n\ntexts = ds[\"text\"] # `texts` is an IterableColumn object\nfor v in texts:\n print(v) # Prints \"Good\" and \"Bad\"\nfor v in texts:\n print(v) # Prints \"Good\" and \"Bad\" again\n```\n\n# Questions\n\n1. What do you think about the implementation, @lhoestq?\n2. How to properly test the implementation? I've found [test_iterable_dataset.py](https://github.com/huggingface/datasets/blob/main/tests/test_iterable_dataset.py) but 1) I haven't found any guidelines for testing, 2) the script tests a lot of things while I'd like to test only my feature.", "Sounds great !\n\nRegarding testing, it's actually possible to have your test function in test_iterable_dataset.py, which you can run using\n\n```python\npytest tests/test_iterable_dataset.py::my_function\n```", "> Regarding testing, it's actually possible to have your test function in test_iterable_dataset.py, which you can run using\n\nI hoped not to run `pip install -e \".[dev]\"`, but your answer implies that I should. The problem is that I was unable to install the dependencies with Python 3.13 due to `tensorflow` and with Python 3.11-3.12 due to \"there are no versions of pyav\" [¬º-°]¬ Therefore, I had to test in a separate script file to avoid importing optional dependencies. Anyway, I've opened a PR: https://github.com/huggingface/datasets/pull/7564. Please, take a look (there are questions about the documentation).\n\nMoreover, I want to note that `make style` and `pre-commit` give different results for `test_iterable_dataset.py` (and a couple of files). 
Example:\n```python\n assert skip_ex_iterable.shuffle_data_sources(np.random.default_rng(42)) is skip_ex_iterable, (\n \"skip examples makes the shards order fixed\"\n )\n```\nvs\n```python\n assert (\n skip_ex_iterable.shuffle_data_sources(np.random.default_rng(42)) is skip_ex_iterable\n ), \"skip examples makes the shards order fixed\"\n```\n ¯\\\\_(ツ)_/¯\n\n> Kinda related: https://github.com/huggingface/datasets/issues/5847\n\nI had forgotten about this, but I've looked at it by now. [This comment](https://github.com/huggingface/datasets/issues/5847#issuecomment-1549799951) implies that `IterableColumn` should support chained indexing, so thank you for pointing this out! Did you mean anything else by referencing the issue?", "> I hoped not to run pip install -e \".[dev]\", but your answer implies that I should. The problem is that I was unable to install the dependencies with Python 3.13 due to `tensorflow` and with Python 3.11-3.12 due to \"there are no versions of pyav\" [¬º-°]¬ Therefore, I had to test in a separate script file to avoid importing optional dependencies. Anyway, I've opened a PR: https://github.com/huggingface/datasets/pull/7564. Please take a look (there are questions about the documentation).\n\nwe try to not require optional dependencies when running tests, so you can try running the tests only with `pytest`, `pytest-datadir` and `pytest-xdist`\n\n> I had forgotten about this, but I've looked at it by now. https://github.com/huggingface/datasets/issues/5847#issuecomment-1549799951 implies that IterableColumn should support chained indexing, so thank you for pointing this out! Did you mean anything else by referencing the issue?\n\nNo I simply referenced the issue because it will enable `pipe(ds[\"column_name\"])`, but no need to support nested fields access in a first step - we can see that later as it's uncommon and would add complexity to the contribution", "> we try to not require optional dependencies when running tests, so you can try running the tests only with `pytest`, `pytest-datadir` and `pytest-xdist`\n\nUnderstood. If it's necessary to run the tests again, I'll try to install only the mentioned libraries, thank you!\n\n> No I simply referenced the issue because it will enable pipe(ds[\"column_name\"]), but no need to support nested fields access in a first step - we can see that later as it's uncommon and would add complexity to the contribution\n\nAh, I see. Anyway, I've already implemented chained indexing (it was relatively easy).\n\n@lhoestq, could you please take a look at the PR and answer [questions](https://github.com/huggingface/datasets/pull/7564#issuecomment-2863391781) there?", "> so you can try running the tests only with pytest, pytest-datadir and pytest-xdist\n\nYes, they are sufficient. There was one more problem with Python 3.12 and `distutils`, which was removed, but I just downgraded to 3.11 and successfully ran `test_iterable_dataset.py`.", "@lhoestq, could you write in the [discussion](https://discuss.huggingface.co/t/how-to-iterate-over-values-of-a-column-in-the-iterabledataset/135649) for people coming there from the Internet that the feature has been implemented? I could do it myself but the topic is closed to me.", "done, thank you !" ]
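The chained indexing discussed in the thread above (e.g. `ds["a"]["b"]` for nested fields) follows naturally from the `IterableColumn` design. A minimal sketch, assuming the same names as the draft PR; the actual `datasets` implementation may differ:

```python
from typing import Any, Iterable, Iterator


class IterableColumn:
    """Lazily yields one field from each example of an iterable source."""

    def __init__(self, source: Iterable[Any], column_name: str):
        self.source = source
        self.column_name = column_name

    def __iter__(self) -> Iterator[Any]:
        # A fresh iterator over the source is created on each call
        for example in self.source:
            yield example[self.column_name]

    def __getitem__(self, column_name: str) -> "IterableColumn":
        # Chained indexing: ds["a"]["b"] wraps the column in another column view
        return IterableColumn(self, column_name)
```

Because `__iter__` walks the source afresh on every call, the column can be consumed multiple times, which is the re-iterability requested in the issue.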
2025-01-28T13:17:36Z
2025-05-22T18:00:04Z
2025-05-22T18:00:04Z
CONTRIBUTOR
null
null
{ "completed": 0, "percent_completed": 0, "total": 0 }
### Feature request I would like to be able to iterate (and re-iterate if needed) over a column of an `IterableDataset` instance. The following example shows the supposed API: ```python def gen(): yield {"text": "Good", "label": 0} yield {"text": "Bad", "label": 1} ds = IterableDataset.from_generator(gen) texts = ds["text"] for v in texts: print(v) # Prints "Good" and "Bad" for v in texts: print(v) # Prints "Good" and "Bad" again ``` ### Motivation In the real world problems, huge NNs like Transformer are not always the best option, so there is a need to conduct experiments with different methods. While 🤗Datasets is perfectly adapted to 🤗Transformers, it may be inconvenient when being used with other libraries. The ability to retrieve a particular column is the case (e.g., gensim's FastText [requires](https://radimrehurek.com/gensim/models/fasttext.html#gensim.models.fasttext.FastText.train) only lists of strings, not dictionaries). While there are ways to achieve the desired functionality, they are not good ([forum](https://discuss.huggingface.co/t/how-to-iterate-over-values-of-a-column-in-the-iterabledataset/135649)). It would be great if there was a built-in solution. ### Your contribution Theoretically, I can submit a PR, but I have very little knowledge of the internal structure of 🤗Datasets, so some help may be needed. Moreover, I can only work on weekends, since I have a full-time job. However, the feature does not seem to be popular, so there is no need to implement it as fast as possible.
{ "avatar_url": "https://avatars.githubusercontent.com/u/47208659?v=4", "events_url": "https://api.github.com/users/TopCoder2K/events{/privacy}", "followers_url": "https://api.github.com/users/TopCoder2K/followers", "following_url": "https://api.github.com/users/TopCoder2K/following{/other_user}", "gists_url": "https://api.github.com/users/TopCoder2K/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/TopCoder2K", "id": 47208659, "login": "TopCoder2K", "node_id": "MDQ6VXNlcjQ3MjA4NjU5", "organizations_url": "https://api.github.com/users/TopCoder2K/orgs", "received_events_url": "https://api.github.com/users/TopCoder2K/received_events", "repos_url": "https://api.github.com/users/TopCoder2K/repos", "site_admin": false, "starred_url": "https://api.github.com/users/TopCoder2K/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/TopCoder2K/subscriptions", "type": "User", "url": "https://api.github.com/users/TopCoder2K", "user_view_type": "public" }
{ "+1": 1, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 1, "url": "https://api.github.com/repos/huggingface/datasets/issues/7381/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/7381/timeline
null
completed
null
null
2,740.707778
318
https://api.github.com/repos/huggingface/datasets/issues/7364
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/7364/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/7364/comments
https://api.github.com/repos/huggingface/datasets/issues/7364/events
https://github.com/huggingface/datasets/issues/7364
2,776,929,268
I_kwDODunzps6lhJP0
7,364
API endpoints for gated dataset access requests
{ "avatar_url": "https://avatars.githubusercontent.com/u/6140840?v=4", "events_url": "https://api.github.com/users/jerome-white/events{/privacy}", "followers_url": "https://api.github.com/users/jerome-white/followers", "following_url": "https://api.github.com/users/jerome-white/following{/other_user}", "gists_url": "https://api.github.com/users/jerome-white/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/jerome-white", "id": 6140840, "login": "jerome-white", "node_id": "MDQ6VXNlcjYxNDA4NDA=", "organizations_url": "https://api.github.com/users/jerome-white/orgs", "received_events_url": "https://api.github.com/users/jerome-white/received_events", "repos_url": "https://api.github.com/users/jerome-white/repos", "site_admin": false, "starred_url": "https://api.github.com/users/jerome-white/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/jerome-white/subscriptions", "type": "User", "url": "https://api.github.com/users/jerome-white", "user_view_type": "public" }
[ { "color": "a2eeef", "default": true, "description": "New feature or request", "id": 1935892871, "name": "enhancement", "node_id": "MDU6TGFiZWwxOTM1ODkyODcx", "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement" } ]
closed
false
null
[]
null
[ "Looks like a [similar feature request](https://github.com/huggingface/huggingface_hub/issues/1198) was made to the HF Hub team. Is handling this at the Hub level more appropriate?\r\n\r\n(As an aside, I've gotten the [HTTP-based solution](https://github.com/huggingface/huggingface_hub/issues/1198#issuecomment-1905774983) proposed in that forum to work for simple cases.)", "yes it's more for https://github.com/huggingface/huggingface_hub cc @hanouticelina ", "yes i think @Wauplin's comment on that thread is still what we recommend" ]
2025-01-09T06:21:20Z
2025-01-09T11:17:40Z
2025-01-09T11:17:20Z
NONE
null
null
{ "completed": 0, "percent_completed": 0, "total": 0 }
### Feature request I would like a programatic way of requesting access to gated datasets. The current solution to gain access forces me to visit a website and physically click an "agreement" button (as per the [documentation](https://huggingface.co/docs/hub/en/datasets-gated#access-gated-datasets-as-a-user)). An ideal approach would be HF API download methods that negotiate access on my behalf based on information from my CLI login and/or token. I realise that may be naive given the various types of access semantics available to dataset authors (automatic versus manual approval, for example) and complexities it might add to existing methods, but something along those lines would be nice. Perhaps using the `*_access_request` methods available to dataset authors can be a precedent; see [`reject_access_request`](https://huggingface.co/docs/huggingface_hub/main/en/package_reference/hf_api#huggingface_hub.HfApi.reject_access_request) for example. ### Motivation When trying to download files from a gated dataset, I'm met with a `GatedRepoError` and instructed to visit the repository's website to gain access: ``` Cannot access gated repo for url https://huggingface.co/datasets/open-llm-leaderboard/meta-llama__Meta-Llama-3.1-70B-Instruct-details/resolve/main/meta-llama__Meta-Llama-3.1-70B-Instruct/samples_leaderboard_math_precalculus_hard_2024-07-19T18-47-29.522341.jsonl. Access to dataset open-llm-leaderboard/meta-llama__Meta-Llama-3.1-70B-Instruct-details is restricted and you are not in the authorized list. Visit https://huggingface.co/datasets/open-llm-leaderboard/meta-llama__Meta-Llama-3.1-70B-Instruct-details to ask for access. ``` This makes task automation extremely difficult. For example, I'm interested in studying sample-level responses of models on the LLM leaderboard -- how they answered particular questions on a given evaluation framework. As I come across more and more participants that gate their data, it's becoming unwieldy to continue my work (there over 2,000 participants, so in the worst case that's the number of website visits I'd need to manually undertake). One approach is use Selenium to react to the `GatedRepoError`, but that seems like overkill; and a potential violation HF terms of service (?). As mentioned in the previous section, there seems to be an [API for gated dataset owners](https://huggingface.co/docs/hub/en/datasets-gated#via-the-api) to managed access requests, and thus some appetite for allowing automated management of gating. This feature request is to extend that to dataset users. ### Your contribution Whether I can help depends on a few things; one being the complexity of the underlying gated access design. If this feature request is accepted I am open to being involved in discussions and testing, and even development under the right time-outcome tradeoff.
{ "avatar_url": "https://avatars.githubusercontent.com/u/6140840?v=4", "events_url": "https://api.github.com/users/jerome-white/events{/privacy}", "followers_url": "https://api.github.com/users/jerome-white/followers", "following_url": "https://api.github.com/users/jerome-white/following{/other_user}", "gists_url": "https://api.github.com/users/jerome-white/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/jerome-white", "id": 6140840, "login": "jerome-white", "node_id": "MDQ6VXNlcjYxNDA4NDA=", "organizations_url": "https://api.github.com/users/jerome-white/orgs", "received_events_url": "https://api.github.com/users/jerome-white/received_events", "repos_url": "https://api.github.com/users/jerome-white/repos", "site_admin": false, "starred_url": "https://api.github.com/users/jerome-white/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/jerome-white/subscriptions", "type": "User", "url": "https://api.github.com/users/jerome-white", "user_view_type": "public" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/7364/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/7364/timeline
null
not_planned
null
null
4.933333
333
https://api.github.com/repos/huggingface/datasets/issues/7362
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/7362/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/7362/comments
https://api.github.com/repos/huggingface/datasets/issues/7362/events
https://github.com/huggingface/datasets/issues/7362
2,773,731,829
I_kwDODunzps6lU8n1
7,362
HuggingFace CLI dataset download raises error
{ "avatar_url": "https://avatars.githubusercontent.com/u/3870355?v=4", "events_url": "https://api.github.com/users/ajayvohra2005/events{/privacy}", "followers_url": "https://api.github.com/users/ajayvohra2005/followers", "following_url": "https://api.github.com/users/ajayvohra2005/following{/other_user}", "gists_url": "https://api.github.com/users/ajayvohra2005/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/ajayvohra2005", "id": 3870355, "login": "ajayvohra2005", "node_id": "MDQ6VXNlcjM4NzAzNTU=", "organizations_url": "https://api.github.com/users/ajayvohra2005/orgs", "received_events_url": "https://api.github.com/users/ajayvohra2005/received_events", "repos_url": "https://api.github.com/users/ajayvohra2005/repos", "site_admin": false, "starred_url": "https://api.github.com/users/ajayvohra2005/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ajayvohra2005/subscriptions", "type": "User", "url": "https://api.github.com/users/ajayvohra2005", "user_view_type": "public" }
[]
closed
false
null
[]
null
[ "I got the same error and was able to resolve it by upgrading from 2.15.0 to 3.2.0.", "> I got the same error and was able to resolve it by upgrading from 2.15.0 to 3.2.0.\r\n\r\nWhat is needed is upgrading `huggingface-hub==0.27.1`. `datasets` does not appear to have anything to do with the error. The upgrade is a workaround, if the workaround works for your use case. Otherwise, this issue breaks all existing Python clients not using some minimum version of `huggingface-hub`. ", "Correct, this has to do with `huggingface_hub`, not `datasets`. Some old versions of `huggingface_hub` are unfortunately not robust to recent changes on HF. Updating `huggingface_hub` fixes the issue :)\r\n\r\nClosing this issue since it's not directly related to `datasets`" ]
2025-01-07T21:03:30Z
2025-01-08T15:00:37Z
2025-01-08T14:35:52Z
NONE
null
null
{ "completed": 0, "percent_completed": 0, "total": 0 }
### Describe the bug Trying to download Hugging Face datasets using the Hugging Face CLI raises an error. This error only started after December 27th, 2024. For example: ``` huggingface-cli download --repo-type dataset gboleda/wikicorpus Traceback (most recent call last): File "/home/ubuntu/test_venv/bin/huggingface-cli", line 8, in <module> sys.exit(main()) File "/home/ubuntu/test_venv/lib/python3.10/site-packages/huggingface_hub/commands/huggingface_cli.py", line 51, in main service.run() File "/home/ubuntu/test_venv/lib/python3.10/site-packages/huggingface_hub/commands/download.py", line 146, in run print(self._download()) # Print path to downloaded files File "/home/ubuntu/test_venv/lib/python3.10/site-packages/huggingface_hub/commands/download.py", line 180, in _download return snapshot_download( File "/home/ubuntu/test_venv/lib/python3.10/site-packages/huggingface_hub/utils/_validators.py", line 114, in _inner_fn return fn(*args, **kwargs) File "/home/ubuntu/test_venv/lib/python3.10/site-packages/huggingface_hub/_snapshot_download.py", line 164, in snapshot_download repo_info = api.repo_info(repo_id=repo_id, repo_type=repo_type, revision=revision, token=token) File "/home/ubuntu/test_venv/lib/python3.10/site-packages/huggingface_hub/utils/_validators.py", line 114, in _inner_fn return fn(*args, **kwargs) File "/home/ubuntu/test_venv/lib/python3.10/site-packages/huggingface_hub/hf_api.py", line 2491, in repo_info return method( File "/home/ubuntu/test_venv/lib/python3.10/site-packages/huggingface_hub/utils/_validators.py", line 114, in _inner_fn return fn(*args, **kwargs) File "/home/ubuntu/test_venv/lib/python3.10/site-packages/huggingface_hub/hf_api.py", line 2366, in dataset_info return DatasetInfo(**data) File "/home/ubuntu/test_venv/lib/python3.10/site-packages/huggingface_hub/hf_api.py", line 799, in __init__ self.tags = kwargs.pop("tags") KeyError: 'tags' ``` ### Steps to reproduce the bug ``` 1. huggingface-cli download --repo-type dataset gboleda/wikicorpus ``` ### Expected behavior There should be no error. ### Environment info - `datasets` version: 2.19.1 - Platform: Linux-6.8.0-1015-aws-x86_64-with-glibc2.35 - Python version: 3.10.12 - `huggingface_hub` version: 0.23.5 - PyArrow version: 18.1.0 - Pandas version: 2.2.3 - `fsspec` version: 2024.3.1
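For context, the traceback above ends in `kwargs.pop("tags")`, which raises `KeyError` as soon as the server response omits that field. A minimal sketch of the tolerant-parsing pattern that newer `huggingface_hub` releases effectively adopt (illustrative only, not the actual `DatasetInfo` code):

```python
class DatasetInfoSketch:
    """Illustrates tolerant parsing of an API payload with optional fields."""

    def __init__(self, **kwargs):
        # pop with a default instead of kwargs.pop("tags"), so a payload
        # without "tags" no longer raises KeyError
        self.tags = kwargs.pop("tags", None)
        self.id = kwargs.pop("id", None)
        # keep anything unrecognised around instead of failing on it
        self.extra = kwargs


info = DatasetInfoSketch(id="gboleda/wikicorpus")  # no "tags" key: no crash
print(info.tags)  # None
```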
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq", "user_view_type": "public" }
{ "+1": 3, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 3, "url": "https://api.github.com/repos/huggingface/datasets/issues/7362/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/7362/timeline
null
completed
null
null
17.539444
335
https://api.github.com/repos/huggingface/datasets/issues/7356
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/7356/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/7356/comments
https://api.github.com/repos/huggingface/datasets/issues/7356/events
https://github.com/huggingface/datasets/issues/7356
2,770,095,103
I_kwDODunzps6lHEv_
7,356
How about adding a feature to pass the key when performing map on DatasetDict?
{ "avatar_url": "https://avatars.githubusercontent.com/u/93233241?v=4", "events_url": "https://api.github.com/users/jp1924/events{/privacy}", "followers_url": "https://api.github.com/users/jp1924/followers", "following_url": "https://api.github.com/users/jp1924/following{/other_user}", "gists_url": "https://api.github.com/users/jp1924/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/jp1924", "id": 93233241, "login": "jp1924", "node_id": "U_kgDOBY6gWQ", "organizations_url": "https://api.github.com/users/jp1924/orgs", "received_events_url": "https://api.github.com/users/jp1924/received_events", "repos_url": "https://api.github.com/users/jp1924/repos", "site_admin": false, "starred_url": "https://api.github.com/users/jp1924/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/jp1924/subscriptions", "type": "User", "url": "https://api.github.com/users/jp1924", "user_view_type": "public" }
[ { "color": "a2eeef", "default": true, "description": "New feature or request", "id": 1935892871, "name": "enhancement", "node_id": "MDU6TGFiZWwxOTM1ODkyODcx", "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement" } ]
closed
false
null
[]
null
[ "@lhoestq \r\nIf it's okay with you, can I work on this?", "Hi ! Can you give an example of what it would look like to use this new feature ?\r\n\r\nNote that currently you can already do\r\n\r\n```python\r\nds[\"train\"] = ds[\"train\"].map(process_train)\r\nds[\"test\"] = ds[\"test\"].map(process_test)\r\n```", "@lhoestq \nThanks for the response! \nLet me clarify what I'm looking for with an example:\n\nCurrently, we need to write separate processing functions or call .map() separately:\n```python\n# Current approach\ndef process_train(example):\n # Training-specific processing\n return example\n\ndef process_valid(example):\n # Validation-specific processing\n return example\n\nds[\"train\"] = ds[\"train\"].map(process_train)\nds[\"valid\"] = ds[\"valid\"].map(process_valid)\n```\n\nWhat I'm proposing is to have a single processing function that knows which split it's processing:\n\n```python\n# Proposed feature\ndef process(example, split_key):\n if split_key == \"train\":\n # Training-specific processing\n elif split_key == \"valid\":\n # Validation-specific processing\n return example\n\n# Using with_key=True to pass the split information\nds = ds.map(process, with_key=True)\n```\n\nThis becomes particularly useful when:\n1. The processing logic is heavily shared between splits but needs minor adjustments\n2. You want to maintain the processing logic in one place for better maintainability\n3. The processing function is complex and you want to avoid duplicating code\n\nSo I wanted to request this feature to achieve this kind of functionality. \nI've created a draft PR implementing this: https://github.com/huggingface/datasets/pull/7240/files\n", "I see ! I think it makes sense, and it's more readable than doing something like this:\r\n```python\r\nfrom functools import partial\r\nds = DatasetDict({key: ds[key].map(partial(process, split_key=key)) for key in ds})\r\n```\r\n\r\nPS: you named the argument `with_key` in your example, but it might be even clearer with it's named `with_split` maybe no ?", "@lhoestq I agree. \nIt seems better to use `with_split`.\nSo can I open a PR with this change?", "Sure !" ]
2025-01-06T08:13:52Z
2025-03-24T10:57:47Z
2025-03-24T10:57:47Z
CONTRIBUTOR
null
null
{ "completed": 0, "percent_completed": 0, "total": 0 }
### Feature request Add a feature to pass the key of the DatasetDict when performing map ### Motivation I often preprocess using map on DatasetDict. Sometimes, I need to preprocess train and valid data differently depending on the task. So, I thought it would be nice to pass the key (like train, valid) when performing map on DatasetDict. What do you think? ### Your contribution I can submit a pull request to add the feature to pass the key of the DatasetDict when performing map.
{ "avatar_url": "https://avatars.githubusercontent.com/u/93233241?v=4", "events_url": "https://api.github.com/users/jp1924/events{/privacy}", "followers_url": "https://api.github.com/users/jp1924/followers", "following_url": "https://api.github.com/users/jp1924/following{/other_user}", "gists_url": "https://api.github.com/users/jp1924/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/jp1924", "id": 93233241, "login": "jp1924", "node_id": "U_kgDOBY6gWQ", "organizations_url": "https://api.github.com/users/jp1924/orgs", "received_events_url": "https://api.github.com/users/jp1924/received_events", "repos_url": "https://api.github.com/users/jp1924/repos", "site_admin": false, "starred_url": "https://api.github.com/users/jp1924/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/jp1924/subscriptions", "type": "User", "url": "https://api.github.com/users/jp1924", "user_view_type": "public" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/7356/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/7356/timeline
null
completed
null
null
1,850.731944
341
https://api.github.com/repos/huggingface/datasets/issues/7354
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/7354/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/7354/comments
https://api.github.com/repos/huggingface/datasets/issues/7354/events
https://github.com/huggingface/datasets/issues/7354
2,768,955,917
I_kwDODunzps6lCuoN
7,354
A module that was compiled using NumPy 1.x cannot be run in NumPy 2.0.2 as it may crash. To support both 1.x and 2.x versions of NumPy, modules must be compiled with NumPy 2.0. Some module may need to rebuild instead e.g. with 'pybind11>=2.12'.
{ "avatar_url": "https://avatars.githubusercontent.com/u/1394644?v=4", "events_url": "https://api.github.com/users/jamessdixon/events{/privacy}", "followers_url": "https://api.github.com/users/jamessdixon/followers", "following_url": "https://api.github.com/users/jamessdixon/following{/other_user}", "gists_url": "https://api.github.com/users/jamessdixon/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/jamessdixon", "id": 1394644, "login": "jamessdixon", "node_id": "MDQ6VXNlcjEzOTQ2NDQ=", "organizations_url": "https://api.github.com/users/jamessdixon/orgs", "received_events_url": "https://api.github.com/users/jamessdixon/received_events", "repos_url": "https://api.github.com/users/jamessdixon/repos", "site_admin": false, "starred_url": "https://api.github.com/users/jamessdixon/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/jamessdixon/subscriptions", "type": "User", "url": "https://api.github.com/users/jamessdixon", "user_view_type": "public" }
[]
closed
false
null
[]
null
[ "recreated .venv and run this: pip install diffusers[training]==0.11.1" ]
2025-01-04T18:30:17Z
2025-01-08T02:20:58Z
2025-01-08T02:20:58Z
NONE
null
null
{ "completed": 0, "percent_completed": 0, "total": 0 }
### Describe the bug I am following this tutorial (https://huggingface.co/docs/diffusers/en/tutorials/basic_training) and running it locally in VSCode on my MacBook. The first lines of the tutorial fail: `from datasets import load_dataset` followed by `dataset = load_dataset('huggan/smithsonian_butterflies_subset', split="train")` raises this error: A module that was compiled using NumPy 1.x cannot be run in NumPy 2.0.2 as it may crash. To support both 1.x and 2.x versions of NumPy, modules must be compiled with NumPy 2.0. Some module may need to rebuild instead e.g. with 'pybind11>=2.12'. If you are a user of the module, the easiest solution will be to downgrade to 'numpy<2' or try to upgrade the affected module. We expect that some modules will need time to support NumPy 2. and ImportError: numpy.core.multiarray failed to import. Does `from datasets import load_dataset` really require NumPy 1.x? ### Steps to reproduce the bug Open VSCode. Create a new venv. Create a new ipynb file. Run `pip install diffusers[training]`, then try to run this line of code: `from datasets import load_dataset` ### Expected behavior The data is loaded. ### Environment info Running `datasets-cli env` produced the same error: A module that was compiled using NumPy 1.x cannot be run in NumPy 2.0.2 as it may crash. To support both 1.x and 2.x versions of NumPy, modules must be compiled with NumPy 2.0. Some module may need to rebuild instead e.g. with 'pybind11>=2.12'. If you are a user of the module, the easiest solution will be to downgrade to 'numpy<2' or try to upgrade the affected module. We expect that some modules will need time to support NumPy 2.
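Until every compiled dependency ships NumPy 2 wheels, the workaround suggested by the error message itself is to keep NumPy below 2 in the venv. A small sanity check along those lines (this is the generic `numpy<2` advice from the message, not something specific to diffusers):

```python
import numpy as np

major = int(np.__version__.split(".")[0])
if major >= 2:
    raise RuntimeError(
        "A dependency here was compiled against NumPy 1.x; "
        "run `pip install 'numpy<2'` in this venv and retry."
    )

from datasets import load_dataset  # should now import cleanly under NumPy 1.x
```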
{ "avatar_url": "https://avatars.githubusercontent.com/u/1394644?v=4", "events_url": "https://api.github.com/users/jamessdixon/events{/privacy}", "followers_url": "https://api.github.com/users/jamessdixon/followers", "following_url": "https://api.github.com/users/jamessdixon/following{/other_user}", "gists_url": "https://api.github.com/users/jamessdixon/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/jamessdixon", "id": 1394644, "login": "jamessdixon", "node_id": "MDQ6VXNlcjEzOTQ2NDQ=", "organizations_url": "https://api.github.com/users/jamessdixon/orgs", "received_events_url": "https://api.github.com/users/jamessdixon/received_events", "repos_url": "https://api.github.com/users/jamessdixon/repos", "site_admin": false, "starred_url": "https://api.github.com/users/jamessdixon/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/jamessdixon/subscriptions", "type": "User", "url": "https://api.github.com/users/jamessdixon", "user_view_type": "public" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/7354/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/7354/timeline
null
completed
null
null
79.844722
343
https://api.github.com/repos/huggingface/datasets/issues/7347
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/7347/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/7347/comments
https://api.github.com/repos/huggingface/datasets/issues/7347/events
https://github.com/huggingface/datasets/issues/7347
2,760,282,339
I_kwDODunzps6khpDj
7,347
Converting Arrow to WebDataset TAR Format for Offline Use
{ "avatar_url": "https://avatars.githubusercontent.com/u/91370128?v=4", "events_url": "https://api.github.com/users/katie312/events{/privacy}", "followers_url": "https://api.github.com/users/katie312/followers", "following_url": "https://api.github.com/users/katie312/following{/other_user}", "gists_url": "https://api.github.com/users/katie312/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/katie312", "id": 91370128, "login": "katie312", "node_id": "MDQ6VXNlcjkxMzcwMTI4", "organizations_url": "https://api.github.com/users/katie312/orgs", "received_events_url": "https://api.github.com/users/katie312/received_events", "repos_url": "https://api.github.com/users/katie312/repos", "site_admin": false, "starred_url": "https://api.github.com/users/katie312/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/katie312/subscriptions", "type": "User", "url": "https://api.github.com/users/katie312", "user_view_type": "public" }
[ { "color": "a2eeef", "default": true, "description": "New feature or request", "id": 1935892871, "name": "enhancement", "node_id": "MDU6TGFiZWwxOTM1ODkyODcx", "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement" } ]
closed
false
null
[]
null
[ "Hi,\r\n\r\nI've downloaded an Arrow-formatted dataset offline using the hugggingface's datasets library by:\r\n\r\nimport json\r\nfrom datasets import load_dataset\r\n\r\ndataset = load_dataset(\"pixparse/cc3m-wds\")\r\ndataset.save_to_disk(\"./cc3m_1\")\r\n\r\n\r\nnow I need to convert it to WebDataset's TAR format for offline data ingestion.\r\nIs there a straightforward method to achieve this conversion without an internet connection? Can I simply convert it by\r\n\r\ntar -cvf\r\n\r\n\r\nbtw, when I tried:\r\n\r\nimport webdataset as wds\r\nfrom huggingface_hub import get_token\r\nfrom torch.utils.data import DataLoader\r\n\r\nhf_token = get_token()\r\nurl = \"https://huggingface.co/datasets/timm/imagenet-12k-wds/resolve/main/imagenet12k-train-{{0000..1023}}.tar\"\r\nurl = f\"pipe:curl -s -L {url} -H 'Authorization:Bearer {hf_token}'\"\r\ndataset = wds.WebDataset(url).decode()\r\ndataset.save_to_disk(\"./cc3m_webdataset\")\r\n\r\n\r\nerror occured:\r\n\r\nAttributeError: 'WebDataset' object has no attribute 'save_to_disk'\r\n\r\n\r\nThanks a lot!\r\n\r\nMotivation\r\n\r\nConverting Arrow to WebDataset TAR Format\r\n\r\nYour contribution\r\n\r\nNo clue yet\r\n\r\n\r\nاحصل على Outlook لـ iOS<https://aka.ms/o0ukef>\r\n________________________________\r\nمن: katie312 ***@***.***>\r\n‏‏تم الإرسال: Friday, December 27, 2024 4:41:21 AM\r\nإلى: huggingface/datasets ***@***.***>\r\nنسخة: Subscribed ***@***.***>\r\n‏‏الموضوع: [huggingface/datasets] Converting Arrow to WebDataset TAR Format for Offline Use (Issue #7347)\r\n\r\n\r\nFeature request\r\n\r\nHi,\r\n\r\nI've downloaded an Arrow-formatted dataset offline using the hugggingface's datasets library by:\r\n\r\nimport json\r\nfrom datasets import load_dataset\r\n\r\ndataset = load_dataset(\"pixparse/cc3m-wds\")\r\ndataset.save_to_disk(\"./cc3m_1\")\r\n\r\n\r\nnow I need to convert it to WebDataset's TAR format for offline data ingestion.\r\nIs there a straightforward method to achieve this conversion without an internet connection? Can I simply convert it by\r\n\r\ntar -cvf\r\n\r\n\r\nbtw, when I tried:\r\n\r\nimport webdataset as wds\r\nfrom huggingface_hub import get_token\r\nfrom torch.utils.data import DataLoader\r\n\r\nhf_token = get_token()\r\nurl = \"https://huggingface.co/datasets/timm/imagenet-12k-wds/resolve/main/imagenet12k-train-{{0000..1023}}.tar\"\r\nurl = f\"pipe:curl -s -L {url} -H 'Authorization:Bearer {hf_token}'\"\r\ndataset = wds.WebDataset(url).decode()\r\ndataset.save_to_disk(\"./cc3m_webdataset\")\r\n\r\n\r\nerror occured:\r\n\r\nAttributeError: 'WebDataset' object has no attribute 'save_to_disk'\r\n\r\n\r\nThanks a lot!\r\n\r\nMotivation\r\n\r\nConverting Arrow to WebDataset TAR Format\r\n\r\nYour contribution\r\n\r\nNo clue yet\r\n\r\n—\r\nReply to this email directly, view it on GitHub<https://github.com/huggingface/datasets/issues/7347>, or unsubscribe<https://github.com/notifications/unsubscribe-auth/AQJDZ2X2RUIIULBJEF5R2HL2HSV4DAVCNFSM6AAAAABUH5QSLCVHI2DSMVQWIX3LMV43ASLTON2WKOZSG43DAMRYGIZTGOI>.\r\nYou are receiving this because you are subscribed to this thread.Message ID: ***@***.***>\r\n", "> now I need to convert it to WebDataset's TAR format for offline data ingestion.\r\n\r\nyou can directly download the .TAR files from HF using e.g. 
`huggingface-cli download` and load them in webdataset :)", "> > now I need to convert it to WebDataset's TAR format for offline data ingestion.\r\n> \r\n> you can directly download the .TAR files from HF using e.g. `huggingface-cli download` and load them in webdataset :)\r\n\r\nThanks a lot! I completely forgot to use the Hugging Face CLI download. Thanks for the reminder!" ]
2024-12-27T01:40:44Z
2024-12-31T17:38:00Z
2024-12-28T15:38:03Z
NONE
null
null
{ "completed": 0, "percent_completed": 0, "total": 0 }
### Feature request Hi, I've downloaded an Arrow-formatted dataset offline using the Hugging Face datasets library: ``` import json from datasets import load_dataset dataset = load_dataset("pixparse/cc3m-wds") dataset.save_to_disk("./cc3m_1") ``` now I need to convert it to WebDataset's TAR format for offline data ingestion. Is there a straightforward method to achieve this conversion without an internet connection? Can I simply convert it with ``` tar -cvf ``` btw, when I tried: ``` import webdataset as wds from huggingface_hub import get_token from torch.utils.data import DataLoader hf_token = get_token() url = "https://huggingface.co/datasets/timm/imagenet-12k-wds/resolve/main/imagenet12k-train-{{0000..1023}}.tar" url = f"pipe:curl -s -L {url} -H 'Authorization:Bearer {hf_token}'" dataset = wds.WebDataset(url).decode() dataset.save_to_disk("./cc3m_webdataset") ``` an error occurred: ``` AttributeError: 'WebDataset' object has no attribute 'save_to_disk' ``` Thanks a lot! ### Motivation Converting Arrow to WebDataset TAR Format ### Your contribution No clue yet
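As the comments above conclude, the TAR shards can be fetched as-is instead of round-tripping through Arrow. A sketch using `snapshot_download`; the `allow_patterns` filter and the flat `*.tar` layout are assumptions to adjust to the repo's actual file names:

```python
import glob
import os

import webdataset as wds
from huggingface_hub import snapshot_download

# download only the .tar shards of the WebDataset repo for offline use
local_dir = snapshot_download(
    repo_id="pixparse/cc3m-wds",
    repo_type="dataset",
    allow_patterns="*.tar",  # skip parquet/metadata files
)

shards = sorted(glob.glob(os.path.join(local_dir, "*.tar")))
dataset = wds.WebDataset(shards).decode()
for sample in dataset:
    break  # samples stream straight from the local tar files
```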
{ "avatar_url": "https://avatars.githubusercontent.com/u/91370128?v=4", "events_url": "https://api.github.com/users/katie312/events{/privacy}", "followers_url": "https://api.github.com/users/katie312/followers", "following_url": "https://api.github.com/users/katie312/following{/other_user}", "gists_url": "https://api.github.com/users/katie312/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/katie312", "id": 91370128, "login": "katie312", "node_id": "MDQ6VXNlcjkxMzcwMTI4", "organizations_url": "https://api.github.com/users/katie312/orgs", "received_events_url": "https://api.github.com/users/katie312/received_events", "repos_url": "https://api.github.com/users/katie312/repos", "site_admin": false, "starred_url": "https://api.github.com/users/katie312/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/katie312/subscriptions", "type": "User", "url": "https://api.github.com/users/katie312", "user_view_type": "public" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/7347/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/7347/timeline
null
completed
null
null
37.955278
349
https://api.github.com/repos/huggingface/datasets/issues/7346
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/7346/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/7346/comments
https://api.github.com/repos/huggingface/datasets/issues/7346/events
https://github.com/huggingface/datasets/issues/7346
2,758,752,118
I_kwDODunzps6kbzd2
7,346
OSError: Invalid flatbuffers message.
{ "avatar_url": "https://avatars.githubusercontent.com/u/46232487?v=4", "events_url": "https://api.github.com/users/antecede/events{/privacy}", "followers_url": "https://api.github.com/users/antecede/followers", "following_url": "https://api.github.com/users/antecede/following{/other_user}", "gists_url": "https://api.github.com/users/antecede/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/antecede", "id": 46232487, "login": "antecede", "node_id": "MDQ6VXNlcjQ2MjMyNDg3", "organizations_url": "https://api.github.com/users/antecede/orgs", "received_events_url": "https://api.github.com/users/antecede/received_events", "repos_url": "https://api.github.com/users/antecede/repos", "site_admin": false, "starred_url": "https://api.github.com/users/antecede/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/antecede/subscriptions", "type": "User", "url": "https://api.github.com/users/antecede", "user_view_type": "public" }
[]
closed
false
null
[]
null
[ "Thanks for reporting, it looks like an issue with `pyarrow.ipc.open_stream`\r\n\r\nCan you try installing `datasets` from this pull request and see if it helps ? https://github.com/huggingface/datasets/pull/7348", "> Thanks for reporting, it looks like an issue with `pyarrow.ipc.open_stream`\r\n> \r\n> Can you try installing `datasets` from this pull request and see if it helps ? #7348\r\n\r\nThank you very much. Here, it also needed to be changed to `except (OSError, pa.lib.ArrowInvalid):`. And then the bug was fixed.\r\nhttps://github.com/huggingface/datasets/blob/2826a040a05e19fca894253b78a932d4fcb4a584/src/datasets/packaged_modules/arrow/arrow.py#L48", "Cool ! we will do a new release soon :) in the meantime you can use `datasets` from `main`" ]
2024-12-25T11:38:52Z
2025-01-09T14:25:29Z
2025-01-09T14:25:05Z
NONE
null
null
{ "completed": 0, "percent_completed": 0, "total": 0 }
### Describe the bug When loading many large 2D arrays (1000 × 1152 each, 2,000 per file in this case) with `load_dataset`, the error message `OSError: Invalid flatbuffers message` is reported. When only 300 arrays of this size (1000 × 1152) are stored per file, they can be loaded correctly. When 2,000 2D arrays are stored in each file, about 100 files are generated, each with a file size of about 5-6GB. But when 300 2D arrays are stored in each file, **about 600 files are generated, which is too many files**. ### Steps to reproduce the bug error: ```python --------------------------------------------------------------------------- OSError Traceback (most recent call last) Cell In[2], line 4 1 from datasets import Dataset 2 from datasets import load_dataset ----> 4 real_dataset = load_dataset("arrow", data_files='tensorData/real_ResidueTensor/*', split="train")#.with_format("torch") # , split="train" 5 # sim_dataset = load_dataset("arrow", data_files='tensorData/sim_ResidueTensor/*', split="train").with_format("torch") 6 real_dataset File ~/miniforge3/envs/esmIne3/lib/python3.12/site-packages/datasets/load.py:2151, in load_dataset(path, name, data_dir, data_files, split, cache_dir, features, download_config, download_mode, verification_mode, keep_in_memory, save_infos, revision, token, streaming, num_proc, storage_options, trust_remote_code, **config_kwargs) 2148 return builder_instance.as_streaming_dataset(split=split) 2150 # Download and prepare data -> 2151 builder_instance.download_and_prepare( 2152 download_config=download_config, 2153 download_mode=download_mode, 2154 verification_mode=verification_mode, 2155 num_proc=num_proc, 2156 storage_options=storage_options, 2157 ) 2159 # Build dataset for splits 2160 keep_in_memory = ( 2161 keep_in_memory if keep_in_memory is not None else is_small_dataset(builder_instance.info.dataset_size) 2162 ) File ~/miniforge3/envs/esmIne3/lib/python3.12/site-packages/datasets/builder.py:924, in DatasetBuilder.download_and_prepare(self, output_dir, download_config, download_mode, verification_mode, dl_manager, base_path, file_format, max_shard_size, num_proc, storage_options, **download_and_prepare_kwargs) 922 if num_proc is not None: 923 prepare_split_kwargs["num_proc"] = num_proc --> 924 self._download_and_prepare( 925 dl_manager=dl_manager, 926 verification_mode=verification_mode, 927 **prepare_split_kwargs, 928 **download_and_prepare_kwargs, 929 ) 930 # Sync info 931 self.info.dataset_size = sum(split.num_bytes for split in self.info.splits.values()) File ~/miniforge3/envs/esmIne3/lib/python3.12/site-packages/datasets/builder.py:978, in DatasetBuilder._download_and_prepare(self, dl_manager, verification_mode, **prepare_split_kwargs) 976 split_dict = SplitDict(dataset_name=self.dataset_name) 977 split_generators_kwargs = self._make_split_generators_kwargs(prepare_split_kwargs) --> 978 split_generators = self._split_generators(dl_manager, **split_generators_kwargs) 980 # Checksums verification 981 if verification_mode == VerificationMode.ALL_CHECKS and dl_manager.record_checksums: File
~/miniforge3/envs/esmIne3/lib/python3.12/site-packages/datasets/packaged_modules/arrow/arrow.py:47, in Arrow._split_generators(self, dl_manager) 45 with open(file, "rb") as f: 46 try: ---> 47 reader = pa.ipc.open_stream(f) 48 except pa.lib.ArrowInvalid: 49 reader = pa.ipc.open_file(f) File ~/miniforge3/envs/esmIne3/lib/python3.12/site-packages/pyarrow/ipc.py:190, in open_stream(source, options, memory_pool) 171 def open_stream(source, *, options=None, memory_pool=None): 172 """ 173 Create reader for Arrow streaming format. 174 (...) 188 A reader for the given source 189 """ --> 190 return RecordBatchStreamReader(source, options=options, 191 memory_pool=memory_pool) File ~/miniforge3/envs/esmIne3/lib/python3.12/site-packages/pyarrow/ipc.py:52, in RecordBatchStreamReader.__init__(self, source, options, memory_pool) 50 def __init__(self, source, *, options=None, memory_pool=None): 51 options = _ensure_default_ipc_read_options(options) ---> 52 self._open(source, options=options, memory_pool=memory_pool) File ~/miniforge3/envs/esmIne3/lib/python3.12/site-packages/pyarrow/ipc.pxi:1006, in pyarrow.lib._RecordBatchStreamReader._open() File ~/miniforge3/envs/esmIne3/lib/python3.12/site-packages/pyarrow/error.pxi:155, in pyarrow.lib.pyarrow_internal_check_status() File ~/miniforge3/envs/esmIne3/lib/python3.12/site-packages/pyarrow/error.pxi:92, in pyarrow.lib.check_status() OSError: Invalid flatbuffers message. ``` reproduce: here is just an example; the real 2D matrices are outputs of the ESM large model, and the matrix size is approximate ```python import numpy as np import pyarrow as pa random_arrays_list = [np.random.rand(1000, 1152) for _ in range(2000)] table = pa.Table.from_pydict({ 'tensor': [tensor.tolist() for tensor in random_arrays_list] }) import pyarrow.feather as feather feather.write_feather(table, 'test.arrow') from datasets import load_dataset dataset = load_dataset("arrow", data_files='test.arrow', split="train") ``` ### Expected behavior `load_dataset` should load the dataset just as `feather.read_feather` does ```python import pyarrow.feather as feather feather.read_feather('tensorData/real_ResidueTensor/real_tensor_1.arrow') ``` Plus `load_dataset("parquet", data_files='test.arrow', split="train")` works fine ### Environment info - `datasets` version: 3.2.0 - Platform: Linux-6.8.0-49-generic-x86_64-with-glibc2.39 - Python version: 3.12.3 - `huggingface_hub` version: 0.26.5 - PyArrow version: 18.1.0 - Pandas version: 2.2.3 - `fsspec` version: 2024.9.0
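The fix referenced in the comments above amounts to treating `pa.lib.ArrowInvalid` like `OSError` when probing whether a file is in Arrow streaming or Arrow file (Feather V2) format. A minimal sketch of that fallback, mirroring but not reproducing the patched `arrow.py`:

```python
import pyarrow as pa


def read_arrow_any(path: str) -> pa.Table:
    """Open either Arrow streaming or Arrow file (Feather V2) data."""
    with open(path, "rb") as f:
        try:
            reader = pa.ipc.open_stream(f)
        except (OSError, pa.lib.ArrowInvalid):
            # feather.write_feather produces the *file* format, whose magic
            # bytes make open_stream fail with "Invalid flatbuffers message"
            f.seek(0)
            reader = pa.ipc.open_file(f)
        return reader.read_all()
```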
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq", "user_view_type": "public" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/7346/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/7346/timeline
null
completed
null
null
362.770278
350
https://api.github.com/repos/huggingface/datasets/issues/7345
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/7345/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/7345/comments
https://api.github.com/repos/huggingface/datasets/issues/7345/events
https://github.com/huggingface/datasets/issues/7345
2,758,585,709
I_kwDODunzps6kbK1t
7,345
Different behaviour of IterableDataset.map vs Dataset.map with remove_columns
{ "avatar_url": "https://avatars.githubusercontent.com/u/12157034?v=4", "events_url": "https://api.github.com/users/vttrifonov/events{/privacy}", "followers_url": "https://api.github.com/users/vttrifonov/followers", "following_url": "https://api.github.com/users/vttrifonov/following{/other_user}", "gists_url": "https://api.github.com/users/vttrifonov/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/vttrifonov", "id": 12157034, "login": "vttrifonov", "node_id": "MDQ6VXNlcjEyMTU3MDM0", "organizations_url": "https://api.github.com/users/vttrifonov/orgs", "received_events_url": "https://api.github.com/users/vttrifonov/received_events", "repos_url": "https://api.github.com/users/vttrifonov/repos", "site_admin": false, "starred_url": "https://api.github.com/users/vttrifonov/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/vttrifonov/subscriptions", "type": "User", "url": "https://api.github.com/users/vttrifonov", "user_view_type": "public" }
[]
closed
false
null
[]
null
[ "Good catch ! Do you think you can open a PR to fix this issue ?" ]
2024-12-25T07:36:48Z
2025-01-07T11:56:42Z
2025-01-07T11:56:42Z
CONTRIBUTOR
null
null
{ "completed": 0, "percent_completed": 0, "total": 0 }
### Describe the bug The following code ```python import datasets as hf ds1 = hf.Dataset.from_list([{'i': i} for i in [0,1]]) #ds1 = ds1.to_iterable_dataset() ds2 = ds1.map( lambda i: {'i': i+1}, input_columns = ['i'], remove_columns = ['i'] ) list(ds2) ``` produces ```python [{'i': 1}, {'i': 2}] ``` as expected. If the line that converts `ds1` to iterable is uncommented so that `ds2` is a map of an `IterableDataset`, the result is ```python [{},{}] ``` I expected the output to be the same as before. It seems that in the second case the removed column is not added back into the output. The issue seems to be [here](https://github.com/huggingface/datasets/blob/6c6a82a573f946c4a81069f56446caed15cee9c2/src/datasets/iterable_dataset.py#L1093): the columns are removed after the mapping, which is not what we want (or what the [documentation says](https://github.com/huggingface/datasets/blob/6c6a82a573f946c4a81069f56446caed15cee9c2/src/datasets/iterable_dataset.py#L2370)) because we want the columns removed from the example before the map output is merged, so they are added back if the map produced them. This is `datasets==3.2.0` and `python==3.10` ### Steps to reproduce the bug see above ### Expected behavior see above ### Environment info see above
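For clarity, the documented semantics can be reduced to one merge step: drop `remove_columns` from the input first, then overlay the function's output, so a column the function re-creates survives. A toy illustration of that order (not the `datasets` internals):

```python
def map_one(example, function, input_columns, remove_columns):
    # call the user function on the selected input columns
    out = function(*(example[c] for c in input_columns))
    # drop the removed columns from the *input*, not from the output
    kept = {k: v for k, v in example.items() if k not in remove_columns}
    kept.update(out)  # columns re-added by the function win
    return kept


print(map_one({"i": 0}, lambda i: {"i": i + 1}, ["i"], ["i"]))  # {'i': 1}
```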
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq", "user_view_type": "public" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/7345/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/7345/timeline
null
completed
null
null
316.331667
351
https://api.github.com/repos/huggingface/datasets/issues/7344
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/7344/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/7344/comments
https://api.github.com/repos/huggingface/datasets/issues/7344/events
https://github.com/huggingface/datasets/issues/7344
2,754,735,951
I_kwDODunzps6kMe9P
7,344
HfHubHTTPError: 429 Client Error: Too Many Requests for URL when trying to access SlimPajama-627B or c4 on TPUs
{ "avatar_url": "https://avatars.githubusercontent.com/u/9397233?v=4", "events_url": "https://api.github.com/users/clankur/events{/privacy}", "followers_url": "https://api.github.com/users/clankur/followers", "following_url": "https://api.github.com/users/clankur/following{/other_user}", "gists_url": "https://api.github.com/users/clankur/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/clankur", "id": 9397233, "login": "clankur", "node_id": "MDQ6VXNlcjkzOTcyMzM=", "organizations_url": "https://api.github.com/users/clankur/orgs", "received_events_url": "https://api.github.com/users/clankur/received_events", "repos_url": "https://api.github.com/users/clankur/repos", "site_admin": false, "starred_url": "https://api.github.com/users/clankur/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/clankur/subscriptions", "type": "User", "url": "https://api.github.com/users/clankur", "user_view_type": "public" }
[]
closed
false
null
[]
null
[ "Hi ! This is due to your old version of `datasets` which calls HF with `expand=True`, an option that is strongly rate limited.\r\n\r\nRecent versions of `datasets` don't rely on this anymore, you can fix your issue by upgrading `datasets` :)\r\n\r\n```\r\npip install -U datasets\r\n```\r\n\r\nYou can also get maximum HF availability on your compute nodes with HF Enterprise (see [network security features](https://huggingface.co/docs/hub/enterprise-hub-network-security))", "Upgrading fixed the issue for me. Thanks! " ]
2024-12-22T16:30:07Z
2025-01-15T05:32:00Z
2025-01-15T05:31:58Z
NONE
null
null
{ "completed": 0, "percent_completed": 0, "total": 0 }
### Describe the bug I am trying to run some trainings on Google's TPUs using Huggingface's DataLoader on [SlimPajama-627B](https://huggingface.co/datasets/cerebras/SlimPajama-627B) and [c4](https://huggingface.co/datasets/allenai/c4), but I end up running into a `429 Client Error: Too Many Requests for URL` error when I call `load_dataset`. The even odder part is that I am able to successfully run trainings with the [wikitext dataset](https://huggingface.co/datasets/Salesforce/wikitext). Is there something I need to set up to specifically train with SlimPajama or C4 on TPUs? I am not clear on why I am getting these errors. ### Steps to reproduce the bug These are the commands you could run to produce the error below, but you will require a ClearML account (you can create one [here](https://app.clear.ml/login?redirect=%2Fdashboard)) with a queue set up to run on Google TPUs ```bash git clone https://github.com/clankur/muGPT.git cd muGPT python -m train --config-name=slim_v4-32_84m.yaml +training.queue={NAME_OF_CLEARML_QUEUE} ``` The error I see: ``` Traceback (most recent call last): File "/home/clankur/conda/envs/jax/lib/python3.10/site-packages/clearml/binding/hydra_bind.py", line 230, in _patched_task_function return task_function(a_config, *a_args, **a_kwargs) File "/home/clankur/.clearml/venvs-builds/3.10/task_repository/muGPT.git/train.py", line 1037, in main main_contained(config, logger) File "/home/clankur/.clearml/venvs-builds/3.10/task_repository/muGPT.git/train.py", line 840, in main_contained loader = get_loader("train", config.training_data, config.training.tokens) File "/home/clankur/.clearml/venvs-builds/3.10/task_repository/muGPT.git/input_loader.py", line 549, in get_loader return HuggingFaceDataLoader(split, config, token_batch_params) File "/home/clankur/.clearml/venvs-builds/3.10/task_repository/muGPT.git/input_loader.py", line 395, in __init__ self.dataset = load_dataset( File "/home/clankur/conda/envs/jax/lib/python3.10/site-packages/datasets/load.py", line 2112, in load_dataset builder_instance = load_dataset_builder( File "/home/clankur/conda/envs/jax/lib/python3.10/site-packages/datasets/load.py", line 1798, in load_dataset_builder dataset_module = dataset_module_factory( File "/home/clankur/conda/envs/jax/lib/python3.10/site-packages/datasets/load.py", line 1495, in dataset_module_factory raise e1 from None File "/home/clankur/conda/envs/jax/lib/python3.10/site-packages/datasets/load.py", line 1479, in dataset_module_factory ).get_module() File "/home/clankur/conda/envs/jax/lib/python3.10/site-packages/datasets/load.py", line 1034, in get_module else get_data_patterns(base_path, download_config=self.download_config) File "/home/clankur/conda/envs/jax/lib/python3.10/site-packages/datasets/data_files.py", line 457, in get_data_patterns return _get_data_files_patterns(resolver) File "/home/clankur/conda/envs/jax/lib/python3.10/site-packages/datasets/data_files.py", line 248, in _get_data_files_patterns data_files = pattern_resolver(pattern) File "/home/clankur/conda/envs/jax/lib/python3.10/site-packages/datasets/data_files.py", line 340, in resolve_pattern for filepath, info in fs.glob(pattern, detail=True).items() File "/home/clankur/conda/envs/jax/lib/python3.10/site-packages/huggingface_hub/hf_file_system.py", line 409, in glob return super().glob(path, **kwargs) File "/home/clankur/.clearml/venvs-builds/3.10/lib/python3.10/site-packages/fsspec/spec.py", line 602, in glob allpaths = self.find(root, maxdepth=depth, withdirs=True, detail=True, **kwargs) File 
"/home/clankur/conda/envs/jax/lib/python3.10/site-packages/huggingface_hub/hf_file_system.py", line 429, in find out = self._ls_tree(path, recursive=True, refresh=refresh, revision=resolved_path.revision, **kwargs) File "/home/clankur/conda/envs/jax/lib/python3.10/site-packages/huggingface_hub/hf_file_system.py", line 358, in _ls_tree self._ls_tree( File "/home/clankur/conda/envs/jax/lib/python3.10/site-packages/huggingface_hub/hf_file_system.py", line 375, in _ls_tree for path_info in tree: File "/home/clankur/conda/envs/jax/lib/python3.10/site-packages/huggingface_hub/hf_api.py", line 3080, in list_repo_tree for path_info in paginate(path=tree_url, headers=headers, params={"recursive": recursive, "expand": expand}): File "/home/clankur/conda/envs/jax/lib/python3.10/site-packages/huggingface_hub/utils/_pagination.py", line 46, in paginate hf_raise_for_status(r) File "/home/clankur/conda/envs/jax/lib/python3.10/site-packages/huggingface_hub/utils/_http.py", line 477, in hf_raise_for_status raise _format(HfHubHTTPError, str(e), response) from e huggingface_hub.errors.HfHubHTTPError: 429 Client Error: Too Many Requests for url: https://huggingface.co/api/datasets/cerebras/SlimPajama-627B/tree/2d0accdd58c5d5511943ca1f5ff0e3eb5e293543?recursive=True&expand=True&cursor=ZXlKbWFXeGxYMjVoYldVaU9pSjBaWE4wTDJOb2RXNXJNUzlsZUdGdGNHeGxYMmh2YkdSdmRYUmZPVFEzTG1wemIyNXNMbnB6ZENKOTo2MjUw (Request ID: Root=1-67673de9-1413900606ede7712b08ef2c;1304c09c-3e69-4222-be14-f10ee709d49c) maximum queue size reached Set the environment variable HYDRA_FULL_ERROR=1 for a complete stack trace. ``` ### Expected behavior I'd expect the DataLoader to load from the SlimPajama-627B and c4 dataset without issue. ### Environment info - `datasets` version: 2.14.4 - Platform: Linux-5.8.0-1035-gcp-x86_64-with-glibc2.31 - Python version: 3.10.16 - Huggingface_hub version: 0.26.5 - PyArrow version: 18.1.0 - Pandas version: 2.2.3
{ "avatar_url": "https://avatars.githubusercontent.com/u/9397233?v=4", "events_url": "https://api.github.com/users/clankur/events{/privacy}", "followers_url": "https://api.github.com/users/clankur/followers", "following_url": "https://api.github.com/users/clankur/following{/other_user}", "gists_url": "https://api.github.com/users/clankur/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/clankur", "id": 9397233, "login": "clankur", "node_id": "MDQ6VXNlcjkzOTcyMzM=", "organizations_url": "https://api.github.com/users/clankur/orgs", "received_events_url": "https://api.github.com/users/clankur/received_events", "repos_url": "https://api.github.com/users/clankur/repos", "site_admin": false, "starred_url": "https://api.github.com/users/clankur/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/clankur/subscriptions", "type": "User", "url": "https://api.github.com/users/clankur", "user_view_type": "public" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/7344/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/7344/timeline
null
completed
null
null
565.030833
352
https://api.github.com/repos/huggingface/datasets/issues/7343
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/7343/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/7343/comments
https://api.github.com/repos/huggingface/datasets/issues/7343/events
https://github.com/huggingface/datasets/issues/7343
2,750,525,823
I_kwDODunzps6j8bF_
7,343
[Bug] Inconsistent behavior of data_files and data_dir in load_dataset method.
{ "avatar_url": "https://avatars.githubusercontent.com/u/74161960?v=4", "events_url": "https://api.github.com/users/JasonCZH4/events{/privacy}", "followers_url": "https://api.github.com/users/JasonCZH4/followers", "following_url": "https://api.github.com/users/JasonCZH4/following{/other_user}", "gists_url": "https://api.github.com/users/JasonCZH4/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/JasonCZH4", "id": 74161960, "login": "JasonCZH4", "node_id": "MDQ6VXNlcjc0MTYxOTYw", "organizations_url": "https://api.github.com/users/JasonCZH4/orgs", "received_events_url": "https://api.github.com/users/JasonCZH4/received_events", "repos_url": "https://api.github.com/users/JasonCZH4/repos", "site_admin": false, "starred_url": "https://api.github.com/users/JasonCZH4/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/JasonCZH4/subscriptions", "type": "User", "url": "https://api.github.com/users/JasonCZH4", "user_view_type": "public" }
[]
closed
false
null
[]
null
[ "Hi ! `data_files` with a list is equivalent to `data_files={\"train\": data_files}` with a train test only.\r\n\r\nWhen no split are specified, they are inferred based on file names, and files with no apparent split are ignored", "Thanks for your reply!\r\n`files with no apparent split are ignored`. Is there a option that I can choose to ignored it or not as I mention aboved? Thanks!", "To include all the files, the best way is to pass `data_files` yourself. There is no option to disable split detection at the moment", "Thanks! I hope you guys can consider adding this option in the future. :)" ]
2024-12-19T14:31:27Z
2025-01-03T15:54:09Z
2025-01-03T15:54:09Z
NONE
null
null
{ "completed": 0, "percent_completed": 0, "total": 0 }
### Describe the bug
Inconsistent behavior of the `data_files` and `data_dir` arguments in the `load_dataset` method.

### Steps to reproduce the bug
# First
I have three files named 'train.json', 'val.json', 'test.json'. Each one contains a simple dict `{text:'aaa'}`. Their paths are `/data/train.json`, `/data/val.json`, `/data/test.json`.

I load the dataset with the `data_files` argument:
```py
files = [os.path.join('./data', file) for file in os.listdir('./data')]
ds = load_dataset(
    path='json',
    data_files=files,)
```
And I get:
```py
DatasetDict({
    train: Dataset({
        features: ['text'],
        num_rows: 3
    })
})
```
However, if I load the dataset with the `data_dir` argument:
```py
ds = load_dataset(
    path='json',
    data_dir='./data',)
```
I get:
```py
DatasetDict({
    train: Dataset({
        features: ['text'],
        num_rows: 1
    })
    validation: Dataset({
        features: ['text'],
        num_rows: 1
    })
    test: Dataset({
        features: ['text'],
        num_rows: 1
    })
})
```
The two results are not the same. Their behaviors are not equal, even though the statement [here](https://github.com/huggingface/datasets/blob/d0c152a979d91cc34b605c0298aebc650ab7dd27/src/datasets/load.py#L1790) says that their behaviors are equal.

# Second
If some filenames include 'test' while others do not, `load_dataset` only returns the `test` dataset and the other files are **abandoned**. Given two files named `test.json` and `1.json`, each containing a simple dict `{text:'aaa'}`, I load the dataset using:
```py
ds = load_dataset(
    path='json',
    data_dir='./data',)
```
Only `test` is returned, and `1.json` is missing:
```py
DatasetDict({
    test: Dataset({
        features: ['text'],
        num_rows: 1
    })
})
```
Things do not change even if I manually set `split='train'`.

### Expected behavior
1. Fix the above bugs.
2. Although the documentation says that the `load_dataset` method will `Find which file goes into which split (e.g. train/test) based on file and directory names or on the YAML configuration`, I hope I can manually decide whether to do so. Sometimes users may accidentally put a `test` string in a filename when they just want a single `train` dataset. If the number of files in `data_dir` is huge, it is not easy to find out what causes the second situation mentioned above.

### Environment info
datasets==3.2.0
Ubuntu 18.04
{ "avatar_url": "https://avatars.githubusercontent.com/u/74161960?v=4", "events_url": "https://api.github.com/users/JasonCZH4/events{/privacy}", "followers_url": "https://api.github.com/users/JasonCZH4/followers", "following_url": "https://api.github.com/users/JasonCZH4/following{/other_user}", "gists_url": "https://api.github.com/users/JasonCZH4/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/JasonCZH4", "id": 74161960, "login": "JasonCZH4", "node_id": "MDQ6VXNlcjc0MTYxOTYw", "organizations_url": "https://api.github.com/users/JasonCZH4/orgs", "received_events_url": "https://api.github.com/users/JasonCZH4/received_events", "repos_url": "https://api.github.com/users/JasonCZH4/repos", "site_admin": false, "starred_url": "https://api.github.com/users/JasonCZH4/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/JasonCZH4/subscriptions", "type": "User", "url": "https://api.github.com/users/JasonCZH4", "user_view_type": "public" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/7343/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/7343/timeline
null
completed
null
null
361.378333
353
https://api.github.com/repos/huggingface/datasets/issues/7323
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/7323/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/7323/comments
https://api.github.com/repos/huggingface/datasets/issues/7323/events
https://github.com/huggingface/datasets/issues/7323
2,736,008,698
I_kwDODunzps6jFC36
7,323
Unexpected cache behaviour using load_dataset
{ "avatar_url": "https://avatars.githubusercontent.com/u/74349080?v=4", "events_url": "https://api.github.com/users/Moritz-Wirth/events{/privacy}", "followers_url": "https://api.github.com/users/Moritz-Wirth/followers", "following_url": "https://api.github.com/users/Moritz-Wirth/following{/other_user}", "gists_url": "https://api.github.com/users/Moritz-Wirth/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/Moritz-Wirth", "id": 74349080, "login": "Moritz-Wirth", "node_id": "MDQ6VXNlcjc0MzQ5MDgw", "organizations_url": "https://api.github.com/users/Moritz-Wirth/orgs", "received_events_url": "https://api.github.com/users/Moritz-Wirth/received_events", "repos_url": "https://api.github.com/users/Moritz-Wirth/repos", "site_admin": false, "starred_url": "https://api.github.com/users/Moritz-Wirth/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Moritz-Wirth/subscriptions", "type": "User", "url": "https://api.github.com/users/Moritz-Wirth", "user_view_type": "public" }
[]
closed
false
null
[]
null
[ "Hi ! Since `datasets` 3.x, the `datasets` specific files are in `cache_dir=` and the HF files are cached using `huggingface_hub` and you can set its cache directory using the `HF_HOME` environment variable.\r\n\r\nThey are independent, for example you can delete the Hub cache (containing downloaded files) but still reload your cached datasets from the `datasets` cache (containing prepared datasets in Arrow format)" ]
2024-12-12T14:03:00Z
2025-01-31T11:34:24Z
2025-01-31T11:34:24Z
NONE
null
null
{ "completed": 0, "percent_completed": 0, "total": 0 }
### Describe the bug
Following the [Cache management](https://huggingface.co/docs/datasets/en/cache) documentation and the previous behaviour from `datasets` version 2.18.0, one is able to change the cache directory. Previously, all downloaded/extracted/etc. files were found in this folder. Since I recently updated to the latest version, this is not the case anymore. Downloaded files are stored in `~/.cache/huggingface/hub`. Providing the `cache_dir` argument in `load_dataset`, the cache directory is created and there are some files, but the bulk is still in `~/.cache/huggingface/hub`. I believe this could be solved by adding the `cache_dir` argument [here](https://github.com/huggingface/datasets/blob/fdda5585ab18ea1292547f36c969d12c408ab842/src/datasets/utils/file_utils.py#L188).

### Steps to reproduce the bug
For example using https://huggingface.co/datasets/ashraq/esc50:
```python
from datasets import load_dataset
ds = load_dataset("ashraq/esc50", "default", cache_dir="~/custom/cache/path/esc50")
```

### Expected behavior
I would expect the bulk of files related to the dataset to be stored somewhere in `~/custom/cache/path/esc50`, but it seems they are in `~/.cache/huggingface/hub/datasets--ashraq--esc50`.

### Environment info
- `datasets` version: 3.2.0
- Platform: Linux-5.14.0-503.15.1.el9_5.x86_64-x86_64-with-glibc2.34
- Python version: 3.10.14
- `huggingface_hub` version: 0.26.5
- PyArrow version: 17.0.0
- Pandas version: 2.2.2
- `fsspec` version: 2024.6.1
{ "avatar_url": "https://avatars.githubusercontent.com/u/74349080?v=4", "events_url": "https://api.github.com/users/Moritz-Wirth/events{/privacy}", "followers_url": "https://api.github.com/users/Moritz-Wirth/followers", "following_url": "https://api.github.com/users/Moritz-Wirth/following{/other_user}", "gists_url": "https://api.github.com/users/Moritz-Wirth/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/Moritz-Wirth", "id": 74349080, "login": "Moritz-Wirth", "node_id": "MDQ6VXNlcjc0MzQ5MDgw", "organizations_url": "https://api.github.com/users/Moritz-Wirth/orgs", "received_events_url": "https://api.github.com/users/Moritz-Wirth/received_events", "repos_url": "https://api.github.com/users/Moritz-Wirth/repos", "site_admin": false, "starred_url": "https://api.github.com/users/Moritz-Wirth/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Moritz-Wirth/subscriptions", "type": "User", "url": "https://api.github.com/users/Moritz-Wirth", "user_view_type": "public" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/7323/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/7323/timeline
null
completed
null
null
1,197.523333
366
https://api.github.com/repos/huggingface/datasets/issues/7320
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/7320/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/7320/comments
https://api.github.com/repos/huggingface/datasets/issues/7320/events
https://github.com/huggingface/datasets/issues/7320
2,731,112,100
I_kwDODunzps6iyXak
7,320
ValueError: You should supply an encoding or a list of encodings to this method that includes input_ids, but you provided ['label']
{ "avatar_url": "https://avatars.githubusercontent.com/u/38381084?v=4", "events_url": "https://api.github.com/users/atrompeterog/events{/privacy}", "followers_url": "https://api.github.com/users/atrompeterog/followers", "following_url": "https://api.github.com/users/atrompeterog/following{/other_user}", "gists_url": "https://api.github.com/users/atrompeterog/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/atrompeterog", "id": 38381084, "login": "atrompeterog", "node_id": "MDQ6VXNlcjM4MzgxMDg0", "organizations_url": "https://api.github.com/users/atrompeterog/orgs", "received_events_url": "https://api.github.com/users/atrompeterog/received_events", "repos_url": "https://api.github.com/users/atrompeterog/repos", "site_admin": false, "starred_url": "https://api.github.com/users/atrompeterog/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/atrompeterog/subscriptions", "type": "User", "url": "https://api.github.com/users/atrompeterog", "user_view_type": "public" }
[]
closed
false
null
[]
null
[ "Now i have other error" ]
2024-12-10T20:23:11Z
2024-12-10T23:22:23Z
2024-12-10T23:22:23Z
NONE
null
null
{ "completed": 0, "percent_completed": 0, "total": 0 }
### Describe the bug
I am trying to create a PEFT model from a DISTILBERT model and run a training loop. However, `trainer.train()` is giving me this error:

`ValueError: You should supply an encoding or a list of encodings to this method that includes input_ids, but you provided ['label']`

Here is my code:

### Steps to reproduce the bug
```python
# Creating a PEFT config
from peft import LoraConfig
from transformers import AutoTokenizer, AutoModelForSequenceClassification
from peft import get_peft_model

lora_config = LoraConfig(
    task_type="SEQ_CLASS",
    r=8,
    lora_alpha=32,
    target_modules=["q_lin", "k_lin", "v_lin"],
    lora_dropout=0.01,
)

# Converting a Transformers model into a PEFT model
model = AutoModelForSequenceClassification.from_pretrained(
    "distilbert-base-uncased",
    num_labels=2,  # Binary classification, 1 = positive, 0 = negative
)
lora_model = get_peft_model(model, lora_config)
print(lora_model)

# Tokenize the dataset
from datasets import load_dataset
from transformers import AutoTokenizer, AutoModelForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained("distilbert-base-uncased")

# Load the train and test splits of the dataset
dataset = load_dataset("fancyzhx/amazon_polarity")

# Create a smaller subset for train and test
subset_size = 5000
small_train_dataset = dataset["train"].shuffle(seed=42).select(range(subset_size))
small_test_dataset = dataset["test"].shuffle(seed=42).select(range(subset_size))

# Tokenize data
def tokenize_function(example):
    return tokenizer(example["content"], padding="max_length", truncation=True)

tokenized_train_dataset = small_train_dataset.map(tokenize_function, batched=True)
tokenized_test_dataset = small_test_dataset.map(tokenize_function, batched=True)

train_lora = tokenized_train_dataset.rename_column('label', 'labels')
test_lora = tokenized_test_dataset.rename_column('label', 'labels')

print(tokenized_train_dataset.column_names)
print(tokenized_test_dataset.column_names)

# Train the PEFT model
import numpy as np
from transformers import Trainer, TrainingArguments, default_data_collator, DataCollatorWithPadding
from datasets import load_dataset
from transformers import AutoTokenizer, AutoModelForSequenceClassification

def compute_metrics(eval_pred):
    predictions, labels = eval_pred
    predictions = np.argmax(predictions, axis=1)
    return {"accuracy": (predictions == labels).mean()}

trainer = Trainer(
    model=lora_model,
    args=TrainingArguments(
        output_dir=".",
        learning_rate=2e-3,
        # Reduce the batch size if you don't have enough memory
        per_device_train_batch_size=1,
        per_device_eval_batch_size=1,
        num_train_epochs=3,
        weight_decay=0.01,
        evaluation_strategy="epoch",
        save_strategy="epoch",
        load_best_model_at_end=True,
    ),
    train_dataset=tokenized_train_dataset,
    eval_dataset=tokenized_test_dataset,
    tokenizer=tokenizer,
    data_collator=DataCollatorWithPadding(tokenizer=tokenizer, return_tensors="pt"),
    compute_metrics=compute_metrics,
)

trainer.train()
```

### Expected behavior
Example of output:

```
[558/558 01:04, Epoch XX]
Epoch | Training Loss | Validation Loss | Accuracy
1     | No log        | 0.046478        | 0.988341
2     | 0.052800      | 0.048840        | 0.988341
```

### Environment info
Using Python and a Jupyter notebook.
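Editor's observation, beyond the original report: the `Trainer` above receives `tokenized_train_dataset`/`tokenized_test_dataset` rather than the renamed `train_lora`/`test_lora`, and the raw text columns are still attached. A hedged sketch of the adjustment (`amazon_polarity` carries `title` and `content` columns; `training_args` is a hypothetical name standing in for the same `TrainingArguments` as above):

```python
# Drop the raw text columns so the collator only sees tokenizer outputs + labels.
train_lora = train_lora.remove_columns(["title", "content"])
test_lora = test_lora.remove_columns(["title", "content"])

trainer = Trainer(
    model=lora_model,
    args=training_args,        # hypothetical name for the TrainingArguments above
    train_dataset=train_lora,  # was: tokenized_train_dataset
    eval_dataset=test_lora,    # was: tokenized_test_dataset
    tokenizer=tokenizer,
    data_collator=DataCollatorWithPadding(tokenizer=tokenizer, return_tensors="pt"),
    compute_metrics=compute_metrics,
)
```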
{ "avatar_url": "https://avatars.githubusercontent.com/u/38381084?v=4", "events_url": "https://api.github.com/users/atrompeterog/events{/privacy}", "followers_url": "https://api.github.com/users/atrompeterog/followers", "following_url": "https://api.github.com/users/atrompeterog/following{/other_user}", "gists_url": "https://api.github.com/users/atrompeterog/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/atrompeterog", "id": 38381084, "login": "atrompeterog", "node_id": "MDQ6VXNlcjM4MzgxMDg0", "organizations_url": "https://api.github.com/users/atrompeterog/orgs", "received_events_url": "https://api.github.com/users/atrompeterog/received_events", "repos_url": "https://api.github.com/users/atrompeterog/repos", "site_admin": false, "starred_url": "https://api.github.com/users/atrompeterog/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/atrompeterog/subscriptions", "type": "User", "url": "https://api.github.com/users/atrompeterog", "user_view_type": "public" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/7320/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/7320/timeline
null
completed
null
null
2.986667
369
https://api.github.com/repos/huggingface/datasets/issues/7303
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/7303/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/7303/comments
https://api.github.com/repos/huggingface/datasets/issues/7303/events
https://github.com/huggingface/datasets/issues/7303
2,705,729,696
I_kwDODunzps6hRiig
7,303
DataFilesNotFoundError for datasets LM1B
{ "avatar_url": "https://avatars.githubusercontent.com/u/72264324?v=4", "events_url": "https://api.github.com/users/hml1996-fight/events{/privacy}", "followers_url": "https://api.github.com/users/hml1996-fight/followers", "following_url": "https://api.github.com/users/hml1996-fight/following{/other_user}", "gists_url": "https://api.github.com/users/hml1996-fight/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/hml1996-fight", "id": 72264324, "login": "hml1996-fight", "node_id": "MDQ6VXNlcjcyMjY0MzI0", "organizations_url": "https://api.github.com/users/hml1996-fight/orgs", "received_events_url": "https://api.github.com/users/hml1996-fight/received_events", "repos_url": "https://api.github.com/users/hml1996-fight/repos", "site_admin": false, "starred_url": "https://api.github.com/users/hml1996-fight/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/hml1996-fight/subscriptions", "type": "User", "url": "https://api.github.com/users/hml1996-fight", "user_view_type": "public" }
[]
closed
false
null
[]
null
[ "Hi ! Can you try with a more recent version of `datasets` ? Also you might need to pass trust_remote_code=True since it's a script based dataset" ]
2024-11-29T17:27:45Z
2024-12-11T13:22:47Z
2024-12-11T13:22:47Z
NONE
null
null
{ "completed": 0, "percent_completed": 0, "total": 0 }
### Describe the bug
Cannot load the dataset https://huggingface.co/datasets/billion-word-benchmark/lm1b

### Steps to reproduce the bug
`dataset = datasets.load_dataset('lm1b', split=split)`

### Expected behavior
```
Traceback (most recent call last):
  File "/home/hml/projects/DeepLearning/Generative_model/Diffusion-BERT/word_freq.py", line 13, in <module>
    train_data = DiffusionLoader(tokenizer=tokenizer).my_load(task_name='lm1b', splits=['train'])[0]
  File "/home/hml/projects/DeepLearning/Generative_model/Diffusion-BERT/dataloader.py", line 20, in my_load
    return [self._load(task_name, name) for name in splits]
  File "/home/hml/projects/DeepLearning/Generative_model/Diffusion-BERT/dataloader.py", line 20, in <listcomp>
    return [self._load(task_name, name) for name in splits]
  File "/home/hml/projects/DeepLearning/Generative_model/Diffusion-BERT/dataloader.py", line 13, in _load
    dataset = datasets.load_dataset('lm1b', split=split)
  File "/home/hml/.conda/envs/DB/lib/python3.10/site-packages/datasets/load.py", line 2594, in load_dataset
    builder_instance = load_dataset_builder(
  File "/home/hml/.conda/envs/DB/lib/python3.10/site-packages/datasets/load.py", line 2266, in load_dataset_builder
    dataset_module = dataset_module_factory(
  File "/home/hml/.conda/envs/DB/lib/python3.10/site-packages/datasets/load.py", line 1827, in dataset_module_factory
    ).get_module()
  File "/home/hml/.conda/envs/DB/lib/python3.10/site-packages/datasets/load.py", line 1040, in get_module
    module_name, default_builder_kwargs = infer_module_for_data_files(
  File "/home/hml/.conda/envs/DB/lib/python3.10/site-packages/datasets/load.py", line 598, in infer_module_for_data_files
    raise DataFilesNotFoundError("No (supported) data files found" + (f" in {path}" if path else ""))
datasets.exceptions.DataFilesNotFoundError: No (supported) data files found in lm1b
```

### Environment info
datasets: 2.20.0
{ "avatar_url": "https://avatars.githubusercontent.com/u/72264324?v=4", "events_url": "https://api.github.com/users/hml1996-fight/events{/privacy}", "followers_url": "https://api.github.com/users/hml1996-fight/followers", "following_url": "https://api.github.com/users/hml1996-fight/following{/other_user}", "gists_url": "https://api.github.com/users/hml1996-fight/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/hml1996-fight", "id": 72264324, "login": "hml1996-fight", "node_id": "MDQ6VXNlcjcyMjY0MzI0", "organizations_url": "https://api.github.com/users/hml1996-fight/orgs", "received_events_url": "https://api.github.com/users/hml1996-fight/received_events", "repos_url": "https://api.github.com/users/hml1996-fight/repos", "site_admin": false, "starred_url": "https://api.github.com/users/hml1996-fight/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/hml1996-fight/subscriptions", "type": "User", "url": "https://api.github.com/users/hml1996-fight", "user_view_type": "public" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/7303/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/7303/timeline
null
completed
null
null
283.917222
385
https://api.github.com/repos/huggingface/datasets/issues/7297
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/7297/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/7297/comments
https://api.github.com/repos/huggingface/datasets/issues/7297/events
https://github.com/huggingface/datasets/issues/7297
2,683,977,430
I_kwDODunzps6f-j7W
7,297
wrong return type for `IterableDataset.shard()`
{ "avatar_url": "https://avatars.githubusercontent.com/u/47225236?v=4", "events_url": "https://api.github.com/users/ysngshn/events{/privacy}", "followers_url": "https://api.github.com/users/ysngshn/followers", "following_url": "https://api.github.com/users/ysngshn/following{/other_user}", "gists_url": "https://api.github.com/users/ysngshn/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/ysngshn", "id": 47225236, "login": "ysngshn", "node_id": "MDQ6VXNlcjQ3MjI1MjM2", "organizations_url": "https://api.github.com/users/ysngshn/orgs", "received_events_url": "https://api.github.com/users/ysngshn/received_events", "repos_url": "https://api.github.com/users/ysngshn/repos", "site_admin": false, "starred_url": "https://api.github.com/users/ysngshn/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ysngshn/subscriptions", "type": "User", "url": "https://api.github.com/users/ysngshn", "user_view_type": "public" }
[]
closed
false
null
[]
null
[ "Oops my bad ! thanks for reporting" ]
2024-11-22T17:25:46Z
2024-12-03T14:27:27Z
2024-12-03T14:27:03Z
NONE
null
null
{ "completed": 0, "percent_completed": 0, "total": 0 }
### Describe the bug
`IterableDataset.shard()` annotates its return type as `"Dataset"`, but it should be `"IterableDataset"`. This makes my IDE unhappy.

### Steps to reproduce the bug
Look at [the source code](https://github.com/huggingface/datasets/blob/main/src/datasets/iterable_dataset.py#L2668)?

### Expected behavior
The correct return type, `"IterableDataset"`.

### Environment info
datasets==3.1.0
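For illustration, a minimal sketch of the corrected annotation (signature abbreviated; this is not the actual patch):

```python
class IterableDataset:
    def shard(self, num_shards: int, index: int) -> "IterableDataset":  # was annotated -> "Dataset"
        ...
```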
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq", "user_view_type": "public" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/7297/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/7297/timeline
null
completed
null
null
261.021389
391
https://api.github.com/repos/huggingface/datasets/issues/7292
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/7292/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/7292/comments
https://api.github.com/repos/huggingface/datasets/issues/7292/events
https://github.com/huggingface/datasets/issues/7292
2,664,250,855
I_kwDODunzps6ezT3n
7,292
DataFilesNotFoundError for datasets `OpenMol/PubChemSFT`
{ "avatar_url": "https://avatars.githubusercontent.com/u/17878022?v=4", "events_url": "https://api.github.com/users/xnuohz/events{/privacy}", "followers_url": "https://api.github.com/users/xnuohz/followers", "following_url": "https://api.github.com/users/xnuohz/following{/other_user}", "gists_url": "https://api.github.com/users/xnuohz/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/xnuohz", "id": 17878022, "login": "xnuohz", "node_id": "MDQ6VXNlcjE3ODc4MDIy", "organizations_url": "https://api.github.com/users/xnuohz/orgs", "received_events_url": "https://api.github.com/users/xnuohz/received_events", "repos_url": "https://api.github.com/users/xnuohz/repos", "site_admin": false, "starred_url": "https://api.github.com/users/xnuohz/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/xnuohz/subscriptions", "type": "User", "url": "https://api.github.com/users/xnuohz", "user_view_type": "public" }
[]
closed
false
null
[]
null
[ "Hi ! If the dataset owner uses `push_to_hub()` instead of `save_to_disk()` and upload the local files it will fix the issue.\r\nRight now `datasets` sees the train/test/valid pickle files but they are not supported file formats.", "Alternatively you can load the arrow file instead:\r\n\r\n```python\r\nfrom datasets import load_dataset\r\ndataset = load_dataset('OpenMol/PubChemSFT', data_files='stage1/*.arrow')\r\n```", "Thanks! I'll have a try." ]
2024-11-16T11:54:31Z
2024-11-19T00:53:00Z
2024-11-19T00:52:59Z
NONE
null
null
{ "completed": 0, "percent_completed": 0, "total": 0 }
### Describe the bug
Cannot load the dataset https://huggingface.co/datasets/OpenMol/PubChemSFT

### Steps to reproduce the bug
```
from datasets import load_dataset
dataset = load_dataset('OpenMol/PubChemSFT')
```

### Expected behavior
```
---------------------------------------------------------------------------
DataFilesNotFoundError                    Traceback (most recent call last)
Cell In[7], line 2
      1 from datasets import load_dataset
----> 2 dataset = load_dataset('OpenMol/PubChemSFT')

File ~/Softwares/anaconda3/envs/pyg-dev/lib/python3.9/site-packages/datasets/load.py:2587, in load_dataset(path, name, data_dir, data_files, split, cache_dir, features, download_config, download_mode, verification_mode, ignore_verifications, keep_in_memory, save_infos, revision, token, use_auth_token, task, streaming, num_proc, storage_options, trust_remote_code, **config_kwargs)
   2582 verification_mode = VerificationMode(
   2583     (verification_mode or VerificationMode.BASIC_CHECKS) if not save_infos else VerificationMode.ALL_CHECKS
   2584 )
   2586 # Create a dataset builder
-> 2587 builder_instance = load_dataset_builder(
   2588     path=path,
   2589     name=name,
   2590     data_dir=data_dir,
   2591     data_files=data_files,
   2592     cache_dir=cache_dir,
   2593     features=features,
   2594     download_config=download_config,
   2595     download_mode=download_mode,
   2596     revision=revision,
   2597     token=token,
   2598     storage_options=storage_options,
   2599     trust_remote_code=trust_remote_code,
   2600     _require_default_config_name=name is None,
   2601     **config_kwargs,
   2602 )
   2604 # Return iterable dataset in case of streaming
   2605 if streaming:

File ~/Softwares/anaconda3/envs/pyg-dev/lib/python3.9/site-packages/datasets/load.py:2259, in load_dataset_builder(path, name, data_dir, data_files, cache_dir, features, download_config, download_mode, revision, token, use_auth_token, storage_options, trust_remote_code, _require_default_config_name, **config_kwargs)
   2257 download_config = download_config.copy() if download_config else DownloadConfig()
   2258 download_config.storage_options.update(storage_options)
-> 2259 dataset_module = dataset_module_factory(
   2260     path,
   2261     revision=revision,
   2262     download_config=download_config,
   2263     download_mode=download_mode,
   2264     data_dir=data_dir,
   2265     data_files=data_files,
   2266     cache_dir=cache_dir,
   2267     trust_remote_code=trust_remote_code,
   2268     _require_default_config_name=_require_default_config_name,
   2269     _require_custom_configs=bool(config_kwargs),
   2270 )
   2271 # Get dataset builder class from the processing script
   2272 builder_kwargs = dataset_module.builder_kwargs

File ~/Softwares/anaconda3/envs/pyg-dev/lib/python3.9/site-packages/datasets/load.py:1904, in dataset_module_factory(path, revision, download_config, download_mode, dynamic_modules_path, data_dir, data_files, cache_dir, trust_remote_code, _require_default_config_name, _require_custom_configs, **download_kwargs)
   1902     raise ConnectionError(f"Couldn't reach the Hugging Face Hub for dataset '{path}': {e1}") from None
   1903 if isinstance(e1, (DataFilesNotFoundError, DatasetNotFoundError, EmptyDatasetError)):
-> 1904     raise e1 from None
   1905 if isinstance(e1, FileNotFoundError):
   1906     raise FileNotFoundError(
   1907         f"Couldn't find a dataset script at {relative_to_absolute_path(combined_path)} or any data file in the same directory. "
   1908         f"Couldn't find '{path}' on the Hugging Face Hub either: {type(e1).__name__}: {e1}"
   1909     ) from None

File ~/Softwares/anaconda3/envs/pyg-dev/lib/python3.9/site-packages/datasets/load.py:1885, in dataset_module_factory(path, revision, download_config, download_mode, dynamic_modules_path, data_dir, data_files, cache_dir, trust_remote_code, _require_default_config_name, _require_custom_configs, **download_kwargs)
   1876     return HubDatasetModuleFactoryWithScript(
   1877         path,
   1878         revision=revision,
   (...)
   1882         trust_remote_code=trust_remote_code,
   1883     ).get_module()
   1884 else:
-> 1885     return HubDatasetModuleFactoryWithoutScript(
   1886         path,
   1887         revision=revision,
   1888         data_dir=data_dir,
   1889         data_files=data_files,
   1890         download_config=download_config,
   1891         download_mode=download_mode,
   1892     ).get_module()
   1893 except Exception as e1:
   1894     # All the attempts failed, before raising the error we should check if the module is already cached
   1895     try:

File ~/Softwares/anaconda3/envs/pyg-dev/lib/python3.9/site-packages/datasets/load.py:1270, in HubDatasetModuleFactoryWithoutScript.get_module(self)
   1263 patterns = get_data_patterns(base_path, download_config=self.download_config)
   1264 data_files = DataFilesDict.from_patterns(
   1265     patterns,
   1266     base_path=base_path,
   1267     allowed_extensions=ALL_ALLOWED_EXTENSIONS,
   1268     download_config=self.download_config,
   1269 )
-> 1270 module_name, default_builder_kwargs = infer_module_for_data_files(
   1271     data_files=data_files,
   1272     path=self.name,
   1273     download_config=self.download_config,
   1274 )
   1275 data_files = data_files.filter_extensions(_MODULE_TO_EXTENSIONS[module_name])
   1276 # Collect metadata files if the module supports them

File ~/Softwares/anaconda3/envs/pyg-dev/lib/python3.9/site-packages/datasets/load.py:597, in infer_module_for_data_files(data_files, path, download_config)
   595     raise ValueError(f"Couldn't infer the same data file format for all splits. Got {split_modules}")
   596 if not module_name:
--> 597     raise DataFilesNotFoundError("No (supported) data files found" + (f" in {path}" if path else ""))
   598 return module_name, default_builder_kwargs

DataFilesNotFoundError: No (supported) data files found in OpenMol/PubChemSFT
```

### Environment info
```
- `datasets` version: 3.1.0
- Platform: Linux-5.15.0-125-generic-x86_64-with-glibc2.31
- Python version: 3.9.18
- `huggingface_hub` version: 0.25.2
- PyArrow version: 18.0.0
- Pandas version: 2.0.3
- `fsspec` version: 2023.9.2
```
{ "avatar_url": "https://avatars.githubusercontent.com/u/17878022?v=4", "events_url": "https://api.github.com/users/xnuohz/events{/privacy}", "followers_url": "https://api.github.com/users/xnuohz/followers", "following_url": "https://api.github.com/users/xnuohz/following{/other_user}", "gists_url": "https://api.github.com/users/xnuohz/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/xnuohz", "id": 17878022, "login": "xnuohz", "node_id": "MDQ6VXNlcjE3ODc4MDIy", "organizations_url": "https://api.github.com/users/xnuohz/orgs", "received_events_url": "https://api.github.com/users/xnuohz/received_events", "repos_url": "https://api.github.com/users/xnuohz/repos", "site_admin": false, "starred_url": "https://api.github.com/users/xnuohz/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/xnuohz/subscriptions", "type": "User", "url": "https://api.github.com/users/xnuohz", "user_view_type": "public" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/7292/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/7292/timeline
null
completed
null
null
60.974444
396
https://api.github.com/repos/huggingface/datasets/issues/7289
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/7289/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/7289/comments
https://api.github.com/repos/huggingface/datasets/issues/7289/events
https://github.com/huggingface/datasets/issues/7289
2,648,019,507
I_kwDODunzps6d1ZIz
7,289
Dataset viewer displays wrong statistics
{ "avatar_url": "https://avatars.githubusercontent.com/u/3585459?v=4", "events_url": "https://api.github.com/users/speedcell4/events{/privacy}", "followers_url": "https://api.github.com/users/speedcell4/followers", "following_url": "https://api.github.com/users/speedcell4/following{/other_user}", "gists_url": "https://api.github.com/users/speedcell4/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/speedcell4", "id": 3585459, "login": "speedcell4", "node_id": "MDQ6VXNlcjM1ODU0NTk=", "organizations_url": "https://api.github.com/users/speedcell4/orgs", "received_events_url": "https://api.github.com/users/speedcell4/received_events", "repos_url": "https://api.github.com/users/speedcell4/repos", "site_admin": false, "starred_url": "https://api.github.com/users/speedcell4/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/speedcell4/subscriptions", "type": "User", "url": "https://api.github.com/users/speedcell4", "user_view_type": "public" }
[]
closed
false
null
[]
null
[ "i think this issue is more for https://github.com/huggingface/dataset-viewer" ]
2024-11-11T03:29:27Z
2024-11-13T13:02:25Z
2024-11-13T13:02:25Z
NONE
null
null
{ "completed": 0, "percent_completed": 0, "total": 0 }
### Describe the bug
In [my dataset](https://huggingface.co/datasets/speedcell4/opus-unigram2), there is a column called `lang2` with 94 different classes in total, but the viewer says there are only 83 values. This issue only arises in the `train` split. The total number of values is also 94 in the `test` and `dev` splits, where the viewer reports the correct number.

<img width="177" alt="image" src="https://github.com/user-attachments/assets/78d76ef2-fe0e-4fa3-85e0-fb2552813d1c">

### Steps to reproduce the bug
```python3
from datasets import load_dataset

ds = load_dataset('speedcell4/opus-unigram2').unique('lang2')
for key, lang2 in ds.items():
    print(key, len(lang2))
```
This script returns the following, showing that the `train` split has 94 values in the `lang2` column:
```
train 94
dev 94
test 94
zero 5
```

### Expected behavior
94 in the viewer.

### Environment info
```
Collecting environment information...
PyTorch version: 2.4.1+cu121
Is debug build: False
CUDA used to build PyTorch: 12.1
ROCM used to build PyTorch: N/A

OS: CentOS Linux release 8.2.2004 (Core) (x86_64)
GCC version: (GCC) 8.3.1 20191121 (Red Hat 8.3.1-5)
Clang version: Could not collect
CMake version: version 3.11.4
Libc version: glibc-2.28

Python version: 3.9.20 (main, Oct 3 2024, 07:27:41) [GCC 11.2.0] (64-bit runtime)
Python platform: Linux-4.18.0-193.28.1.el8_2.x86_64-x86_64-with-glibc2.28
Is CUDA available: True
CUDA runtime version: 12.2.140
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration:
GPU 0: NVIDIA A100-SXM4-40GB
GPU 1: NVIDIA A100-SXM4-40GB
GPU 2: NVIDIA A100-SXM4-40GB
GPU 3: NVIDIA A100-SXM4-40GB
GPU 4: NVIDIA A100-SXM4-40GB
GPU 5: NVIDIA A100-SXM4-40GB
GPU 6: NVIDIA A100-SXM4-40GB
GPU 7: NVIDIA A100-SXM4-40GB

Nvidia driver version: 525.85.05
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True

CPU:
Architecture:        x86_64
CPU op-mode(s):      32-bit, 64-bit
Byte Order:          Little Endian
CPU(s):              64
On-line CPU(s) list: 0-63
Thread(s) per core:  1
Core(s) per socket:  32
Socket(s):           2
NUMA node(s):        4
Vendor ID:           AuthenticAMD
CPU family:          23
Model:               49
Model name:          AMD EPYC 7542 32-Core Processor
Stepping:            0
CPU MHz:             3389.114
BogoMIPS:            5789.40
Virtualization:      AMD-V
L1d cache:           32K
L1i cache:           32K
L2 cache:            512K
L3 cache:            16384K
NUMA node0 CPU(s):   0-15
NUMA node1 CPU(s):   16-31
NUMA node2 CPU(s):   32-47
NUMA node3 CPU(s):   48-63
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ht syscall nx mmxext fxsr_opt pdpe1gb rdtscp lm constant_tsc rep_good nopl nonstop_tsc cpuid extd_apicid aperfmperf pni pclmulqdq monitor ssse3 fma cx16 sse4_1 sse4_2 movbe popcnt aes xsave avx f16c rdrand lahf_lm cmp_legacy svm extapic cr8_legacy abm sse4a misalignsse 3dnowprefetch osvw ibs skinit wdt tce topoext perfctr_core perfctr_nb bpext perfctr_llc mwaitx cpb cat_l3 cdp_l3 hw_pstate ssbd mba ibrs ibpb stibp vmmcall fsgsbase bmi1 avx2 smep bmi2 cqm rdt_a rdseed adx smap clflushopt clwb sha_ni xsaveopt xsavec xgetbv1 xsaves cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local clzero irperf xsaveerptr wbnoinvd arat npt lbrv svm_lock nrip_save tsc_scale vmcb_clean flushbyasid decodeassists pausefilter pfthreshold avic v_vmsave_vmload vgif umip rdpid overflow_recov succor smca

Versions of relevant libraries:
[pip3] numpy==1.26.4
[pip3] torch==2.4.1+cu121
[pip3] torchaudio==2.4.1+cu121
[pip3] torchdevice==0.1.1
[pip3] torchglyph==0.3.2
[pip3] torchmetrics==1.5.0
[pip3] torchrua==0.5.1
[pip3] torchvision==0.19.1+cu121
[pip3] triton==3.0.0
[pip3] datasets==3.0.1
[conda] numpy        1.26.4       pypi_0 pypi
[conda] torch        2.4.1+cu121  pypi_0 pypi
[conda] torchaudio   2.4.1+cu121  pypi_0 pypi
[conda] torchdevice  0.1.1        pypi_0 pypi
[conda] torchglyph   0.3.2        pypi_0 pypi
[conda] torchmetrics 1.5.0        pypi_0 pypi
[conda] torchrua     0.5.1        pypi_0 pypi
[conda] torchvision  0.19.1+cu121 pypi_0 pypi
[conda] triton       3.0.0        pypi_0 pypi
```
{ "avatar_url": "https://avatars.githubusercontent.com/u/3585459?v=4", "events_url": "https://api.github.com/users/speedcell4/events{/privacy}", "followers_url": "https://api.github.com/users/speedcell4/followers", "following_url": "https://api.github.com/users/speedcell4/following{/other_user}", "gists_url": "https://api.github.com/users/speedcell4/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/speedcell4", "id": 3585459, "login": "speedcell4", "node_id": "MDQ6VXNlcjM1ODU0NTk=", "organizations_url": "https://api.github.com/users/speedcell4/orgs", "received_events_url": "https://api.github.com/users/speedcell4/received_events", "repos_url": "https://api.github.com/users/speedcell4/repos", "site_admin": false, "starred_url": "https://api.github.com/users/speedcell4/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/speedcell4/subscriptions", "type": "User", "url": "https://api.github.com/users/speedcell4", "user_view_type": "public" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/7289/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/7289/timeline
null
completed
null
null
57.549444
399
https://api.github.com/repos/huggingface/datasets/issues/7286
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/7286/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/7286/comments
https://api.github.com/repos/huggingface/datasets/issues/7286/events
https://github.com/huggingface/datasets/issues/7286
2,645,350,151
I_kwDODunzps6drNcH
7,286
Concurrent loading in `load_from_disk` - `num_proc` as a param
{ "avatar_url": "https://avatars.githubusercontent.com/u/5240449?v=4", "events_url": "https://api.github.com/users/unography/events{/privacy}", "followers_url": "https://api.github.com/users/unography/followers", "following_url": "https://api.github.com/users/unography/following{/other_user}", "gists_url": "https://api.github.com/users/unography/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/unography", "id": 5240449, "login": "unography", "node_id": "MDQ6VXNlcjUyNDA0NDk=", "organizations_url": "https://api.github.com/users/unography/orgs", "received_events_url": "https://api.github.com/users/unography/received_events", "repos_url": "https://api.github.com/users/unography/repos", "site_admin": false, "starred_url": "https://api.github.com/users/unography/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/unography/subscriptions", "type": "User", "url": "https://api.github.com/users/unography", "user_view_type": "public" }
[ { "color": "a2eeef", "default": true, "description": "New feature or request", "id": 1935892871, "name": "enhancement", "node_id": "MDU6TGFiZWwxOTM1ODkyODcx", "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement" } ]
closed
false
null
[]
null
[]
2024-11-08T23:21:40Z
2024-11-09T16:14:37Z
2024-11-09T16:14:37Z
NONE
null
null
{ "completed": 0, "percent_completed": 0, "total": 0 }
### Feature request https://github.com/huggingface/datasets/pull/6464 mentions a `num_proc` param for loading a dataset from disk, but I can't find it anywhere in the documentation or the code. ### Motivation Make loading large datasets from disk faster. ### Your contribution Happy to contribute if given pointers.
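A minimal sketch of the requested call — hypothetical, since per the issue the `num_proc` parameter is mentioned in PR #6464 but is not a documented argument of `load_from_disk`:

```python
from datasets import load_from_disk

# hypothetical signature from the feature request; `num_proc` is the
# requested (not yet documented) argument for loading shards concurrently
ds = load_from_disk("/path/to/saved_dataset", num_proc=8)
```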
{ "avatar_url": "https://avatars.githubusercontent.com/u/5240449?v=4", "events_url": "https://api.github.com/users/unography/events{/privacy}", "followers_url": "https://api.github.com/users/unography/followers", "following_url": "https://api.github.com/users/unography/following{/other_user}", "gists_url": "https://api.github.com/users/unography/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/unography", "id": 5240449, "login": "unography", "node_id": "MDQ6VXNlcjUyNDA0NDk=", "organizations_url": "https://api.github.com/users/unography/orgs", "received_events_url": "https://api.github.com/users/unography/received_events", "repos_url": "https://api.github.com/users/unography/repos", "site_admin": false, "starred_url": "https://api.github.com/users/unography/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/unography/subscriptions", "type": "User", "url": "https://api.github.com/users/unography", "user_view_type": "public" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/7286/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/7286/timeline
null
not_planned
null
null
16.8825
402
https://api.github.com/repos/huggingface/datasets/issues/7266
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/7266/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/7266/comments
https://api.github.com/repos/huggingface/datasets/issues/7266/events
https://github.com/huggingface/datasets/issues/7266
2,624,666,087
I_kwDODunzps6ccTnn
7,266
The dataset viewer should be available soon. Please retry later.
{ "avatar_url": "https://avatars.githubusercontent.com/u/39821659?v=4", "events_url": "https://api.github.com/users/viiika/events{/privacy}", "followers_url": "https://api.github.com/users/viiika/followers", "following_url": "https://api.github.com/users/viiika/following{/other_user}", "gists_url": "https://api.github.com/users/viiika/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/viiika", "id": 39821659, "login": "viiika", "node_id": "MDQ6VXNlcjM5ODIxNjU5", "organizations_url": "https://api.github.com/users/viiika/orgs", "received_events_url": "https://api.github.com/users/viiika/received_events", "repos_url": "https://api.github.com/users/viiika/repos", "site_admin": false, "starred_url": "https://api.github.com/users/viiika/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/viiika/subscriptions", "type": "User", "url": "https://api.github.com/users/viiika", "user_view_type": "public" }
[]
closed
false
null
[]
null
[ "Waiting is all you need. 10 hours later, it works." ]
2024-10-30T16:32:00Z
2024-10-31T03:48:11Z
2024-10-31T03:48:10Z
NONE
null
null
{ "completed": 0, "percent_completed": 0, "total": 0 }
### Describe the bug After waiting for 2 hours, the dataset page still shows ``The dataset viewer should be available soon. Please retry later.'' ### Steps to reproduce the bug dataset link: https://huggingface.co/datasets/BryanW/HI_EDIT ### Expected behavior The dataset viewer is displayed. ### Environment info NA
{ "avatar_url": "https://avatars.githubusercontent.com/u/39821659?v=4", "events_url": "https://api.github.com/users/viiika/events{/privacy}", "followers_url": "https://api.github.com/users/viiika/followers", "following_url": "https://api.github.com/users/viiika/following{/other_user}", "gists_url": "https://api.github.com/users/viiika/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/viiika", "id": 39821659, "login": "viiika", "node_id": "MDQ6VXNlcjM5ODIxNjU5", "organizations_url": "https://api.github.com/users/viiika/orgs", "received_events_url": "https://api.github.com/users/viiika/received_events", "repos_url": "https://api.github.com/users/viiika/repos", "site_admin": false, "starred_url": "https://api.github.com/users/viiika/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/viiika/subscriptions", "type": "User", "url": "https://api.github.com/users/viiika", "user_view_type": "public" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/7266/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/7266/timeline
null
completed
null
null
11.269444
422
https://api.github.com/repos/huggingface/datasets/issues/7241
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/7241/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/7241/comments
https://api.github.com/repos/huggingface/datasets/issues/7241/events
https://github.com/huggingface/datasets/issues/7241
2,599,899,156
I_kwDODunzps6a91AU
7,241
`push_to_hub` overwrite argument
{ "avatar_url": "https://avatars.githubusercontent.com/u/60838378?v=4", "events_url": "https://api.github.com/users/ceferisbarov/events{/privacy}", "followers_url": "https://api.github.com/users/ceferisbarov/followers", "following_url": "https://api.github.com/users/ceferisbarov/following{/other_user}", "gists_url": "https://api.github.com/users/ceferisbarov/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/ceferisbarov", "id": 60838378, "login": "ceferisbarov", "node_id": "MDQ6VXNlcjYwODM4Mzc4", "organizations_url": "https://api.github.com/users/ceferisbarov/orgs", "received_events_url": "https://api.github.com/users/ceferisbarov/received_events", "repos_url": "https://api.github.com/users/ceferisbarov/repos", "site_admin": false, "starred_url": "https://api.github.com/users/ceferisbarov/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ceferisbarov/subscriptions", "type": "User", "url": "https://api.github.com/users/ceferisbarov", "user_view_type": "public" }
[ { "color": "a2eeef", "default": true, "description": "New feature or request", "id": 1935892871, "name": "enhancement", "node_id": "MDU6TGFiZWwxOTM1ODkyODcx", "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement" } ]
closed
false
null
[]
null
[ "Hi ! Do you mean deleting all the files ? or erasing the repository git history before push_to_hub ?", "Hi! I meant the latter.", "I don't think there is a `huggingface_hub` utility to erase the git history, cc @Wauplin maybe ?", "What is the goal exactly of deleting all the git history without deleting the repo? ", "You can use [`super_squash_commit`](https://huggingface.co/docs/huggingface_hub/main/en/package_reference/hf_api#huggingface_hub.HfApi.super_squash_history) to squash all the commits into a single one, hence deleting the git history. This is not exactly what you asked for since it squashes the commits for a specific revision (example: \"all commits on main\"). This means that if other branches exists, they are kept the same. Also if some PRs are already opened on the repo, they will become unmergeable since the commits will have diverted.", "So the solution is:\r\n\r\n```python\r\nfrom huggingface_hub import HfApi\r\nrepo_id = \"username/dataset_name\"\r\nds.push_to_hub(repo_id)\r\nHfApi().super_squash_commit(repo_id)\r\n```\r\n\r\nThis way you erase previous git history to end up with only 1 commit containing your dataset.\r\nStill, I'd be curious why it's important in your case. Is it to save storage space ? or to disallow loading old versions of the data ?", "Thanks, everyone! I am building a new dataset and playing around with column names, splits, etc. Sometimes I push to the hub to share it with other teammates, I don't want those variations to be part of the repo. Deleting the repo from the website takes a little time, but it also loses repo settings that I have set, since I always set it to public with manually approved requests.\r\n\r\nBTW, I had to write `HfApi().super_squash_history(repo_id, repo_type=\"dataset\")`, but otherwise it works.", "@ceferisbarov just to let you know, recreating a gated repo + granting access to your teammates is something that you can automate with something like this (not fully tested but should work):\r\n\r\n```py\r\nfrom huggingface_hub import HfApi\r\n\r\napi = HfApi()\r\napi.delete_repo(repo_id, repo_type=\"dataset\", missing_ok=True)\r\napi.create_repo(repo_id, repo_type=\"dataset\", private=False)\r\napi.update_repo_settings(repo_id, repo_type=\"dataset\", gated=\"manual\")\r\nfor user in [\"user1\", \"user2\"] # list of teammates\r\n api.grant_access(repo_id, user, repo_type=\"dataset\")\r\n```\r\n\r\nI think it'd be a better solution than squashing commits (which is more of a hack), typically if you are using the dataset viewer.", "This is great, @Wauplin. If we can achieve this with HfApi, then we probably don't need to add another parameter to push_to_hub. I am closing the issue." ]
2024-10-20T03:23:26Z
2024-10-24T17:39:08Z
2024-10-24T17:39:08Z
NONE
null
null
{ "completed": 0, "percent_completed": 0, "total": 0 }
### Feature request Add an `overwrite` argument to the `push_to_hub` method. ### Motivation I want to overwrite a repo without deleting it on Hugging Face. Is this possible? I couldn't find anything in the documentation or tutorials. ### Your contribution I can create a PR.
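Assembling the workaround from the thread above into one runnable sketch — `ds` is a stand-in `datasets.Dataset`, and the method name follows the thread's correction (`super_squash_history`, with `repo_type="dataset"` required for dataset repos):

```python
from datasets import Dataset
from huggingface_hub import HfApi

ds = Dataset.from_dict({"text": ["a", "b"]})  # stand-in for the real dataset
repo_id = "username/dataset_name"

ds.push_to_hub(repo_id)
# squash all commits on main into a single one, erasing the previous history
HfApi().super_squash_history(repo_id, repo_type="dataset")
```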
{ "avatar_url": "https://avatars.githubusercontent.com/u/60838378?v=4", "events_url": "https://api.github.com/users/ceferisbarov/events{/privacy}", "followers_url": "https://api.github.com/users/ceferisbarov/followers", "following_url": "https://api.github.com/users/ceferisbarov/following{/other_user}", "gists_url": "https://api.github.com/users/ceferisbarov/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/ceferisbarov", "id": 60838378, "login": "ceferisbarov", "node_id": "MDQ6VXNlcjYwODM4Mzc4", "organizations_url": "https://api.github.com/users/ceferisbarov/orgs", "received_events_url": "https://api.github.com/users/ceferisbarov/received_events", "repos_url": "https://api.github.com/users/ceferisbarov/repos", "site_admin": false, "starred_url": "https://api.github.com/users/ceferisbarov/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ceferisbarov/subscriptions", "type": "User", "url": "https://api.github.com/users/ceferisbarov", "user_view_type": "public" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/7241/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/7241/timeline
null
completed
null
null
110.261667
446
https://api.github.com/repos/huggingface/datasets/issues/7208
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/7208/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/7208/comments
https://api.github.com/repos/huggingface/datasets/issues/7208/events
https://github.com/huggingface/datasets/issues/7208
2,575,484,256
I_kwDODunzps6ZgsVg
7,208
Iterable dataset.filter should not override features
{ "avatar_url": "https://avatars.githubusercontent.com/u/5719745?v=4", "events_url": "https://api.github.com/users/alex-hh/events{/privacy}", "followers_url": "https://api.github.com/users/alex-hh/followers", "following_url": "https://api.github.com/users/alex-hh/following{/other_user}", "gists_url": "https://api.github.com/users/alex-hh/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/alex-hh", "id": 5719745, "login": "alex-hh", "node_id": "MDQ6VXNlcjU3MTk3NDU=", "organizations_url": "https://api.github.com/users/alex-hh/orgs", "received_events_url": "https://api.github.com/users/alex-hh/received_events", "repos_url": "https://api.github.com/users/alex-hh/repos", "site_admin": false, "starred_url": "https://api.github.com/users/alex-hh/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/alex-hh/subscriptions", "type": "User", "url": "https://api.github.com/users/alex-hh", "user_view_type": "public" }
[]
closed
false
null
[]
null
[ "closed by https://github.com/huggingface/datasets/pull/7209, thanks @alex-hh !" ]
2024-10-09T10:23:45Z
2024-10-09T16:08:46Z
2024-10-09T16:08:45Z
CONTRIBUTOR
null
null
{ "completed": 0, "percent_completed": 0, "total": 0 }
### Describe the bug When calling `filter` on an iterable dataset, the features get set to `None`. ### Steps to reproduce the bug
```python
import numpy as np
from datasets import Dataset, Features, Array3D

features = Features(**{"array0": Array3D((None, 10, 10), dtype="float32"), "array1": Array3D((None, 10, 10), dtype="float32")})
dataset = Dataset.from_dict({f"array{i}": [np.zeros((x, 10, 10), dtype=np.float32) for x in [2000, 1000] * 25] for i in range(2)}, features=features)
ds = dataset.to_iterable_dataset()
orig_column_names = ds.column_names
ds = ds.filter(lambda x: True)
assert ds.column_names == orig_column_names
```
### Expected behavior Filter should preserve features information. ### Environment info 3.0.2
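Until the fix landed in #7209, a possible interim workaround was to re-attach the known schema after filtering — a minimal sketch, assuming `IterableDataset.cast` re-applies a `Features` object (this relies on `cast`, not on any fix to `filter` itself):

```python
import numpy as np
from datasets import Array3D, Dataset, Features

features = Features(array0=Array3D((None, 10, 10), dtype="float32"))
dataset = Dataset.from_dict(
    {"array0": [np.zeros((2, 10, 10), dtype=np.float32)]}, features=features
)

ds = dataset.to_iterable_dataset()
# filter drops the features (the bug above); cast re-attaches the known schema
ds = ds.filter(lambda x: True).cast(features)
assert ds.features == features
```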
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq", "user_view_type": "public" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/7208/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/7208/timeline
null
completed
null
null
5.75
474
https://api.github.com/repos/huggingface/datasets/issues/7194
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/7194/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/7194/comments
https://api.github.com/repos/huggingface/datasets/issues/7194/events
https://github.com/huggingface/datasets/issues/7194
2,563,364,199
I_kwDODunzps6YydVn
7,194
datasets.exceptions.DatasetNotFoundError for private dataset
{ "avatar_url": "https://avatars.githubusercontent.com/u/20212179?v=4", "events_url": "https://api.github.com/users/kdutia/events{/privacy}", "followers_url": "https://api.github.com/users/kdutia/followers", "following_url": "https://api.github.com/users/kdutia/following{/other_user}", "gists_url": "https://api.github.com/users/kdutia/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/kdutia", "id": 20212179, "login": "kdutia", "node_id": "MDQ6VXNlcjIwMjEyMTc5", "organizations_url": "https://api.github.com/users/kdutia/orgs", "received_events_url": "https://api.github.com/users/kdutia/received_events", "repos_url": "https://api.github.com/users/kdutia/repos", "site_admin": false, "starred_url": "https://api.github.com/users/kdutia/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/kdutia/subscriptions", "type": "User", "url": "https://api.github.com/users/kdutia", "user_view_type": "public" }
[]
closed
false
null
[]
null
[ "Actually there is no such dataset available, that is why you are getting that error.", "Fixed with @kdutia in Slack chat. Generating a new token fixed this issue. " ]
2024-10-03T07:49:36Z
2024-10-03T10:09:28Z
2024-10-03T10:09:28Z
NONE
null
null
{ "completed": 0, "percent_completed": 0, "total": 0 }
### Describe the bug The following Python code tries to download a private dataset and fails with the error `datasets.exceptions.DatasetNotFoundError: Dataset 'ClimatePolicyRadar/all-document-text-data-weekly' doesn't exist on the Hub or cannot be accessed.`. Downloading a public dataset doesn't work. ``` py from datasets import load_dataset _ = load_dataset("ClimatePolicyRadar/all-document-text-data-weekly") ``` This seems to be just an issue with my machine config as the code above works with a colleague's machine. So far I have tried: - logging back out and in from the Huggingface CLI using `huggingface-cli logout` - manually removing the token cache at `/Users/kalyan/.cache/huggingface/token` (found using `huggingface-cli env`) - manually passing a token in `load_dataset` My output of `huggingface-cli whoami`: ``` kdutia orgs: ClimatePolicyRadar ``` ### Steps to reproduce the bug ``` python Python 3.12.2 (main, Feb 6 2024, 20:19:44) [Clang 15.0.0 (clang-1500.1.0.2.5)] on darwin Type "help", "copyright", "credits" or "license" for more information. >>> from datasets import load_dataset >>> _ = load_dataset("ClimatePolicyRadar/all-document-text-data-weekly") Traceback (most recent call last): File "<stdin>", line 1, in <module> File "/Users/kalyan/Library/Caches/pypoetry/virtualenvs/open-data-cnKQNmjn-py3.12/lib/python3.12/site-packages/datasets/load.py", line 2074, in load_dataset builder_instance = load_dataset_builder( ^^^^^^^^^^^^^^^^^^^^^ File "/Users/kalyan/Library/Caches/pypoetry/virtualenvs/open-data-cnKQNmjn-py3.12/lib/python3.12/site-packages/datasets/load.py", line 1795, in load_dataset_builder dataset_module = dataset_module_factory( ^^^^^^^^^^^^^^^^^^^^^^^ File "/Users/kalyan/Library/Caches/pypoetry/virtualenvs/open-data-cnKQNmjn-py3.12/lib/python3.12/site-packages/datasets/load.py", line 1659, in dataset_module_factory raise e1 from None File "/Users/kalyan/Library/Caches/pypoetry/virtualenvs/open-data-cnKQNmjn-py3.12/lib/python3.12/site-packages/datasets/load.py", line 1597, in dataset_module_factory raise DatasetNotFoundError(f"Dataset '{path}' doesn't exist on the Hub or cannot be accessed.") from e datasets.exceptions.DatasetNotFoundError: Dataset 'ClimatePolicyRadar/all-document-text-data-weekly' doesn't exist on the Hub or cannot be accessed. >>> ``` ### Expected behavior The dataset downloads successfully. 
### Environment info From `huggingface-cli env`: ``` - huggingface_hub version: 0.25.1 - Platform: macOS-14.2.1-arm64-arm-64bit - Python version: 3.12.2 - Running in iPython ?: No - Running in notebook ?: No - Running in Google Colab ?: No - Running in Google Colab Enterprise ?: No - Token path ?: /Users/kalyan/.cache/huggingface/token - Has saved token ?: True - Who am I ?: kdutia - Configured git credential helpers: osxkeychain - FastAI: N/A - Tensorflow: N/A - Torch: N/A - Jinja2: 3.1.4 - Graphviz: N/A - keras: N/A - Pydot: N/A - Pillow: N/A - hf_transfer: N/A - gradio: N/A - tensorboard: N/A - numpy: 2.1.1 - pydantic: N/A - aiohttp: 3.10.8 - ENDPOINT: https://huggingface.co - HF_HUB_CACHE: /Users/kalyan/.cache/huggingface/hub - HF_ASSETS_CACHE: /Users/kalyan/.cache/huggingface/assets - HF_TOKEN_PATH: /Users/kalyan/.cache/huggingface/token - HF_HUB_OFFLINE: False - HF_HUB_DISABLE_TELEMETRY: False - HF_HUB_DISABLE_PROGRESS_BARS: None - HF_HUB_DISABLE_SYMLINKS_WARNING: False - HF_HUB_DISABLE_EXPERIMENTAL_WARNING: False - HF_HUB_DISABLE_IMPLICIT_TOKEN: False - HF_HUB_ENABLE_HF_TRANSFER: False - HF_HUB_ETAG_TIMEOUT: 10 - HF_HUB_DOWNLOAD_TIMEOUT: 10 ``` from `datasets-cli env`: ``` - `datasets` version: 3.0.1 - Platform: macOS-14.2.1-arm64-arm-64bit - Python version: 3.12.2 - `huggingface_hub` version: 0.25.1 - PyArrow version: 17.0.0 - Pandas version: 2.2.3 - `fsspec` version: 2024.6.1 ```
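Per the resolution in the comments — the cached token was stale, and generating a new one fixed the error — a minimal sketch of re-authenticating before retrying (the token value is a placeholder):

```python
from datasets import load_dataset
from huggingface_hub import login

# log in again with a freshly generated token (placeholder value below)
login(token="hf_xxxxxxxxxxxxxxxxxxxx")
ds = load_dataset("ClimatePolicyRadar/all-document-text-data-weekly")
```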
{ "avatar_url": "https://avatars.githubusercontent.com/u/8995957?v=4", "events_url": "https://api.github.com/users/davanstrien/events{/privacy}", "followers_url": "https://api.github.com/users/davanstrien/followers", "following_url": "https://api.github.com/users/davanstrien/following{/other_user}", "gists_url": "https://api.github.com/users/davanstrien/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/davanstrien", "id": 8995957, "login": "davanstrien", "node_id": "MDQ6VXNlcjg5OTU5NTc=", "organizations_url": "https://api.github.com/users/davanstrien/orgs", "received_events_url": "https://api.github.com/users/davanstrien/received_events", "repos_url": "https://api.github.com/users/davanstrien/repos", "site_admin": false, "starred_url": "https://api.github.com/users/davanstrien/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/davanstrien/subscriptions", "type": "User", "url": "https://api.github.com/users/davanstrien", "user_view_type": "public" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/7194/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/7194/timeline
null
completed
null
null
2.331111
488
https://api.github.com/repos/huggingface/datasets/issues/7192
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/7192/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/7192/comments
https://api.github.com/repos/huggingface/datasets/issues/7192/events
https://github.com/huggingface/datasets/issues/7192
2,562,289,642
I_kwDODunzps6YuW_q
7,192
Add repeat() for iterable datasets
{ "avatar_url": "https://avatars.githubusercontent.com/u/5719745?v=4", "events_url": "https://api.github.com/users/alex-hh/events{/privacy}", "followers_url": "https://api.github.com/users/alex-hh/followers", "following_url": "https://api.github.com/users/alex-hh/following{/other_user}", "gists_url": "https://api.github.com/users/alex-hh/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/alex-hh", "id": 5719745, "login": "alex-hh", "node_id": "MDQ6VXNlcjU3MTk3NDU=", "organizations_url": "https://api.github.com/users/alex-hh/orgs", "received_events_url": "https://api.github.com/users/alex-hh/received_events", "repos_url": "https://api.github.com/users/alex-hh/repos", "site_admin": false, "starred_url": "https://api.github.com/users/alex-hh/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/alex-hh/subscriptions", "type": "User", "url": "https://api.github.com/users/alex-hh", "user_view_type": "public" }
[ { "color": "a2eeef", "default": true, "description": "New feature or request", "id": 1935892871, "name": "enhancement", "node_id": "MDU6TGFiZWwxOTM1ODkyODcx", "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement" } ]
closed
false
null
[]
null
[ "perhaps concatenate_datasets can already be used to achieve almost the same effect? ", "`concatenate_datasets` does the job when there is a finite number of repetitions, but in case of `.repeat()` forever we need a new logic in `iterable_dataset.py`", "done in https://github.com/huggingface/datasets/pull/7198" ]
2024-10-02T17:48:13Z
2025-03-18T10:48:33Z
2025-03-18T10:48:32Z
CONTRIBUTOR
null
null
{ "completed": 0, "percent_completed": 0, "total": 0 }
### Feature request It would be useful to be able to straightforwardly repeat iterable datasets indefinitely, giving the user complete control over when iteration starts and ends. An `IterableDataset.repeat(n)` method could do this automatically. ### Motivation This feature was discussed in https://github.com/huggingface/datasets/issues/7147, and it would remove the need for the hack of interleaving datasets with probability 0 to achieve the same effect. An additional benefit might be simplifying the use of iterable datasets in a distributed setting: if the user can assume that datasets repeat indefinitely, then issues around different numbers of samples appearing on different devices (e.g. https://github.com/huggingface/datasets/issues/6437, https://github.com/huggingface/datasets/issues/6594, https://github.com/huggingface/datasets/issues/6623, https://github.com/huggingface/datasets/issues/6719) can potentially be resolved by simply doing `ids.repeat(None).take(n_samples_per_epoch)`. ### Your contribution I'm not familiar enough with the codebase to assess how straightforward this would be to implement. If it turns out to be very straightforward, I could possibly have a go.
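Before `.repeat()` landed in https://github.com/huggingface/datasets/pull/7198 (per the comments), the indefinite case could be approximated with a plain generator — a minimal sketch, assuming each fresh iteration of an `IterableDataset` restarts its stream:

```python
from itertools import islice
from datasets import Dataset

def repeat_forever(iterable_ds):
    # re-iterate the dataset endlessly; pair with islice/take to bound an epoch
    while True:
        yield from iterable_ds

ids = Dataset.from_dict({"x": [0, 1, 2]}).to_iterable_dataset()
samples = list(islice(repeat_forever(ids), 7))  # 7 samples across epoch boundaries
assert [s["x"] for s in samples] == [0, 1, 2, 0, 1, 2, 0]
```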
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq", "user_view_type": "public" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/7192/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/7192/timeline
null
completed
null
null
4,001.005278
490
https://api.github.com/repos/huggingface/datasets/issues/7186
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/7186/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/7186/comments
https://api.github.com/repos/huggingface/datasets/issues/7186/events
https://github.com/huggingface/datasets/issues/7186
2,560,323,917
I_kwDODunzps6Ym3FN
7,186
pinning `dill<0.3.9` without pinning `multiprocess`
{ "avatar_url": "https://avatars.githubusercontent.com/u/38372682?v=4", "events_url": "https://api.github.com/users/shubhbapna/events{/privacy}", "followers_url": "https://api.github.com/users/shubhbapna/followers", "following_url": "https://api.github.com/users/shubhbapna/following{/other_user}", "gists_url": "https://api.github.com/users/shubhbapna/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/shubhbapna", "id": 38372682, "login": "shubhbapna", "node_id": "MDQ6VXNlcjM4MzcyNjgy", "organizations_url": "https://api.github.com/users/shubhbapna/orgs", "received_events_url": "https://api.github.com/users/shubhbapna/received_events", "repos_url": "https://api.github.com/users/shubhbapna/repos", "site_admin": false, "starred_url": "https://api.github.com/users/shubhbapna/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/shubhbapna/subscriptions", "type": "User", "url": "https://api.github.com/users/shubhbapna", "user_view_type": "public" }
[]
closed
false
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova", "user_view_type": "public" }
[ { "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova", "user_view_type": "public" } ]
null
[]
2024-10-01T22:29:32Z
2024-10-02T06:08:24Z
2024-10-02T06:08:24Z
NONE
null
null
{ "completed": 0, "percent_completed": 0, "total": 0 }
### Describe the bug The [latest `multiprocess` release](https://github.com/uqfoundation/multiprocess/releases/tag/0.70.17) requires `dill>=0.3.9`, which causes issues when installing `datasets` without backtracking during package version resolution. Is it possible to add a pin for `multiprocess`, e.g. `multiprocess<=0.70.16`, so that the `dill` version stays compatible? ### Steps to reproduce the bug NA ### Expected behavior NA ### Environment info NA
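Until such a pin exists upstream, the conflict can be worked around at install time — an illustrative sketch with the versions named in the issue (not an actual change to `datasets`' `setup.py`):

```python
import subprocess
import sys

# pin both packages so the resolver never pulls multiprocess 0.70.17,
# whose dill>=0.3.9 requirement conflicts with datasets' dill<0.3.9 pin
subprocess.check_call([
    sys.executable, "-m", "pip", "install",
    "datasets",
    "dill<0.3.9",
    "multiprocess<=0.70.16",
])
```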
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova", "user_view_type": "public" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/7186/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/7186/timeline
null
completed
null
null
7.647778
496
https://api.github.com/repos/huggingface/datasets/issues/7185
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/7185/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/7185/comments
https://api.github.com/repos/huggingface/datasets/issues/7185/events
https://github.com/huggingface/datasets/issues/7185
2,558,508,748
I_kwDODunzps6Yf77M
7,185
CI benchmarks are broken
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova", "user_view_type": "public" }
[ { "color": "d4c5f9", "default": false, "description": "Maintenance tasks", "id": 4296013012, "name": "maintenance", "node_id": "LA_kwDODunzps8AAAABAA_01A", "url": "https://api.github.com/repos/huggingface/datasets/labels/maintenance" } ]
closed
false
null
[]
null
[ "Fixed by #7205" ]
2024-10-01T08:16:08Z
2024-10-09T16:07:48Z
2024-10-09T16:07:48Z
MEMBER
null
null
{ "completed": 0, "percent_completed": 0, "total": 0 }
Since Aug 30, 2024, CI benchmarks are broken: https://github.com/huggingface/datasets/actions/runs/11108421214/job/30861323975 ``` {"level":"error","message":"Resource not accessible by integration","name":"HttpError","request":{"body":"{\"body\":\"<details>\\n<summary>Show benchmarks</summary>\\n\\nPyArrow==8.0.0\\n\\n<details>\\n<summary>Show updated benchmarks!</summary>\\n\\n### Benchmark: benchmark_array_xd.json\\n\\n| metric | read_batch_formatted_as_numpy after write_array2d | ... "headers":{"accept":"application/vnd.github.v3+json","authorization":"token [REDACTED]","content-type":"application/json; charset=utf-8","user-agent":"octokit-rest.js/18.0.0 octokit-core.js/3.6.0 Node.js/16.20.2 (linux; x64)"},"method":"POST","request":{"agent":{"_events":{},"_eventsCount":2,"cache": ... "response":{"data":{"documentation_url":"https://docs.github.com/rest/issues/comments#create-an-issue-comment","message":"Resource not accessible by integration","status":"403"}, ... "stack":"HttpError: Resource not accessible by integration\n at /usr/lib/node_modules/@dvcorg/cml/node_modules/@octokit/request/dist-node/index.js:86:21\n at processTicksAndRejections (node:internal/process/task_queues:96:5)\n at async Job.doExecute (/usr/lib/node_modules/@dvcorg/cml/node_modules/bottleneck/light.js:405:18)","status":403} ```
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova", "user_view_type": "public" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/7185/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/7185/timeline
null
completed
null
null
199.861111
497
https://api.github.com/repos/huggingface/datasets/issues/7183
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/7183/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/7183/comments
https://api.github.com/repos/huggingface/datasets/issues/7183/events
https://github.com/huggingface/datasets/issues/7183
2,556,789,055
I_kwDODunzps6YZYE_
7,183
CI is broken for deps-latest
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova", "user_view_type": "public" }
[]
closed
false
null
[]
null
[]
2024-09-30T14:02:07Z
2024-09-30T14:38:58Z
2024-09-30T14:38:58Z
MEMBER
null
null
{ "completed": 0, "percent_completed": 0, "total": 0 }
See: https://github.com/huggingface/datasets/actions/runs/11106149906/job/30853879890 ``` =========================== short test summary info ============================ FAILED tests/test_arrow_dataset.py::BaseDatasetTest::test_filter_caching_on_disk - AssertionError: Lists differ: [{'fi[44 chars] {'filename': '/tmp/tmp6xcyyjs4/cache-9533fe2601cd3e48.arrow'}] != [{'fi[44 chars] {'filename': '/tmp/tmp6xcyyjs4/cache-e6e0a8b830976289.arrow'}] First differing element 1: {'filename': '/tmp/tmp6xcyyjs4/cache-9533fe2601cd3e48.arrow'} {'filename': '/tmp/tmp6xcyyjs4/cache-e6e0a8b830976289.arrow'} [{'filename': '/tmp/tmp6xcyyjs4/dataset0.arrow'}, - {'filename': '/tmp/tmp6xcyyjs4/cache-9533fe2601cd3e48.arrow'}] ? ^^^^^ -------- + {'filename': '/tmp/tmp6xcyyjs4/cache-e6e0a8b830976289.arrow'}] ? ++++++++++ ^^ + FAILED tests/test_arrow_dataset.py::BaseDatasetTest::test_map_caching_on_disk - AssertionError: Lists differ: [{'filename': '/tmp/tmp5gxrti_n/cache-e58d327daec8626f.arrow'}] != [{'filename': '/tmp/tmp5gxrti_n/cache-d87234c5763e54a3.arrow'}] First differing element 0: {'filename': '/tmp/tmp5gxrti_n/cache-e58d327daec8626f.arrow'} {'filename': '/tmp/tmp5gxrti_n/cache-d87234c5763e54a3.arrow'} - [{'filename': '/tmp/tmp5gxrti_n/cache-e58d327daec8626f.arrow'}] ? ^^ ----------- + [{'filename': '/tmp/tmp5gxrti_n/cache-d87234c5763e54a3.arrow'}] ? +++++++++++ ^^ FAILED tests/test_fingerprint.py::TokenizersHashTest::test_hash_regex - NameError: name 'log' is not defined FAILED tests/test_fingerprint.py::RecurseHashTest::test_hash_ignores_line_definition_of_function - AssertionError: '52e56ee04ad92499' != '0a4f75cec280f634' - 52e56ee04ad92499 + 0a4f75cec280f634 FAILED tests/test_fingerprint.py::RecurseHashTest::test_hash_ipython_function - AssertionError: 'a6bd2041ca63d6c0' != '517bf36b7eecdef5' - a6bd2041ca63d6c0 + 517bf36b7eecdef5 FAILED tests/test_fingerprint.py::HashingTest::test_hash_tiktoken_encoding - NameError: name 'log' is not defined FAILED tests/test_fingerprint.py::HashingTest::test_hash_torch_compiled_module - NameError: name 'log' is not defined FAILED tests/test_fingerprint.py::HashingTest::test_hash_torch_generator - NameError: name 'log' is not defined FAILED tests/test_fingerprint.py::HashingTest::test_hash_torch_tensor - NameError: name 'log' is not defined FAILED tests/test_fingerprint.py::HashingTest::test_set_doesnt_depend_on_order - NameError: name 'log' is not defined FAILED tests/test_fingerprint.py::HashingTest::test_set_stable - NameError: name 'log' is not defined ERROR tests/test_iterable_dataset.py::test_iterable_dataset_from_file - NameError: name 'log' is not defined = 11 failed, 2850 passed, 3 skipped, 23 warnings, 1 error in 191.06s (0:03:11) = ```
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova", "user_view_type": "public" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/7183/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/7183/timeline
null
completed
null
null
0.614167
499
https://api.github.com/repos/huggingface/datasets/issues/7180
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/7180/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/7180/comments
https://api.github.com/repos/huggingface/datasets/issues/7180/events
https://github.com/huggingface/datasets/issues/7180
2,554,244,750
I_kwDODunzps6YPq6O
7,180
Memory leak when wrapping datasets into PyTorch Dataset without explicit deletion
{ "avatar_url": "https://avatars.githubusercontent.com/u/38123329?v=4", "events_url": "https://api.github.com/users/iamwangyabin/events{/privacy}", "followers_url": "https://api.github.com/users/iamwangyabin/followers", "following_url": "https://api.github.com/users/iamwangyabin/following{/other_user}", "gists_url": "https://api.github.com/users/iamwangyabin/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/iamwangyabin", "id": 38123329, "login": "iamwangyabin", "node_id": "MDQ6VXNlcjM4MTIzMzI5", "organizations_url": "https://api.github.com/users/iamwangyabin/orgs", "received_events_url": "https://api.github.com/users/iamwangyabin/received_events", "repos_url": "https://api.github.com/users/iamwangyabin/repos", "site_admin": false, "starred_url": "https://api.github.com/users/iamwangyabin/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/iamwangyabin/subscriptions", "type": "User", "url": "https://api.github.com/users/iamwangyabin", "user_view_type": "public" }
[]
closed
false
null
[]
null
[ "> I've encountered a memory leak when wrapping the HuggingFace dataset into a PyTorch Dataset. The RAM usage constantly increases during iteration if items are not explicitly deleted after use.\r\n\r\nDatasets are memory mapped so they work like SWAP memory. In particular as long as you have RAM available the data will stay in RAM, and get paged out once your system needs RAM for something else (no OOM).\r\n\r\nrelated: https://github.com/huggingface/datasets/issues/4883" ]
2024-09-28T14:00:47Z
2024-09-30T12:07:56Z
2024-09-30T12:07:56Z
NONE
null
null
{ "completed": 0, "percent_completed": 0, "total": 0 }
### Describe the bug I've encountered a memory leak when wrapping the HuggingFace dataset into a PyTorch Dataset. The RAM usage constantly increases during iteration if items are not explicitly deleted after use. ### Steps to reproduce the bug Steps to reproduce: Create a PyTorch Dataset wrapper for 'nebula/cc12m':
```python
import io

from PIL import Image
from torch.utils.data import Dataset
from torchvision import transforms

from datasets import load_dataset

Image.MAX_IMAGE_PIXELS = None

class CC12M(Dataset):
    def __init__(self, path_or_name='nebula/cc12m', split='train', transform=None, single_caption=True):
        self.raw_dataset = load_dataset(path_or_name)[split]
        if transform is None:
            self.transform = transforms.Compose([
                transforms.Resize((224, 224)),
                transforms.CenterCrop(224),
                transforms.ToTensor(),
                transforms.Normalize(
                    mean=[0.48145466, 0.4578275, 0.40821073],
                    std=[0.26862954, 0.26130258, 0.27577711]
                )
            ])
        else:
            self.transform = transforms.Compose(transform)
        self.single_caption = single_caption
        self.length = len(self.raw_dataset)

    def __len__(self):
        return self.length

    def __getitem__(self, index):
        item = self.raw_dataset[index]
        caption = item['txt']
        with io.BytesIO(item['webp']) as buffer:
            image = Image.open(buffer).convert('RGB')
        if self.transform:
            image = self.transform(image)
        # del item  # Uncomment this line to prevent the memory leak
        return image, caption
```
Iterate through the dataset without the `del item` line in `__getitem__`. Observe RAM usage increasing constantly. Add `del item` at the end of `__getitem__`:
```python
    def __getitem__(self, index):
        item = self.raw_dataset[index]
        caption = item['txt']
        with io.BytesIO(item['webp']) as buffer:
            image = Image.open(buffer).convert('RGB')
        if self.transform:
            image = self.transform(image)
        del item  # This line prevents the memory leak
        return image, caption
```
Iterate through the dataset again and observe that RAM usage remains stable. ### Expected behavior Expected behavior: RAM usage should remain stable during iteration without needing to explicitly delete items. Actual behavior: RAM usage constantly increases unless items are explicitly deleted after use. ### Environment info - `datasets` version: 2.21.0 - Platform: Linux-4.18.0-513.5.1.el8_9.x86_64-x86_64-with-glibc2.28 - Python version: 3.12.4 - `huggingface_hub` version: 0.24.6 - PyArrow version: 17.0.0 - Pandas version: 2.2.2 - `fsspec` version: 2024.6.1
{ "avatar_url": "https://avatars.githubusercontent.com/u/38123329?v=4", "events_url": "https://api.github.com/users/iamwangyabin/events{/privacy}", "followers_url": "https://api.github.com/users/iamwangyabin/followers", "following_url": "https://api.github.com/users/iamwangyabin/following{/other_user}", "gists_url": "https://api.github.com/users/iamwangyabin/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/iamwangyabin", "id": 38123329, "login": "iamwangyabin", "node_id": "MDQ6VXNlcjM4MTIzMzI5", "organizations_url": "https://api.github.com/users/iamwangyabin/orgs", "received_events_url": "https://api.github.com/users/iamwangyabin/received_events", "repos_url": "https://api.github.com/users/iamwangyabin/repos", "site_admin": false, "starred_url": "https://api.github.com/users/iamwangyabin/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/iamwangyabin/subscriptions", "type": "User", "url": "https://api.github.com/users/iamwangyabin", "user_view_type": "public" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/7180/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/7180/timeline
null
completed
null
null
46.119167
502
https://api.github.com/repos/huggingface/datasets/issues/7178
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/7178/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/7178/comments
https://api.github.com/repos/huggingface/datasets/issues/7178/events
https://github.com/huggingface/datasets/issues/7178
2,552,378,330
I_kwDODunzps6YIjPa
7,178
Support Python 3.11
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova", "user_view_type": "public" }
[ { "color": "a2eeef", "default": true, "description": "New feature or request", "id": 1935892871, "name": "enhancement", "node_id": "MDU6TGFiZWwxOTM1ODkyODcx", "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement" } ]
closed
false
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova", "user_view_type": "public" }
[ { "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova", "user_view_type": "public" } ]
null
[]
2024-09-27T08:50:47Z
2024-10-08T16:21:04Z
2024-10-08T16:21:04Z
MEMBER
null
null
{ "completed": 0, "percent_completed": 0, "total": 0 }
Support Python 3.11: https://peps.python.org/pep-0664/
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova", "user_view_type": "public" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/7178/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/7178/timeline
null
completed
null
null
271.504722
504
https://api.github.com/repos/huggingface/datasets/issues/7175
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/7175/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/7175/comments
https://api.github.com/repos/huggingface/datasets/issues/7175/events
https://github.com/huggingface/datasets/issues/7175
2,550,957,337
I_kwDODunzps6YDIUZ
7,175
[FSTimeoutError] load_dataset
{ "avatar_url": "https://avatars.githubusercontent.com/u/53268607?v=4", "events_url": "https://api.github.com/users/cosmo3769/events{/privacy}", "followers_url": "https://api.github.com/users/cosmo3769/followers", "following_url": "https://api.github.com/users/cosmo3769/following{/other_user}", "gists_url": "https://api.github.com/users/cosmo3769/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/cosmo3769", "id": 53268607, "login": "cosmo3769", "node_id": "MDQ6VXNlcjUzMjY4NjA3", "organizations_url": "https://api.github.com/users/cosmo3769/orgs", "received_events_url": "https://api.github.com/users/cosmo3769/received_events", "repos_url": "https://api.github.com/users/cosmo3769/repos", "site_admin": false, "starred_url": "https://api.github.com/users/cosmo3769/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/cosmo3769/subscriptions", "type": "User", "url": "https://api.github.com/users/cosmo3769", "user_view_type": "public" }
[]
closed
false
null
[]
null
[ "Is this `FSTimeoutError` due to download network issue from remote resource (from where it is being accessed)?", "It seems to happen for all datasets, not just a specific one, and especially for versions after 3.0. (3.0.0, 3.0.1 have this problem)\r\n\r\nI had the same error on a different dataset, but after downgrading to datasets==2.21.0, the problem was solved.", "Same as https://github.com/huggingface/datasets/issues/7164\r\n\r\nThis dataset is made of a python script that downloads data from elsewhere than HF, so availability depends on the original host. Ultimately it would be nice to host the files of this dataset on HF\r\n\r\nin `datasets` <3.0 there were lots of mechanisms that got removed after the decision to make datasets with python loading scripts legacy for security and maintenance reasons (we only do very basic support now)", "@lhoestq Thank you for the clarification! Closing the issue.", "I'm getting this too, and also at 5 minutes. But for `CSTR-Edinburgh/vctk`, so it's not just this dataset, it seems to be a timeout that was introduced and needs to be raised. The progress bar was moving along just fine before the timeout, and I get more or less of it depending on how fast the network is.", "You can change the `aiohttp` timeout from 5min to 1h like this:\r\n\r\n```python\r\nimport datasets, aiohttp\r\ndataset = datasets.load_dataset(\r\n dataset_name,\r\n storage_options={'client_kwargs': {'timeout': aiohttp.ClientTimeout(total=3600)}}\r\n)\r\n```", "@JonasLoos Solution solved a download timeout error I received when downloading `\"HuggingFaceM4/VQAv2\"` 🎉 " ]
2024-09-26T15:42:29Z
2025-02-01T09:09:35Z
2024-09-30T17:28:35Z
NONE
null
null
{ "completed": 0, "percent_completed": 0, "total": 0 }
### Describe the bug When using `load_dataset` to load [HuggingFaceM4/VQAv2](https://huggingface.co/datasets/HuggingFaceM4/VQAv2), I am getting `FSTimeoutError`. ### Error ``` TimeoutError: The above exception was the direct cause of the following exception: FSTimeoutError Traceback (most recent call last) /usr/local/lib/python3.10/dist-packages/fsspec/asyn.py in sync(loop, func, timeout, *args, **kwargs) 99 if isinstance(return_result, asyncio.TimeoutError): 100 # suppress asyncio.TimeoutError, raise FSTimeoutError --> 101 raise FSTimeoutError from return_result 102 elif isinstance(return_result, BaseException): 103 raise return_result FSTimeoutError: ``` It usually fails around 5-6 GB. <img width="847" alt="Screenshot 2024-09-26 at 9 10 19 PM" src="https://github.com/user-attachments/assets/ff91995a-fb55-4de6-8214-94025d6c8470"> ### Steps to reproduce the bug To reproduce it, run this in a Colab notebook: ``` !pip install -q -U datasets from datasets import load_dataset ds = load_dataset('HuggingFaceM4/VQAv2', split="train[:10%]") ``` ### Expected behavior It should download properly. ### Environment info Using a Colab notebook.
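A minimal sketch of the workaround given in the comments above: raising the aiohttp client timeout through `storage_options` so large downloads are not cut off at the 5-minute default. The one-hour value is an arbitrary assumption; tune it to your connection.

```python
import aiohttp
from datasets import load_dataset

# Pass the larger timeout down to the fsspec/aiohttp HTTP client
ds = load_dataset(
    "HuggingFaceM4/VQAv2",
    split="train[:10%]",
    storage_options={"client_kwargs": {"timeout": aiohttp.ClientTimeout(total=3600)}},
)
```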
{ "avatar_url": "https://avatars.githubusercontent.com/u/53268607?v=4", "events_url": "https://api.github.com/users/cosmo3769/events{/privacy}", "followers_url": "https://api.github.com/users/cosmo3769/followers", "following_url": "https://api.github.com/users/cosmo3769/following{/other_user}", "gists_url": "https://api.github.com/users/cosmo3769/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/cosmo3769", "id": 53268607, "login": "cosmo3769", "node_id": "MDQ6VXNlcjUzMjY4NjA3", "organizations_url": "https://api.github.com/users/cosmo3769/orgs", "received_events_url": "https://api.github.com/users/cosmo3769/received_events", "repos_url": "https://api.github.com/users/cosmo3769/repos", "site_admin": false, "starred_url": "https://api.github.com/users/cosmo3769/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/cosmo3769/subscriptions", "type": "User", "url": "https://api.github.com/users/cosmo3769", "user_view_type": "public" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/7175/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/7175/timeline
null
completed
null
null
97.768333
507
https://api.github.com/repos/huggingface/datasets/issues/7171
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/7171/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/7171/comments
https://api.github.com/repos/huggingface/datasets/issues/7171/events
https://github.com/huggingface/datasets/issues/7171
2,549,738,919
I_kwDODunzps6X-e2n
7,171
CI is broken: No solution found when resolving dependencies
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova", "user_view_type": "public" }
[ { "color": "d73a4a", "default": true, "description": "Something isn't working", "id": 1935892857, "name": "bug", "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug" } ]
closed
false
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova", "user_view_type": "public" }
[ { "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova", "user_view_type": "public" } ]
null
[]
2024-09-26T07:24:58Z
2024-09-26T08:05:41Z
2024-09-26T08:05:41Z
MEMBER
null
null
{ "completed": 0, "percent_completed": 0, "total": 0 }
See: https://github.com/huggingface/datasets/actions/runs/11046967444/job/30687294297 ``` Run uv pip install --system -r additional-tests-requirements.txt --no-deps × No solution found when resolving dependencies: ╰─▶ Because the current Python version (3.8.18) does not satisfy Python>=3.9 and torchdata==0.10.0a0+1a98f21 depends on Python>=3.9, we can conclude that torchdata==0.10.0a0+1a98f21 cannot be used. And because only torchdata==0.10.0a0+1a98f21 is available and you require torchdata, we can conclude that your requirements are unsatisfiable. Error: Process completed with exit code 1. ```
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova", "user_view_type": "public" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/7171/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/7171/timeline
null
completed
null
null
0.678611
511
https://api.github.com/repos/huggingface/datasets/issues/7169
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/7169/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/7169/comments
https://api.github.com/repos/huggingface/datasets/issues/7169/events
https://github.com/huggingface/datasets/issues/7169
2,546,894,076
I_kwDODunzps6XzoT8
7,169
JSON lines with missing columns raise CastError
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova", "user_view_type": "public" }
[ { "color": "d73a4a", "default": true, "description": "Something isn't working", "id": 1935892857, "name": "bug", "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug" } ]
closed
false
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova", "user_view_type": "public" }
[ { "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova", "user_view_type": "public" } ]
null
[]
2024-09-25T04:43:28Z
2024-09-26T06:42:08Z
2024-09-26T06:42:08Z
MEMBER
null
null
{ "completed": 0, "percent_completed": 0, "total": 0 }
JSON lines with missing columns raise CastError: > CastError: Couldn't cast ... to ... because column names don't match Related to: - #7159 - #7161
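A hedged sketch of the failing cast, using `datasets.table.table_cast` (the internal helper behind the `CastError` quoted above); the column names are made up. A chunk of JSON lines that never mentions a column could not be cast to the full schema; the expected behavior is that the missing column is filled with nulls.

```python
import pyarrow as pa
from datasets.table import table_cast

# A chunk inferred from JSON lines where the "b" key never appears (hypothetical data)
chunk = pa.table({"a": pa.array([3, 4])})
full_schema = pa.schema([("a", pa.int64()), ("b", pa.int64())])

# Raised CastError ("column names don't match") before the fix;
# afterwards this should return a table where "b" is all nulls.
casted = table_cast(chunk, full_schema)
```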
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova", "user_view_type": "public" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/7169/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/7169/timeline
null
completed
null
null
25.977778
513
https://api.github.com/repos/huggingface/datasets/issues/7168
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/7168/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/7168/comments
https://api.github.com/repos/huggingface/datasets/issues/7168/events
https://github.com/huggingface/datasets/issues/7168
2,546,710,631
I_kwDODunzps6Xy7hn
7,168
sd1.5 diffusers controlnet training script gives new error
{ "avatar_url": "https://avatars.githubusercontent.com/u/90132896?v=4", "events_url": "https://api.github.com/users/Night1099/events{/privacy}", "followers_url": "https://api.github.com/users/Night1099/followers", "following_url": "https://api.github.com/users/Night1099/following{/other_user}", "gists_url": "https://api.github.com/users/Night1099/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/Night1099", "id": 90132896, "login": "Night1099", "node_id": "MDQ6VXNlcjkwMTMyODk2", "organizations_url": "https://api.github.com/users/Night1099/orgs", "received_events_url": "https://api.github.com/users/Night1099/received_events", "repos_url": "https://api.github.com/users/Night1099/repos", "site_admin": false, "starred_url": "https://api.github.com/users/Night1099/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Night1099/subscriptions", "type": "User", "url": "https://api.github.com/users/Night1099", "user_view_type": "public" }
[]
closed
false
null
[]
null
[ "not sure why the issue is formatting oddly", "I guess this is a dupe of\r\n\r\nhttps://github.com/huggingface/datasets/issues/7071", "this turned out to be because of a bad image in dataset" ]
2024-09-25T01:42:49Z
2024-09-30T05:24:03Z
2024-09-30T05:24:02Z
NONE
null
null
{ "completed": 0, "percent_completed": 0, "total": 0 }
### Describe the bug This will now randomly pop up during training ``` Traceback (most recent call last): File "/workspace/diffusers/examples/controlnet/train_controlnet.py", line 1192, in <module> main(args) File "/workspace/diffusers/examples/controlnet/train_controlnet.py", line 1041, in main for step, batch in enumerate(train_dataloader): File "/usr/local/lib/python3.11/dist-packages/accelerate/data_loader.py", line 561, in __iter__ next_batch = next(dataloader_iter) ^^^^^^^^^^^^^^^^^^^^^ File "/usr/local/lib/python3.11/dist-packages/torch/utils/data/dataloader.py", line 630, in __next__ data = self._next_data() ^^^^^^^^^^^^^^^^^ File "/usr/local/lib/python3.11/dist-packages/torch/utils/data/dataloader.py", line 673, in _next_data data = self._dataset_fetcher.fetch(index) # may raise StopIteration ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/usr/local/lib/python3.11/dist-packages/torch/utils/data/_utils/fetch.py", line 50, in fetch data = self.dataset.__getitems__(possibly_batched_index) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/usr/local/lib/python3.11/dist-packages/datasets/arrow_dataset.py", line 2746, in __getitems__ batch = self.__getitem__(keys) ^^^^^^^^^^^^^^^^^^^^^^ File "/usr/local/lib/python3.11/dist-packages/datasets/arrow_dataset.py", line 2742, in __getitem__ return self._getitem(key) ^^^^^^^^^^^^^^^^^^ File "/usr/local/lib/python3.11/dist-packages/datasets/arrow_dataset.py", line 2727, in _getitem formatted_output = format_table( ^^^^^^^^^^^^^ File "/usr/local/lib/python3.11/dist-packages/datasets/formatting/formatting.py", line 639, in format_table return formatter(pa_table, query_type=query_type) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/usr/local/lib/python3.11/dist-packages/datasets/formatting/formatting.py", line 407, in __call__ return self.format_batch(pa_table) ^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/usr/local/lib/python3.11/dist-packages/datasets/formatting/formatting.py", line 521, in format_batch batch = self.python_features_decoder.decode_batch(batch) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/usr/local/lib/python3.11/dist-packages/datasets/formatting/formatting.py", line 228, in decode_batch return self.features.decode_batch(batch) if self.features else batch ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/usr/local/lib/python3.11/dist-packages/datasets/features/features.py", line 2084, in decode_batch [ File "/usr/local/lib/python3.11/dist-packages/datasets/features/features.py", line 2085, in <listcomp> decode_nested_example(self[column_name], value, token_per_repo_id=token_per_repo_id) File "/usr/local/lib/python3.11/dist-packages/datasets/features/features.py", line 1403, in decode_nested_example return schema.decode_example(obj, token_per_repo_id=token_per_repo_id) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/usr/local/lib/python3.11/dist-packages/datasets/features/image.py", line 188, in decode_example image.load() # to avoid "Too many open files" errors ``` ### Steps to reproduce the bug Train on the diffusers sd1.5 controlnet example script. This pops up at random; you can see in the wandb chart below where I manually resumed the run each time this error appeared ![image](https://github.com/user-attachments/assets/87e9a6af-cb3c-4398-82e7-d6a90add8d31) ### Expected behavior Training continues without the above error ### Environment info - datasets version: 3.0.0 - Platform: Linux-6.5.0-44-generic-x86_64-with-glibc2.35 - Python version: 3.11.9 - huggingface_hub version: 0.25.1 - PyArrow version: 17.0.0 - Pandas version: 2.2.3 - fsspec version: 2024.6.1 Training on a 4090
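Since the resolution reported in the comments was a corrupt image in the dataset, here is a hedged sketch for finding such files up front with Pillow; the folder name and extension glob are assumptions about how the images are stored locally.

```python
from pathlib import Path
from PIL import Image

# Fully decode every image so corrupt files fail here instead of mid-training
for path in sorted(Path("train_images").rglob("*.png")):  # hypothetical folder
    try:
        with Image.open(path) as img:
            img.load()  # the same call that raised inside datasets' image decoding
    except Exception as err:
        print(f"bad image: {path} ({err})")
```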
{ "avatar_url": "https://avatars.githubusercontent.com/u/90132896?v=4", "events_url": "https://api.github.com/users/Night1099/events{/privacy}", "followers_url": "https://api.github.com/users/Night1099/followers", "following_url": "https://api.github.com/users/Night1099/following{/other_user}", "gists_url": "https://api.github.com/users/Night1099/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/Night1099", "id": 90132896, "login": "Night1099", "node_id": "MDQ6VXNlcjkwMTMyODk2", "organizations_url": "https://api.github.com/users/Night1099/orgs", "received_events_url": "https://api.github.com/users/Night1099/received_events", "repos_url": "https://api.github.com/users/Night1099/repos", "site_admin": false, "starred_url": "https://api.github.com/users/Night1099/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Night1099/subscriptions", "type": "User", "url": "https://api.github.com/users/Night1099", "user_view_type": "public" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/7168/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/7168/timeline
null
completed
null
null
123.686944
514
https://api.github.com/repos/huggingface/datasets/issues/7167
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/7167/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/7167/comments
https://api.github.com/repos/huggingface/datasets/issues/7167/events
https://github.com/huggingface/datasets/issues/7167
2,546,708,014
I_kwDODunzps6Xy64u
7,167
Error Mapping on sd3, sdxl and upcoming flux controlnet training scripts in diffusers
{ "avatar_url": "https://avatars.githubusercontent.com/u/90132896?v=4", "events_url": "https://api.github.com/users/Night1099/events{/privacy}", "followers_url": "https://api.github.com/users/Night1099/followers", "following_url": "https://api.github.com/users/Night1099/following{/other_user}", "gists_url": "https://api.github.com/users/Night1099/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/Night1099", "id": 90132896, "login": "Night1099", "node_id": "MDQ6VXNlcjkwMTMyODk2", "organizations_url": "https://api.github.com/users/Night1099/orgs", "received_events_url": "https://api.github.com/users/Night1099/received_events", "repos_url": "https://api.github.com/users/Night1099/repos", "site_admin": false, "starred_url": "https://api.github.com/users/Night1099/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Night1099/subscriptions", "type": "User", "url": "https://api.github.com/users/Night1099", "user_view_type": "public" }
[]
closed
false
null
[]
null
[ "this is happening on large datasets, if anyone happens upon this i was able to fix by changing\r\n\r\n```\r\ntrain_dataset = train_dataset.map(compute_embeddings_fn, batched=True, new_fingerprint=new_fingerprint)\r\n```\r\n\r\nto\r\n\r\n```\r\ntrain_dataset = train_dataset.map(compute_embeddings_fn, batched=True, batch_size=16, new_fingerprint=new_fingerprint)\r\n```" ]
2024-09-25T01:39:51Z
2024-09-30T05:28:15Z
2024-09-30T05:28:04Z
NONE
null
null
{ "completed": 0, "percent_completed": 0, "total": 0 }
### Describe the bug ``` Map: 6%|██████ | 8000/138120 [19:27<5:16:36, 6.85 examples/s] Traceback (most recent call last): File "/workspace/diffusers/examples/controlnet/train_controlnet_sd3.py", line 1416, in <module> main(args) File "/workspace/diffusers/examples/controlnet/train_controlnet_sd3.py", line 1132, in main train_dataset = train_dataset.map(compute_embeddings_fn, batched=True, new_fingerprint=new_fingerprint) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/usr/local/lib/python3.11/dist-packages/datasets/arrow_dataset.py", line 560, in wrapper out: Union["Dataset", "DatasetDict"] = func(self, *args, **kwargs) ^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/usr/local/lib/python3.11/dist-packages/datasets/arrow_dataset.py", line 3035, in map for rank, done, content in Dataset._map_single(**dataset_kwargs): File "/usr/local/lib/python3.11/dist-packages/datasets/arrow_dataset.py", line 3461, in _map_single writer.write_batch(batch) File "/usr/local/lib/python3.11/dist-packages/datasets/arrow_writer.py", line 567, in write_batch self.write_table(pa_table, writer_batch_size) File "/usr/local/lib/python3.11/dist-packages/datasets/arrow_writer.py", line 579, in write_table pa_table = pa_table.combine_chunks() ^^^^^^^^^^^^^^^^^^^^^^^^^ File "pyarrow/table.pxi", line 4387, in pyarrow.lib.Table.combine_chunks File "pyarrow/error.pxi", line 155, in pyarrow.lib.pyarrow_internal_check_status File "pyarrow/error.pxi", line 92, in pyarrow.lib.check_status pyarrow.lib.ArrowInvalid: offset overflow while concatenating arrays Traceback (most recent call last): File "/usr/local/bin/accelerate", line 8, in <module> sys.exit(main()) ^^^^^^ File "/usr/local/lib/python3.11/dist-packages/accelerate/commands/accelerate_cli.py", line 48, in main args.func(args) File "/usr/local/lib/python3.11/dist-packages/accelerate/commands/launch.py", line 1174, in launch_command simple_launcher(args) File "/usr/local/lib/python3.11/dist-packages/accelerate/commands/launch.py", line 769, in simple_launcher ``` ### Steps to reproduce the bug The same dataset trains without problems with the sd1.5 controlnet training script ### Expected behavior Script not randomly erroring with the error above ### Environment info - `datasets` version: 3.0.0 - Platform: Linux-6.5.0-44-generic-x86_64-with-glibc2.35 - Python version: 3.11.9 - `huggingface_hub` version: 0.25.1 - PyArrow version: 17.0.0 - Pandas version: 2.2.3 - `fsspec` version: 2024.6.1 Training on an A100
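A sketch of the workaround reported in the comments: capping `map`'s batch size so no single Arrow chunk overflows the 32-bit offset limit. `train_dataset`, `compute_embeddings_fn`, and `new_fingerprint` come from the quoted `train_controlnet_sd3.py` script; only the `batch_size` argument is new, and 16 is simply the value reported to work.

```python
# Context: train_dataset and compute_embeddings_fn are defined in the
# diffusers training script quoted above; only batch_size is added here.
train_dataset = train_dataset.map(
    compute_embeddings_fn,
    batched=True,
    batch_size=16,  # value reported to work in the comments; the default is 1000
    new_fingerprint=new_fingerprint,
)
```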
{ "avatar_url": "https://avatars.githubusercontent.com/u/90132896?v=4", "events_url": "https://api.github.com/users/Night1099/events{/privacy}", "followers_url": "https://api.github.com/users/Night1099/followers", "following_url": "https://api.github.com/users/Night1099/following{/other_user}", "gists_url": "https://api.github.com/users/Night1099/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/Night1099", "id": 90132896, "login": "Night1099", "node_id": "MDQ6VXNlcjkwMTMyODk2", "organizations_url": "https://api.github.com/users/Night1099/orgs", "received_events_url": "https://api.github.com/users/Night1099/received_events", "repos_url": "https://api.github.com/users/Night1099/repos", "site_admin": false, "starred_url": "https://api.github.com/users/Night1099/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Night1099/subscriptions", "type": "User", "url": "https://api.github.com/users/Night1099", "user_view_type": "public" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/7167/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/7167/timeline
null
completed
null
null
123.803611
515
https://api.github.com/repos/huggingface/datasets/issues/7163
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/7163/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/7163/comments
https://api.github.com/repos/huggingface/datasets/issues/7163/events
https://github.com/huggingface/datasets/issues/7163
2,542,361,234
I_kwDODunzps6XiVqS
7,163
Set explicit seed in iterable dataset ddp shuffling example
{ "avatar_url": "https://avatars.githubusercontent.com/u/5719745?v=4", "events_url": "https://api.github.com/users/alex-hh/events{/privacy}", "followers_url": "https://api.github.com/users/alex-hh/followers", "following_url": "https://api.github.com/users/alex-hh/following{/other_user}", "gists_url": "https://api.github.com/users/alex-hh/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/alex-hh", "id": 5719745, "login": "alex-hh", "node_id": "MDQ6VXNlcjU3MTk3NDU=", "organizations_url": "https://api.github.com/users/alex-hh/orgs", "received_events_url": "https://api.github.com/users/alex-hh/received_events", "repos_url": "https://api.github.com/users/alex-hh/repos", "site_admin": false, "starred_url": "https://api.github.com/users/alex-hh/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/alex-hh/subscriptions", "type": "User", "url": "https://api.github.com/users/alex-hh", "user_view_type": "public" }
[]
closed
false
null
[]
null
[ "thanks for reporting !" ]
2024-09-23T11:34:06Z
2024-09-24T14:40:15Z
2024-09-24T14:40:15Z
CONTRIBUTOR
null
null
{ "completed": 0, "percent_completed": 0, "total": 0 }
### Describe the bug In the examples section of the iterable dataset docs https://huggingface.co/docs/datasets/en/package_reference/main_classes#datasets.IterableDataset the ddp example shuffles without seeding ```python from datasets.distributed import split_dataset_by_node ids = ds.to_iterable_dataset(num_shards=512) ids = ids.shuffle(buffer_size=10_000) # will shuffle the shards order and use a shuffle buffer when you start iterating ids = split_dataset_by_node(ds, world_size=8, rank=0) # will keep only 512 / 8 = 64 shards from the shuffled lists of shards when you start iterating dataloader = torch.utils.data.DataLoader(ids, num_workers=4) # will assign 64 / 4 = 16 shards from this node's list of shards to each worker when you start iterating for example in ids: pass ``` This code would - I think - raise an error due to the lack of an explicit seed: https://github.com/huggingface/datasets/blob/2eb4edb97e1a6af2ea62738ec58afbd3812fc66e/src/datasets/iterable_dataset.py#L1707-L1711 ### Steps to reproduce the bug Run example code ### Expected behavior Add explicit seeding to example code ### Environment info latest datasets
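For reference, a sketch of how the example reads with an explicit seed (the seed value is arbitrary); it also passes `ids` rather than `ds` to `split_dataset_by_node`, which looks like a second typo in the quoted snippet.

```python
import torch
from datasets.distributed import split_dataset_by_node

ids = ds.to_iterable_dataset(num_shards=512)  # ds is any Dataset, as in the quoted example
ids = ids.shuffle(seed=42, buffer_size=10_000)  # explicit seed: every node shuffles the shard order identically
ids = split_dataset_by_node(ids, world_size=8, rank=0)
dataloader = torch.utils.data.DataLoader(ids, num_workers=4)
```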
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq", "user_view_type": "public" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/7163/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/7163/timeline
null
completed
null
null
27.1025
519
https://api.github.com/repos/huggingface/datasets/issues/7161
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/7161/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/7161/comments
https://api.github.com/repos/huggingface/datasets/issues/7161/events
https://github.com/huggingface/datasets/issues/7161
2,541,971,931
I_kwDODunzps6Xg2nb
7,161
JSON lines with empty struct raise ArrowTypeError
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova", "user_view_type": "public" }
[ { "color": "d73a4a", "default": true, "description": "Something isn't working", "id": 1935892857, "name": "bug", "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug" } ]
closed
false
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova", "user_view_type": "public" }
[ { "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova", "user_view_type": "public" } ]
null
[]
2024-09-23T08:48:56Z
2024-09-25T04:43:44Z
2024-09-23T11:30:07Z
MEMBER
null
null
{ "completed": 0, "percent_completed": 0, "total": 0 }
JSON lines with empty struct raise ArrowTypeError: struct fields don't match or are in the wrong order See example: https://huggingface.co/datasets/wikimedia/structured-wikipedia/discussions/5 > ArrowTypeError: struct fields don't match or are in the wrong order: Input fields: struct<> output fields: struct<pov_count: int64, update_count: int64, citation_needed_count: int64> Related to: - #7159
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova", "user_view_type": "public" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/7161/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/7161/timeline
null
completed
null
null
2.686389
521
https://api.github.com/repos/huggingface/datasets/issues/7159
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/7159/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/7159/comments
https://api.github.com/repos/huggingface/datasets/issues/7159/events
https://github.com/huggingface/datasets/issues/7159
2,541,865,613
I_kwDODunzps6XgcqN
7,159
JSON lines with missing struct fields raise TypeError: Couldn't cast array
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova", "user_view_type": "public" }
[ { "color": "d73a4a", "default": true, "description": "Something isn't working", "id": 1935892857, "name": "bug", "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug" } ]
closed
false
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova", "user_view_type": "public" }
[ { "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova", "user_view_type": "public" } ]
null
[ "Hello,\r\n\r\nI have still the same issue when loading the dataset with the new version:\r\n[https://huggingface.co/datasets/wikimedia/structured-wikipedia/discussions/5](https://huggingface.co/datasets/wikimedia/structured-wikipedia/discussions/5)\r\n\r\nI have downloaded and unzipped the wikimedia/structured-wikipedia dataset locally but when loading I have the same issue.\r\n\r\n```\r\nimport datasets\r\n\r\ndataset = datasets.load_dataset(\"/gpfsdsdir/dataset/HuggingFace/wikimedia/structured-wikipedia/20240916.fr\")\r\n```\r\n```\r\nTypeError: Couldn't cast array of type\r\nstruct<content_url: string, width: int64, height: int64, alternative_text: string>\r\nto\r\n{'content_url': Value(dtype='string', id=None), 'width': Value(dtype='int64', id=None), 'height': Value(dtype='int64', id=None)}\r\n\r\nThe above exception was the direct cause of the following exception:\r\n```\r\nMy version of datasets is 3.0.1" ]
2024-09-23T07:57:58Z
2024-10-21T08:07:07Z
2024-09-23T11:09:18Z
MEMBER
null
null
{ "completed": 0, "percent_completed": 0, "total": 0 }
JSON lines with missing struct fields raise TypeError: Couldn't cast array of type. See example: https://huggingface.co/datasets/wikimedia/structured-wikipedia/discussions/5 One would expect that the missing struct fields are added with null values.
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova", "user_view_type": "public" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/7159/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/7159/timeline
null
completed
null
null
3.188889
523
https://api.github.com/repos/huggingface/datasets/issues/7155
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/7155/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/7155/comments
https://api.github.com/repos/huggingface/datasets/issues/7155/events
https://github.com/huggingface/datasets/issues/7155
2,533,641,870
I_kwDODunzps6XBE6O
7,155
Dataset viewer not working! Failure due to more than 32 splits.
{ "avatar_url": "https://avatars.githubusercontent.com/u/81933585?v=4", "events_url": "https://api.github.com/users/sleepingcat4/events{/privacy}", "followers_url": "https://api.github.com/users/sleepingcat4/followers", "following_url": "https://api.github.com/users/sleepingcat4/following{/other_user}", "gists_url": "https://api.github.com/users/sleepingcat4/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/sleepingcat4", "id": 81933585, "login": "sleepingcat4", "node_id": "MDQ6VXNlcjgxOTMzNTg1", "organizations_url": "https://api.github.com/users/sleepingcat4/orgs", "received_events_url": "https://api.github.com/users/sleepingcat4/received_events", "repos_url": "https://api.github.com/users/sleepingcat4/repos", "site_admin": false, "starred_url": "https://api.github.com/users/sleepingcat4/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sleepingcat4/subscriptions", "type": "User", "url": "https://api.github.com/users/sleepingcat4", "user_view_type": "public" }
[]
closed
false
null
[]
null
[ "I have fixed it! But I would appreciate a new feature wheere I could iterate over and see what each file looks like. " ]
2024-09-18T12:43:21Z
2024-09-18T13:20:03Z
2024-09-18T13:20:03Z
NONE
null
null
{ "completed": 0, "percent_completed": 0, "total": 0 }
Hello guys, I have a dataset and I didn't know I couldn't upload more than 32 splits. Now, my dataset viewer is not working. I don't have the dataset locally on my node anymore, and recreating it would take a week. And I have to publish the dataset this coming Monday. I read about the recommended practice and how I can resolve and avoid this issue in the future. But at the moment I need a hard fix for two of my datasets: I don't want to mess with or change anything, and everyone in public should still be able to see the datasets and interact with them. Can you please help me? https://huggingface.co/datasets/laion/Wikipedia-X https://huggingface.co/datasets/laion/Wikipedia-X-Full
{ "avatar_url": "https://avatars.githubusercontent.com/u/81933585?v=4", "events_url": "https://api.github.com/users/sleepingcat4/events{/privacy}", "followers_url": "https://api.github.com/users/sleepingcat4/followers", "following_url": "https://api.github.com/users/sleepingcat4/following{/other_user}", "gists_url": "https://api.github.com/users/sleepingcat4/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/sleepingcat4", "id": 81933585, "login": "sleepingcat4", "node_id": "MDQ6VXNlcjgxOTMzNTg1", "organizations_url": "https://api.github.com/users/sleepingcat4/orgs", "received_events_url": "https://api.github.com/users/sleepingcat4/received_events", "repos_url": "https://api.github.com/users/sleepingcat4/repos", "site_admin": false, "starred_url": "https://api.github.com/users/sleepingcat4/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sleepingcat4/subscriptions", "type": "User", "url": "https://api.github.com/users/sleepingcat4", "user_view_type": "public" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/7155/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/7155/timeline
null
completed
null
null
0.611667
527
https://api.github.com/repos/huggingface/datasets/issues/7153
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/7153/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/7153/comments
https://api.github.com/repos/huggingface/datasets/issues/7153/events
https://github.com/huggingface/datasets/issues/7153
2,532,788,555
I_kwDODunzps6W90lL
7,153
Support data files with .ndjson extension
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova", "user_view_type": "public" }
[ { "color": "a2eeef", "default": true, "description": "New feature or request", "id": 1935892871, "name": "enhancement", "node_id": "MDU6TGFiZWwxOTM1ODkyODcx", "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement" } ]
closed
false
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova", "user_view_type": "public" }
[ { "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova", "user_view_type": "public" } ]
null
[]
2024-09-18T05:54:45Z
2024-09-19T11:25:15Z
2024-09-19T11:25:15Z
MEMBER
null
null
{ "completed": 0, "percent_completed": 0, "total": 0 }
### Feature request Support data files with `.ndjson` extension. ### Motivation We already support data files with `.jsonl` extension. ### Your contribution I am opening a PR.
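Until a release ships this support, a hedged workaround is to name the json builder explicitly, which loads newline-delimited JSON regardless of the file extension; the file path below is an assumption.

```python
from datasets import load_dataset

# The explicit "json" builder bypasses extension-based format inference
ds = load_dataset("json", data_files={"train": "data/train.ndjson"})
```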
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova", "user_view_type": "public" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/7153/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/7153/timeline
null
completed
null
null
29.508333
529
https://api.github.com/repos/huggingface/datasets/issues/7150
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/7150/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/7150/comments
https://api.github.com/repos/huggingface/datasets/issues/7150/events
https://github.com/huggingface/datasets/issues/7150
2,527,571,175
I_kwDODunzps6Wp6zn
7,150
WebDataset loader splits keys differently than WebDataset library
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova", "user_view_type": "public" }
[ { "color": "d73a4a", "default": true, "description": "Something isn't working", "id": 1935892857, "name": "bug", "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug" } ]
closed
false
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova", "user_view_type": "public" }
[ { "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova", "user_view_type": "public" } ]
null
[]
2024-09-16T06:02:47Z
2024-09-16T15:26:35Z
2024-09-16T15:26:35Z
MEMBER
null
null
{ "completed": 0, "percent_completed": 0, "total": 0 }
As reported by @ragavsachdeva (see discussion here: https://github.com/huggingface/datasets/pull/7144#issuecomment-2348307792), our webdataset loader is not aligned with the `webdataset` library when splitting keys from filenames. For example, we get a different key splitting for filename `/some/path/22.0/1.1.png`: - datasets library: `/some/path/22` and `0/1.1.png` - webdataset library: `/some/path/22.0/1`, `1.png` ```python import webdataset as wds wds.tariterators.base_plus_ext("/some/path/22.0/1.1.png") # ('/some/path/22.0/1', '1.png') ```
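For illustration, a minimal sketch that reproduces the two reported splits with plain string handling (this mimics the outputs above, not the actual implementation of either library):

```python
import os

filename = "/some/path/22.0/1.1.png"

# datasets-style (as reported): the key ends at the first dot anywhere in the path.
key, ext = filename.split(".", 1)
assert (key, ext) == ("/some/path/22", "0/1.1.png")

# webdataset-style (as reported): the key ends at the first dot of the basename,
# so dots in parent directories are ignored.
dirname, basename = os.path.split(filename)
base, ext = basename.split(".", 1)
assert (os.path.join(dirname, base), ext) == ("/some/path/22.0/1", "1.png")
```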
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova", "user_view_type": "public" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/7150/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/7150/timeline
null
completed
null
null
9.396667
531
https://api.github.com/repos/huggingface/datasets/issues/7149
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/7149/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/7149/comments
https://api.github.com/repos/huggingface/datasets/issues/7149/events
https://github.com/huggingface/datasets/issues/7149
2,524,497,448
I_kwDODunzps6WeMYo
7,149
Datasets Unknown Keyword Argument Error - task_templates
{ "avatar_url": "https://avatars.githubusercontent.com/u/51288316?v=4", "events_url": "https://api.github.com/users/varungupta31/events{/privacy}", "followers_url": "https://api.github.com/users/varungupta31/followers", "following_url": "https://api.github.com/users/varungupta31/following{/other_user}", "gists_url": "https://api.github.com/users/varungupta31/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/varungupta31", "id": 51288316, "login": "varungupta31", "node_id": "MDQ6VXNlcjUxMjg4MzE2", "organizations_url": "https://api.github.com/users/varungupta31/orgs", "received_events_url": "https://api.github.com/users/varungupta31/received_events", "repos_url": "https://api.github.com/users/varungupta31/repos", "site_admin": false, "starred_url": "https://api.github.com/users/varungupta31/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/varungupta31/subscriptions", "type": "User", "url": "https://api.github.com/users/varungupta31", "user_view_type": "public" }
[]
closed
false
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova", "user_view_type": "public" }
[ { "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova", "user_view_type": "public" } ]
null
[ "Thanks, for reporting.\r\n\r\nWe have been fixing most Hub datasets to remove the deprecated (and now non-supported) task templates, but we missed the \"facebook/winoground\".\r\n\r\nIt is fixed now: https://huggingface.co/datasets/facebook/winoground/discussions/8\r\n\r\n", "Hello @albertvillanova \r\n\r\nI got the same error while loading this dataset: https://huggingface.co/datasets/alaleye/aloresb...\r\n\r\nHow can I fix it ? \r\nThanks", "I am getting the same error on the below code, any fix to this ?\n\n```\nfrom datasets import load_dataset\n\nminds = load_dataset(\"PolyAI/minds14\", name=\"en-AU\", split=\"train\")\nminds\n```" ]
2024-09-13T10:30:57Z
2025-03-06T07:11:55Z
2024-09-13T14:10:48Z
NONE
null
null
{ "completed": 0, "percent_completed": 0, "total": 0 }
### Describe the bug Issue ```python from datasets import load_dataset examples = load_dataset('facebook/winoground', use_auth_token=<YOUR USER ACCESS TOKEN>) ``` Gives error ``` TypeError: DatasetInfo.__init__() got an unexpected keyword argument 'task_templates' ``` A simple downgrade to the lower `datasets` version `2.21.0` solves it. ### Steps to reproduce the bug 1. `pip install datasets` 2. ```python from datasets import load_dataset examples = load_dataset('facebook/winoground', use_auth_token=<YOUR USER ACCESS TOKEN>) ``` ### Expected behavior Should load the dataset correctly. ### Environment info - Datasets version `3.0.0` - `transformers` version: 4.45.0.dev0 - Platform: Linux-6.8.0-40-generic-x86_64-with-glibc2.35 - Python version: 3.12.4 - Huggingface_hub version: 0.24.6 - Safetensors version: 0.4.5 - Accelerate version: 0.35.0.dev0 - Accelerate config: not found - PyTorch version (GPU?): 2.4.1+cu121 (True) - Tensorflow version (GPU?): not installed (NA) - Flax version (CPU?/GPU?/TPU?): not installed (NA) - Jax version: not installed - JaxLib version: not installed - Using GPU in script?: Yes
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova", "user_view_type": "public" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/7149/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/7149/timeline
null
completed
null
null
3.664167
532
https://api.github.com/repos/huggingface/datasets/issues/7148
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/7148/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/7148/comments
https://api.github.com/repos/huggingface/datasets/issues/7148/events
https://github.com/huggingface/datasets/issues/7148
2,523,833,413
I_kwDODunzps6WbqRF
7,148
Bug: Error when downloading mteb/mtop_domain
{ "avatar_url": "https://avatars.githubusercontent.com/u/77958037?v=4", "events_url": "https://api.github.com/users/ZiyiXia/events{/privacy}", "followers_url": "https://api.github.com/users/ZiyiXia/followers", "following_url": "https://api.github.com/users/ZiyiXia/following{/other_user}", "gists_url": "https://api.github.com/users/ZiyiXia/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/ZiyiXia", "id": 77958037, "login": "ZiyiXia", "node_id": "MDQ6VXNlcjc3OTU4MDM3", "organizations_url": "https://api.github.com/users/ZiyiXia/orgs", "received_events_url": "https://api.github.com/users/ZiyiXia/received_events", "repos_url": "https://api.github.com/users/ZiyiXia/repos", "site_admin": false, "starred_url": "https://api.github.com/users/ZiyiXia/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ZiyiXia/subscriptions", "type": "User", "url": "https://api.github.com/users/ZiyiXia", "user_view_type": "public" }
[]
closed
false
null
[]
null
[ "Could you please try with `force_redownload` instead?\r\nEDIT:\r\n```python\r\ndata = load_dataset(\"mteb/mtop_domain\", \"en\", download_mode=\"force_redownload\")\r\n```", "Seems the error is still there", "I am not able to reproduce the issue:\r\n```python\r\nIn [1]: from datasets import load_dataset\r\n\r\nIn [2]: data = load_dataset(\"mteb/mtop_domain\", \"en\")\r\n\r\nIn [3]: data\r\nOut[3]: DatasetDict({\r\n train: Dataset({\r\n features: ['id', 'text', 'label', 'label_text'],\r\n num_rows: 15667\r\n })\r\n validation: Dataset({\r\n features: ['id', 'text', 'label', 'label_text'],\r\n num_rows: 2235\r\n })\r\n test: Dataset({\r\n features: ['id', 'text', 'label', 'label_text'],\r\n num_rows: 4386\r\n })\r\n})\r\n```", "Just solved this by reinstall Huggingface Hub and datasets. Thanks for your help!" ]
2024-09-13T04:09:39Z
2024-09-14T15:11:35Z
2024-09-14T15:11:35Z
NONE
null
null
{ "completed": 0, "percent_completed": 0, "total": 0 }
### Describe the bug When downloading the dataset "mteb/mtop_domain", ran into the following error: ``` Traceback (most recent call last): File "/share/project/xzy/test/test_download.py", line 3, in <module> data = load_dataset("mteb/mtop_domain", "en", trust_remote_code=True) File "/opt/conda/lib/python3.10/site-packages/datasets/load.py", line 2606, in load_dataset builder_instance = load_dataset_builder( File "/opt/conda/lib/python3.10/site-packages/datasets/load.py", line 2277, in load_dataset_builder dataset_module = dataset_module_factory( File "/opt/conda/lib/python3.10/site-packages/datasets/load.py", line 1923, in dataset_module_factory raise e1 from None File "/opt/conda/lib/python3.10/site-packages/datasets/load.py", line 1896, in dataset_module_factory ).get_module() File "/opt/conda/lib/python3.10/site-packages/datasets/load.py", line 1507, in get_module local_path = self.download_loading_script() File "/opt/conda/lib/python3.10/site-packages/datasets/load.py", line 1467, in download_loading_script return cached_path(file_path, download_config=download_config) File "/opt/conda/lib/python3.10/site-packages/datasets/utils/file_utils.py", line 211, in cached_path output_path = get_from_cache( File "/opt/conda/lib/python3.10/site-packages/datasets/utils/file_utils.py", line 689, in get_from_cache fsspec_get( File "/opt/conda/lib/python3.10/site-packages/datasets/utils/file_utils.py", line 395, in fsspec_get fs.get_file(path, temp_file.name, callback=callback) File "/opt/conda/lib/python3.10/site-packages/huggingface_hub/hf_file_system.py", line 648, in get_file http_get( File "/opt/conda/lib/python3.10/site-packages/huggingface_hub/file_download.py", line 578, in http_get raise EnvironmentError( OSError: Consistency check failed: file should be of size 2191 but has size 2190 ((…)ets/mteb/mtop_domain@main/mtop_domain.py). We are sorry for the inconvenience. Please retry with `force_download=True`. If the issue persists, please let us know by opening an issue on https://github.com/huggingface/huggingface_hub. ``` Try to download through HF datasets directly but got the same error as above. ```python from datasets import load_dataset data = load_dataset("mteb/mtop_domain", "en") ``` ### Steps to reproduce the bug ```python from datasets import load_dataset data = load_dataset("mteb/mtop_domain", "en", force_download=True) ``` With and without `force_download=True` both ran into the same error. ### Expected behavior Should download the dataset successfully. ### Environment info - datasets version: 2.21.0 - huggingface-hub version: 0.24.6
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova", "user_view_type": "public" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/7148/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/7148/timeline
null
completed
null
null
35.032222
533
https://api.github.com/repos/huggingface/datasets/issues/7147
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/7147/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/7147/comments
https://api.github.com/repos/huggingface/datasets/issues/7147/events
https://github.com/huggingface/datasets/issues/7147
2,523,129,465
I_kwDODunzps6WY-Z5
7,147
IterableDataset strange deadlock
{ "avatar_url": "https://avatars.githubusercontent.com/u/511073?v=4", "events_url": "https://api.github.com/users/jonathanasdf/events{/privacy}", "followers_url": "https://api.github.com/users/jonathanasdf/followers", "following_url": "https://api.github.com/users/jonathanasdf/following{/other_user}", "gists_url": "https://api.github.com/users/jonathanasdf/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/jonathanasdf", "id": 511073, "login": "jonathanasdf", "node_id": "MDQ6VXNlcjUxMTA3Mw==", "organizations_url": "https://api.github.com/users/jonathanasdf/orgs", "received_events_url": "https://api.github.com/users/jonathanasdf/received_events", "repos_url": "https://api.github.com/users/jonathanasdf/repos", "site_admin": false, "starred_url": "https://api.github.com/users/jonathanasdf/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/jonathanasdf/subscriptions", "type": "User", "url": "https://api.github.com/users/jonathanasdf", "user_view_type": "public" }
[]
closed
false
null
[]
null
[ "Yes `interleave_datasets` seems to have an issue with shuffling, could you open a new issue on this ?\r\n\r\nThen regarding the deadlock, it has to do with interleave_dataset with probabilities=[1, 0] with workers that may contain an empty dataset in first position (it can be empty since you distribute 1024 shard to 8 workers, so some workers may not have an example that satisfies your condition `if shard < 25`). It creates an infinite loop, trying to get samples from empty datasets with probability 1.", "Opened https://github.com/huggingface/datasets/issues/7156\r\n\r\nCan the deadlock be fixed somehow? The point of IterableDataset is so we don't need to preload the entire dataset, which loses some meaning if we need to see how many examples are in the dataset in order to set shards correctly.", "~~And it is kinda strange that `Commenting out the final shuffle avoids the issue` since if the infinite loop is inside interleave_datasets you'd expect that to happen regardless of the additional shuffle call?~~\r\n\r\nEdit: oh I guess without the shuffle it's guaranteed every worker gets something, but the shuffle makes it so some workers could have nothing\r\n\r\n~~Edit2: maybe the shuffle can be changed so initially it gives one example to each worker, and only starts the random shuffle after that~~ wait it's not about the workers not getting any shards, it's about a worker getting shards but all of the shards it gets are empty shards\r\n\r\nEdit3: If it's trying to get samples from empty datasets, it should be getting back a StopIteration -- and \"all_exhausted\" should mean it eventually discovers all its datasets are empty, and then it should just raise a StopIteration itself. So it seems like there is a reasonable behavior result for this?", "well the second dataset passed to interleave_datasets is never exhausted, since it's never sampled. But we could also state that the stream of examples from the second dataset is empty if it has probability 0, so I opened https://github.com/huggingface/datasets/pull/7157 to fix the infinite loop issue by ignoring datasets with probability 0, let me know what you think !", "Thanks for taking a look!\r\n\r\nI think you're right that this is ultimately an issue that the user opts into by specifying a dataset with probability 0, because the user is basically saying \"I want to force this `interleave_datasets` call to run forever\" and yet one of the workers can end up having only empty shards to mix...\r\n\r\nThat said it's probably not a good idea to randomly change the behavior of `interleave_datasets` with probability 0, I can't be the only one that uses it to repeat many different datasets (since there is no `datasets.repeat()` function). https://xkcd.com/1172/\r\n\r\nI think just the knowledge that filtering out probability 0 datasets fixes the deadlock is good enough for me. I can filter it out on my side and add a restart loop around the dataloader instead.\r\n\r\nThanks again for investigating.", "Ok I see ! We can also add .repeat() as well" ]
2024-09-12T18:59:33Z
2024-09-23T09:32:27Z
2024-09-21T17:37:34Z
NONE
null
null
{ "completed": 0, "percent_completed": 0, "total": 0 }
### Describe the bug ``` import datasets import torch.utils.data num_shards = 1024 def gen(shards): for shard in shards: if shard < 25: yield {"shard": shard} def main(): dataset = datasets.IterableDataset.from_generator( gen, gen_kwargs={"shards": list(range(num_shards))}, ) dataset = dataset.shuffle(buffer_size=1) dataset = datasets.interleave_datasets( [dataset, dataset], probabilities=[1, 0], stopping_strategy="all_exhausted" ) dataset = dataset.shuffle(buffer_size=1) dataloader = torch.utils.data.DataLoader( dataset, batch_size=8, num_workers=8, ) for i, batch in enumerate(dataloader): print(batch) if i >= 10: break print() if __name__ == "__main__": for _ in range(100): main() ``` ### Steps to reproduce the bug Running the script above, at some point it will freeze. - Changing `num_shards` from 1024 to 25 avoids the issue - Commenting out the final shuffle avoids the issue - Commenting out the interleave_datasets call avoids the issue As an aside, if you comment out just the final shuffle, the output from interleave_datasets is not shuffled at all even though there's the shuffle before it. So something about that shuffle config is not being propagated to interleave_datasets. ### Expected behavior The script should not freeze. ### Environment info - `datasets` version: 3.0.0 - Platform: macOS-14.6.1-arm64-arm-64bit - Python version: 3.12.5 - `huggingface_hub` version: 0.24.7 - PyArrow version: 17.0.0 - Pandas version: 2.2.2 - `fsspec` version: 2024.6.1 I observed this with 2.21.0 initially, then tried upgrading to 3.0.0 and could still repro.
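A hedged sketch of the user-side mitigation discussed in the comments above: drop probability-0 sources before calling `interleave_datasets`, so no worker can end up sampling forever from a source whose shards are all empty (variable names are illustrative):

```python
import datasets

def gen(shards):
    for shard in shards:
        if shard < 25:
            yield {"shard": shard}

base = datasets.IterableDataset.from_generator(gen, gen_kwargs={"shards": list(range(1024))})

# Keep only sources that can actually be sampled.
sources, probabilities = [base, base], [1, 0]
kept = [(s, p) for s, p in zip(sources, probabilities) if p > 0]
if len(kept) > 1:
    dataset = datasets.interleave_datasets(
        [s for s, _ in kept],
        probabilities=[p for _, p in kept],
        stopping_strategy="all_exhausted",
    )
else:
    # With a single remaining source, interleaving is a no-op anyway.
    dataset = kept[0][0]
```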
{ "avatar_url": "https://avatars.githubusercontent.com/u/511073?v=4", "events_url": "https://api.github.com/users/jonathanasdf/events{/privacy}", "followers_url": "https://api.github.com/users/jonathanasdf/followers", "following_url": "https://api.github.com/users/jonathanasdf/following{/other_user}", "gists_url": "https://api.github.com/users/jonathanasdf/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/jonathanasdf", "id": 511073, "login": "jonathanasdf", "node_id": "MDQ6VXNlcjUxMTA3Mw==", "organizations_url": "https://api.github.com/users/jonathanasdf/orgs", "received_events_url": "https://api.github.com/users/jonathanasdf/received_events", "repos_url": "https://api.github.com/users/jonathanasdf/repos", "site_admin": false, "starred_url": "https://api.github.com/users/jonathanasdf/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/jonathanasdf/subscriptions", "type": "User", "url": "https://api.github.com/users/jonathanasdf", "user_view_type": "public" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/7147/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/7147/timeline
null
completed
null
null
214.633611
534
https://api.github.com/repos/huggingface/datasets/issues/7142
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/7142/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/7142/comments
https://api.github.com/repos/huggingface/datasets/issues/7142/events
https://github.com/huggingface/datasets/issues/7142
2,512,244,938
I_kwDODunzps6VvdDK
7,142
Specifying datatype when adding a column to a dataset.
{ "avatar_url": "https://avatars.githubusercontent.com/u/20443618?v=4", "events_url": "https://api.github.com/users/varadhbhatnagar/events{/privacy}", "followers_url": "https://api.github.com/users/varadhbhatnagar/followers", "following_url": "https://api.github.com/users/varadhbhatnagar/following{/other_user}", "gists_url": "https://api.github.com/users/varadhbhatnagar/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/varadhbhatnagar", "id": 20443618, "login": "varadhbhatnagar", "node_id": "MDQ6VXNlcjIwNDQzNjE4", "organizations_url": "https://api.github.com/users/varadhbhatnagar/orgs", "received_events_url": "https://api.github.com/users/varadhbhatnagar/received_events", "repos_url": "https://api.github.com/users/varadhbhatnagar/repos", "site_admin": false, "starred_url": "https://api.github.com/users/varadhbhatnagar/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/varadhbhatnagar/subscriptions", "type": "User", "url": "https://api.github.com/users/varadhbhatnagar", "user_view_type": "public" }
[ { "color": "a2eeef", "default": true, "description": "New feature or request", "id": 1935892871, "name": "enhancement", "node_id": "MDU6TGFiZWwxOTM1ODkyODcx", "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement" } ]
closed
false
null
[]
null
[ "#self-assign" ]
2024-09-08T07:34:24Z
2024-09-17T03:46:32Z
2024-09-17T03:46:32Z
CONTRIBUTOR
null
null
{ "completed": 0, "percent_completed": 0, "total": 0 }
### Feature request There should be a way to specify the datatype of a column in `datasets.add_column()`. ### Motivation To specify a custom datatype, we have to use `datasets.add_column()` followed by `datasets.cast_column()` which is slow for large datasets. Another workaround is to pass a `numpy.array()` of desired type to the `datasets.add_column()` function. IMO this functionality should be natively supported. https://discuss.huggingface.co/t/add-column-with-a-particular-type-in-datasets/95674 ### Your contribution I can submit a PR for this.
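A short sketch of the two workarounds mentioned above (the dataset contents are illustrative, and the dtype inference for typed numpy arrays is as described in the linked forum thread):

```python
import numpy as np
from datasets import Dataset, Value

ds = Dataset.from_dict({"text": ["a", "b", "c"]})

# Workaround 1: add_column followed by cast_column (slow for large datasets).
ds1 = ds.add_column("score", [1, 2, 3]).cast_column("score", Value("uint16"))

# Workaround 2: pass an already-typed numpy array so no separate cast is needed.
ds2 = ds.add_column("score", np.array([1, 2, 3], dtype=np.uint16))

print(ds1.features["score"], ds2.features["score"])
```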
{ "avatar_url": "https://avatars.githubusercontent.com/u/20443618?v=4", "events_url": "https://api.github.com/users/varadhbhatnagar/events{/privacy}", "followers_url": "https://api.github.com/users/varadhbhatnagar/followers", "following_url": "https://api.github.com/users/varadhbhatnagar/following{/other_user}", "gists_url": "https://api.github.com/users/varadhbhatnagar/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/varadhbhatnagar", "id": 20443618, "login": "varadhbhatnagar", "node_id": "MDQ6VXNlcjIwNDQzNjE4", "organizations_url": "https://api.github.com/users/varadhbhatnagar/orgs", "received_events_url": "https://api.github.com/users/varadhbhatnagar/received_events", "repos_url": "https://api.github.com/users/varadhbhatnagar/repos", "site_admin": false, "starred_url": "https://api.github.com/users/varadhbhatnagar/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/varadhbhatnagar/subscriptions", "type": "User", "url": "https://api.github.com/users/varadhbhatnagar", "user_view_type": "public" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/7142/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/7142/timeline
null
completed
null
null
212.202222
539
https://api.github.com/repos/huggingface/datasets/issues/7141
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/7141/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/7141/comments
https://api.github.com/repos/huggingface/datasets/issues/7141/events
https://github.com/huggingface/datasets/issues/7141
2,510,797,653
I_kwDODunzps6Vp7tV
7,141
Older datasets throwing safety errors with 2.21.0
{ "avatar_url": "https://avatars.githubusercontent.com/u/1050316?v=4", "events_url": "https://api.github.com/users/alvations/events{/privacy}", "followers_url": "https://api.github.com/users/alvations/followers", "following_url": "https://api.github.com/users/alvations/following{/other_user}", "gists_url": "https://api.github.com/users/alvations/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/alvations", "id": 1050316, "login": "alvations", "node_id": "MDQ6VXNlcjEwNTAzMTY=", "organizations_url": "https://api.github.com/users/alvations/orgs", "received_events_url": "https://api.github.com/users/alvations/received_events", "repos_url": "https://api.github.com/users/alvations/repos", "site_admin": false, "starred_url": "https://api.github.com/users/alvations/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/alvations/subscriptions", "type": "User", "url": "https://api.github.com/users/alvations", "user_view_type": "public" }
[]
closed
false
null
[]
null
[ "I am also getting this error with this dataset: https://huggingface.co/datasets/google/IFEval", "Me too, didn't have this issue few hours ago.", "same observation. I even downgraded `datasets==2.20.0` and `huggingface_hub==0.23.5` leading me to believe it's an issue on the server.\r\n\r\nany known workarounds?\r\n", "Not a good idea, but commenting out the whole security block at `/usr/local/lib/python3.10/dist-packages/huggingface_hub/hf_api.py` is a temporary workaround:\r\n\r\n```\r\n #security = kwargs.pop(\"security\", None)\r\n #if security is not None:\r\n # security = BlobSecurityInfo(\r\n # safe=security[\"safe\"], av_scan=security[\"avScan\"], pickle_import_scan=security[\"pickleImportScan\"]\r\n # )\r\n #self.security = security\r\n```\r\n", "Uploading a dataset to Huggingface also results in the following error in the Dataset Preview:\r\n```\r\nThe full dataset viewer is not available (click to read why). Only showing a preview of the rows.\r\n'safe'\r\nError code: UnexpectedError\r\nNeed help to make the dataset viewer work? Make sure to review [how to configure the dataset viewer](link1), and [open a discussion](link2) for direct support.\r\n```\r\nI used jsonl format for the dataset in this case. Same exact dataset worked previously.", "Same issue here. Even reverting to older version of `datasets` (e.g., `2.19.0`) results in same error:\r\n\r\n```python\r\n>>> datasets.load_dataset('allenai/ai2_arc', 'ARC-Easy')\r\n\r\nFile \"/Users/lucas/miniforge3/envs/oe-eval-internal/lib/python3.10/site-packages/huggingface_hub/hf_api.py\", line 3048, in <listcomp>\r\n RepoFile(**path_info) if path_info[\"type\"] == \"file\" else RepoFolder(**path_info)\r\n File \"/Users/lucas/miniforge3/envs/oe-eval-internal/lib/python3.10/site-packages/huggingface_hub/hf_api.py\", line 534, in __init__\r\n safe=security[\"safe\"], av_scan=security[\"avScan\"], pickle_import_scan=security[\"pickleImportScan\"]\r\nKeyError: 'safe'\r\n```", "i just had this issue a few minutes ago, crawled the internet and found nothing. came here to open an issue and found this. it is really frustrating. 
anyone found a fix?", "hi, me and my team have the same problem", "Yeah, this just suddenly appeared without client-side code changes, within the last hours.\r\n\r\nHere's a patch to fix the issue temporarily:\r\n```python\r\nimport huggingface_hub\r\ndef patched_repofolder_init(self, **kwargs):\r\n self.path = kwargs.pop(\"path\")\r\n self.tree_id = kwargs.pop(\"oid\")\r\n last_commit = kwargs.pop(\"lastCommit\", None) or kwargs.pop(\"last_commit\", None)\r\n if last_commit is not None:\r\n last_commit = huggingface_hub.hf_api.LastCommitInfo(\r\n oid=last_commit[\"id\"],\r\n title=last_commit[\"title\"],\r\n date=huggingface_hub.utils.parse_datetime(last_commit[\"date\"]),\r\n )\r\n self.last_commit = last_commit\r\n\r\n\r\ndef patched_repo_file_init(self, **kwargs):\r\n self.path = kwargs.pop(\"path\")\r\n self.size = kwargs.pop(\"size\")\r\n self.blob_id = kwargs.pop(\"oid\")\r\n lfs = kwargs.pop(\"lfs\", None)\r\n if lfs is not None:\r\n lfs = huggingface_hub.hf_api.BlobLfsInfo(size=lfs[\"size\"], sha256=lfs[\"oid\"], pointer_size=lfs[\"pointerSize\"])\r\n self.lfs = lfs\r\n last_commit = kwargs.pop(\"lastCommit\", None) or kwargs.pop(\"last_commit\", None)\r\n if last_commit is not None:\r\n last_commit = huggingface_hub.hf_api.LastCommitInfo(\r\n oid=last_commit[\"id\"],\r\n title=last_commit[\"title\"],\r\n date=huggingface_hub.utils.parse_datetime(last_commit[\"date\"]),\r\n )\r\n self.last_commit = last_commit\r\n self.security = None\r\n\r\n # backwards compatibility\r\n self.rfilename = self.path\r\n self.lastCommit = self.last_commit\r\n\r\n\r\nhuggingface_hub.hf_api.RepoFile.__init__ = patched_repo_file_init\r\nhuggingface_hub.hf_api.RepoFolder.__init__ = patched_repofolder_init\r\n```\r\n", "Also discussed here:\r\nhttps://discuss.huggingface.co/t/i-keep-getting-keyerror-safe-when-loading-my-datasets/105669/1", "i'm thinking this should be a server issue, i mean no client code was changed on my end. so weird!", "As far as I can tell, this seems to be happening with **all** datasets that use RepoFolder (probably represents most datasets on huggingface, right?)", "> Here is a temporary fix for the problem: https://discuss.huggingface.co/t/i-keep-getting-keyerror-safe-when-loading-my-datasets/105669/12?u=mlscientist\r\n\r\nthis doesn't seem to work!", "In case you are using Colab or similar, remember to restart your session after modyfing the hf_api.py file", "No need to modify the file directly, just monkey-patch.\r\n\r\nI'm now more sure that the error appears because the backend expects the api code to look like it does on `main`. If `RepoFile` and `RepoFolder` look about like they look on main, they work again.\r\n\r\nIf not fixed like above, a secondary error that will appear is \r\n```\r\n return self.info(path, expand_info=False)[\"type\"] == \"directory\"\r\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\r\n\r\n \"tree_id\": path_info.tree_id,\r\n ^^^^^^^^^^^^^^^^^\r\nAttributeError: 'RepoFolder' object has no attribute 'tree_id'\r\n```\r\n", "We've reverted the deployment, please let us know if the issue still persists!", "thanks @muellerzr!" ]
2024-09-06T16:26:30Z
2024-09-06T21:14:14Z
2024-09-06T19:09:29Z
NONE
null
null
{ "completed": 0, "percent_completed": 0, "total": 0 }
### Describe the bug The dataset loading was throwing some safety errors for this popular dataset `wmt14`. [in]: ``` import datasets # train_data = datasets.load_dataset("wmt14", "de-en", split="train") train_data = datasets.load_dataset("wmt14", "de-en", split="train") val_data = datasets.load_dataset("wmt14", "de-en", split="validation[:10%]") ``` [out]: ``` --------------------------------------------------------------------------- KeyError Traceback (most recent call last) [<ipython-input-9-445f0ecc4817>](https://localhost:8080/#) in <cell line: 4>() 2 3 # train_data = datasets.load_dataset("wmt14", "de-en", split="train") ----> 4 train_data = datasets.load_dataset("wmt14", "de-en", split="train") 5 val_data = datasets.load_dataset("wmt14", "de-en", split="validation[:10%]") 12 frames [/usr/local/lib/python3.10/dist-packages/huggingface_hub/hf_api.py](https://localhost:8080/#) in __init__(self, **kwargs) 636 if security is not None: 637 security = BlobSecurityInfo( --> 638 safe=security["safe"], av_scan=security["avScan"], pickle_import_scan=security["pickleImportScan"] 639 ) 640 self.security = security KeyError: 'safe' ``` ### Steps to reproduce the bug See above. ### Expected behavior Dataset properly loaded. ### Environment info version: 2.21.0
{ "avatar_url": "https://avatars.githubusercontent.com/u/7831895?v=4", "events_url": "https://api.github.com/users/muellerzr/events{/privacy}", "followers_url": "https://api.github.com/users/muellerzr/followers", "following_url": "https://api.github.com/users/muellerzr/following{/other_user}", "gists_url": "https://api.github.com/users/muellerzr/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/muellerzr", "id": 7831895, "login": "muellerzr", "node_id": "MDQ6VXNlcjc4MzE4OTU=", "organizations_url": "https://api.github.com/users/muellerzr/orgs", "received_events_url": "https://api.github.com/users/muellerzr/received_events", "repos_url": "https://api.github.com/users/muellerzr/repos", "site_admin": false, "starred_url": "https://api.github.com/users/muellerzr/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/muellerzr/subscriptions", "type": "User", "url": "https://api.github.com/users/muellerzr", "user_view_type": "public" }
{ "+1": 26, "-1": 0, "confused": 3, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 29, "url": "https://api.github.com/repos/huggingface/datasets/issues/7141/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/7141/timeline
null
completed
null
null
2.716389
540
https://api.github.com/repos/huggingface/datasets/issues/7129
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/7129/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/7129/comments
https://api.github.com/repos/huggingface/datasets/issues/7129/events
https://github.com/huggingface/datasets/issues/7129
2,491,942,650
I_kwDODunzps6UiAb6
7,129
Inconsistent output in documentation example: `num_classes` not displayed in `ClassLabel` output
{ "avatar_url": "https://avatars.githubusercontent.com/u/17179696?v=4", "events_url": "https://api.github.com/users/sergiopaniego/events{/privacy}", "followers_url": "https://api.github.com/users/sergiopaniego/followers", "following_url": "https://api.github.com/users/sergiopaniego/following{/other_user}", "gists_url": "https://api.github.com/users/sergiopaniego/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/sergiopaniego", "id": 17179696, "login": "sergiopaniego", "node_id": "MDQ6VXNlcjE3MTc5Njk2", "organizations_url": "https://api.github.com/users/sergiopaniego/orgs", "received_events_url": "https://api.github.com/users/sergiopaniego/received_events", "repos_url": "https://api.github.com/users/sergiopaniego/repos", "site_admin": false, "starred_url": "https://api.github.com/users/sergiopaniego/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sergiopaniego/subscriptions", "type": "User", "url": "https://api.github.com/users/sergiopaniego", "user_view_type": "public" }
[]
closed
false
null
[]
null
[]
2024-08-28T12:27:48Z
2024-12-06T11:32:02Z
2024-12-06T11:32:02Z
MEMBER
null
null
{ "completed": 0, "percent_completed": 0, "total": 0 }
In the documentation for [ClassLabel](https://huggingface.co/docs/datasets/v2.21.0/en/package_reference/main_classes#datasets.ClassLabel), there is an example of usage with the following code: ```` from datasets import Features features = Features({'label': ClassLabel(num_classes=3, names=['bad', 'ok', 'good'])}) features ```` which expects to output (as stated in the documentation): ```` {'label': ClassLabel(num_classes=3, names=['bad', 'ok', 'good'], id=None)} ```` but it generates the following ```` {'label': ClassLabel(names=['bad', 'ok', 'good'], id=None)} ```` If my understanding is correct, this happens because although num_classes is used during the init of the object, it is afterward ignored: https://github.com/huggingface/datasets/blob/be5cff059a2a5b89d7a97bc04739c4919ab8089f/src/datasets/features/features.py#L975 I would like to work on this issue if this is something needed 😄
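For what it's worth, a quick check (assuming the v2.21.0 cited above) suggests the value is still stored on the instance; it is only absent from the repr because `num_classes` is handled as an init-only pseudo-field:

```python
from datasets import ClassLabel

label = ClassLabel(num_classes=3, names=['bad', 'ok', 'good'])
print(label.num_classes)  # 3 -- populated during __post_init__
print(label)              # ClassLabel(names=['bad', 'ok', 'good'], id=None)
```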
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq", "user_view_type": "public" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/7129/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/7129/timeline
null
completed
null
null
2,399.070556
549
https://api.github.com/repos/huggingface/datasets/issues/7116
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/7116/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/7116/comments
https://api.github.com/repos/huggingface/datasets/issues/7116/events
https://github.com/huggingface/datasets/issues/7116
2,475,522,721
I_kwDODunzps6TjXqh
7,116
datasets cannot handle nested json if features is given.
{ "avatar_url": "https://avatars.githubusercontent.com/u/38550511?v=4", "events_url": "https://api.github.com/users/ljw20180420/events{/privacy}", "followers_url": "https://api.github.com/users/ljw20180420/followers", "following_url": "https://api.github.com/users/ljw20180420/following{/other_user}", "gists_url": "https://api.github.com/users/ljw20180420/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/ljw20180420", "id": 38550511, "login": "ljw20180420", "node_id": "MDQ6VXNlcjM4NTUwNTEx", "organizations_url": "https://api.github.com/users/ljw20180420/orgs", "received_events_url": "https://api.github.com/users/ljw20180420/received_events", "repos_url": "https://api.github.com/users/ljw20180420/repos", "site_admin": false, "starred_url": "https://api.github.com/users/ljw20180420/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ljw20180420/subscriptions", "type": "User", "url": "https://api.github.com/users/ljw20180420", "user_view_type": "public" }
[]
closed
false
null
[]
null
[ "Hi ! `Sequence` has a weird behavior for dictionaries (from tensorflow-datasets), use a regular list instead:\r\n\r\n```python\r\nds = datasets.load_dataset('json', data_files=\"./temp.json\", features=datasets.Features({\r\n 'ref1': datasets.Value('string'),\r\n 'ref2': datasets.Value('string'),\r\n 'cuts': [{\r\n \"cut1\": datasets.Value(\"uint16\"),\r\n \"cut2\": datasets.Value(\"uint16\")\r\n }]\r\n}))\r\n```", "> Hi ! `Sequence` has a weird behavior for dictionaries (from tensorflow-datasets), use a regular list instead:\r\n> \r\n> ```python\r\n> ds = datasets.load_dataset('json', data_files=\"./temp.json\", features=datasets.Features({\r\n> 'ref1': datasets.Value('string'),\r\n> 'ref2': datasets.Value('string'),\r\n> 'cuts': [{\r\n> \"cut1\": datasets.Value(\"uint16\"),\r\n> \"cut2\": datasets.Value(\"uint16\")\r\n> }]\r\n> }))\r\n> ```\r\nThank you!\r\n", "It works." ]
2024-08-20T12:27:49Z
2024-09-03T10:18:23Z
2024-09-03T10:18:07Z
NONE
null
null
{ "completed": 0, "percent_completed": 0, "total": 0 }
### Describe the bug I have a json named temp.json. ```json {"ref1": "ABC", "ref2": "DEF", "cuts":[{"cut1": 3, "cut2": 5}]} ``` I want to load it. ```python ds = datasets.load_dataset('json', data_files="./temp.json", features=datasets.Features({ 'ref1': datasets.Value('string'), 'ref2': datasets.Value('string'), 'cuts': datasets.Sequence({ "cut1": datasets.Value("uint16"), "cut2": datasets.Value("uint16") }) })) ``` The above code does not work. However, I can load it without giving features. ```python ds = datasets.load_dataset('json', data_files="./temp.json") ``` Is it possible to load integers as uint16 to save some memory? ### Steps to reproduce the bug As in the bug description. ### Expected behavior The data are loaded and integers are uint16. ### Environment info Copy-and-paste the text below in your GitHub issue. - `datasets` version: 2.21.0 - Platform: Linux-5.15.0-118-generic-x86_64-with-glibc2.35 - Python version: 3.11.9 - `huggingface_hub` version: 0.24.5 - PyArrow version: 17.0.0 - Pandas version: 2.2.2 - `fsspec` version: 2024.5.0
{ "avatar_url": "https://avatars.githubusercontent.com/u/38550511?v=4", "events_url": "https://api.github.com/users/ljw20180420/events{/privacy}", "followers_url": "https://api.github.com/users/ljw20180420/followers", "following_url": "https://api.github.com/users/ljw20180420/following{/other_user}", "gists_url": "https://api.github.com/users/ljw20180420/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/ljw20180420", "id": 38550511, "login": "ljw20180420", "node_id": "MDQ6VXNlcjM4NTUwNTEx", "organizations_url": "https://api.github.com/users/ljw20180420/orgs", "received_events_url": "https://api.github.com/users/ljw20180420/received_events", "repos_url": "https://api.github.com/users/ljw20180420/repos", "site_admin": false, "starred_url": "https://api.github.com/users/ljw20180420/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ljw20180420/subscriptions", "type": "User", "url": "https://api.github.com/users/ljw20180420", "user_view_type": "public" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/7116/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/7116/timeline
null
completed
null
null
333.838333
562
https://api.github.com/repos/huggingface/datasets/issues/7115
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/7115/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/7115/comments
https://api.github.com/repos/huggingface/datasets/issues/7115/events
https://github.com/huggingface/datasets/issues/7115
2,475,363,142
I_kwDODunzps6TiwtG
7,115
module 'pyarrow.lib' has no attribute 'ListViewType'
{ "avatar_url": "https://avatars.githubusercontent.com/u/175128880?v=4", "events_url": "https://api.github.com/users/neurafusionai/events{/privacy}", "followers_url": "https://api.github.com/users/neurafusionai/followers", "following_url": "https://api.github.com/users/neurafusionai/following{/other_user}", "gists_url": "https://api.github.com/users/neurafusionai/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/neurafusionai", "id": 175128880, "login": "neurafusionai", "node_id": "U_kgDOCnBBMA", "organizations_url": "https://api.github.com/users/neurafusionai/orgs", "received_events_url": "https://api.github.com/users/neurafusionai/received_events", "repos_url": "https://api.github.com/users/neurafusionai/repos", "site_admin": false, "starred_url": "https://api.github.com/users/neurafusionai/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/neurafusionai/subscriptions", "type": "User", "url": "https://api.github.com/users/neurafusionai", "user_view_type": "public" }
[]
closed
false
null
[]
null
[ "https://github.com/neurafusionai/Hugging_Face/blob/main/meta_opt_350m_customer_support_lora_v1.ipynb\r\n\r\ncouldnt train because of GPU\r\nI didnt pip install datasets -U\r\nbut looks like restarting worked" ]
2024-08-20T11:05:44Z
2024-09-10T06:51:08Z
2024-09-10T06:51:08Z
NONE
null
null
{ "completed": 0, "percent_completed": 0, "total": 0 }
### Describe the bug Code: `!pip uninstall -y pyarrow !pip install --no-cache-dir pyarrow !pip uninstall -y pyarrow !pip install pyarrow --no-cache-dir !pip install --upgrade datasets transformers pyarrow !pip install pyarrow.parquet ! pip install pyarrow-core libparquet !pip install pyarrow --no-cache-dir !pip install pyarrow !pip install transformers !pip install --upgrade datasets !pip install datasets ! pip install pyarrow ! pip install pyarrow.lib ! pip install pyarrow.parquet !pip install transformers import pyarrow as pa print(pa.__version__) from datasets import load_dataset import pyarrow.parquet as pq import pyarrow.lib as lib import pandas as pd from transformers import AutoTokenizer, AutoModelForSequenceClassification, Trainer, TrainingArguments from datasets import load_dataset from transformers import AutoTokenizer ! pip install pyarrow-core libparquet # Load the dataset for content moderation dataset = load_dataset("PolyAI/banking77") # Example dataset for customer support # Initialize the tokenizer tokenizer = AutoTokenizer.from_pretrained("facebook/opt-350m") # Tokenize the dataset def tokenize_function(examples): return tokenizer(examples['text'], padding="max_length", truncation=True) # Apply tokenization to the entire dataset tokenized_datasets = dataset.map(tokenize_function, batched=True) # Check the first few tokenized samples print(tokenized_datasets['train'][0]) from transformers import AutoModelForSequenceClassification, Trainer, TrainingArguments # Load the model model = AutoModelForSequenceClassification.from_pretrained("facebook/opt-350m", num_labels=77) # Define training arguments training_args = TrainingArguments( output_dir="./results", per_device_train_batch_size=16, per_device_eval_batch_size=16, num_train_epochs=3, eval_strategy="epoch", # save_strategy="epoch", logging_dir="./logs", learning_rate=2e-5, ) # Initialize the Trainer trainer = Trainer( model=model, args=training_args, train_dataset=tokenized_datasets["train"], eval_dataset=tokenized_datasets["test"], ) # Train the model trainer.train() # Evaluate the model trainer.evaluate() ` AttributeError Traceback (most recent call last) [<ipython-input-23-60bed3143a93>](https://localhost:8080/#) in <cell line: 22>() 20 21 ---> 22 from datasets import load_dataset 23 import pyarrow.parquet as pq 24 import pyarrow.lib as lib 5 frames [/usr/local/lib/python3.10/dist-packages/datasets/__init__.py](https://localhost:8080/#) in <module> 15 __version__ = "2.21.0" 16 ---> 17 from .arrow_dataset import Dataset 18 from .arrow_reader import ReadInstruction 19 from .builder import ArrowBasedBuilder, BeamBasedBuilder, BuilderConfig, DatasetBuilder, GeneratorBasedBuilder [/usr/local/lib/python3.10/dist-packages/datasets/arrow_dataset.py](https://localhost:8080/#) in <module> 74 75 from . import config ---> 76 from .arrow_reader import ArrowReader 77 from .arrow_writer import ArrowWriter, OptimizedTypedSequence 78 from .data_files import sanitize_patterns [/usr/local/lib/python3.10/dist-packages/datasets/arrow_reader.py](https://localhost:8080/#) in <module> 27 28 import pyarrow as pa ---> 29 import pyarrow.parquet as pq 30 from tqdm.contrib.concurrent import thread_map 31 [/usr/local/lib/python3.10/dist-packages/pyarrow/parquet/__init__.py](https://localhost:8080/#) in <module> 18 # flake8: noqa 19 ---> 20 from .core import * [/usr/local/lib/python3.10/dist-packages/pyarrow/parquet/core.py](https://localhost:8080/#) in <module> 31 32 try: ---> 33 import pyarrow._parquet as _parquet 34 except ImportError as exc: 35 raise ImportError( /usr/local/lib/python3.10/dist-packages/pyarrow/_parquet.pyx in init pyarrow._parquet() AttributeError: module 'pyarrow.lib' has no attribute 'ListViewType' ### Steps to reproduce the bug https://colab.research.google.com/drive/1HNbsg3tHxUJOHVtYIaRnNGY4T2PnLn4a?usp=sharing ### Expected behavior Looks like there is an issue with datasets and pyarrow ### Environment info google colab python huggingface Found existing installation: pyarrow 17.0.0 Uninstalling pyarrow-17.0.0: Successfully uninstalled pyarrow-17.0.0 Collecting pyarrow Downloading pyarrow-17.0.0-cp310-cp310-manylinux_2_28_x86_64.whl.metadata (3.3 kB) Requirement already satisfied: numpy>=1.16.6 in /usr/local/lib/python3.10/dist-packages (from pyarrow) (1.26.4) Downloading pyarrow-17.0.0-cp310-cp310-manylinux_2_28_x86_64.whl (39.9 MB) ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 39.9/39.9 MB 188.9 MB/s eta 0:00:00 Installing collected packages: pyarrow ERROR: pip's dependency resolver does not currently take into account all the packages that are installed. This behaviour is the source of the following dependency conflicts. cudf-cu12 24.4.1 requires pyarrow<15.0.0a0,>=14.0.1, but you have pyarrow 17.0.0 which is incompatible. ibis-framework 8.0.0 requires pyarrow<16,>=2, but you have pyarrow 17.0.0 which is incompatible. Successfully installed pyarrow-17.0.0 WARNING: The following packages were previously imported in this runtime: [pyarrow] You must restart the runtime in order to use newly installed versions.
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova", "user_view_type": "public" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/7115/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/7115/timeline
null
completed
null
null
499.756667
563
https://api.github.com/repos/huggingface/datasets/issues/7113
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/7113/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/7113/comments
https://api.github.com/repos/huggingface/datasets/issues/7113/events
https://github.com/huggingface/datasets/issues/7113
2,475,029,640
I_kwDODunzps6ThfSI
7,113
Stream dataset does not iterate if the batch size is larger than the dataset size (related to drop_last_batch)
{ "avatar_url": "https://avatars.githubusercontent.com/u/4197249?v=4", "events_url": "https://api.github.com/users/memray/events{/privacy}", "followers_url": "https://api.github.com/users/memray/followers", "following_url": "https://api.github.com/users/memray/following{/other_user}", "gists_url": "https://api.github.com/users/memray/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/memray", "id": 4197249, "login": "memray", "node_id": "MDQ6VXNlcjQxOTcyNDk=", "organizations_url": "https://api.github.com/users/memray/orgs", "received_events_url": "https://api.github.com/users/memray/received_events", "repos_url": "https://api.github.com/users/memray/repos", "site_admin": false, "starred_url": "https://api.github.com/users/memray/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/memray/subscriptions", "type": "User", "url": "https://api.github.com/users/memray", "user_view_type": "public" }
[]
closed
false
null
[]
null
[ "That's expected behavior, it's also the same in `torch`:\r\n\r\n```python\r\n>>> list(DataLoader(list(range(5)), batch_size=10, drop_last=True))\r\n[]\r\n```" ]
2024-08-20T08:26:40Z
2024-08-26T04:24:11Z
2024-08-26T04:24:10Z
NONE
null
null
{ "completed": 0, "percent_completed": 0, "total": 0 }
### Describe the bug Hi there, I use streaming and interleaving to combine multiple datasets saved in jsonl files. The size of dataset can vary (from 100ish to 100k-ish). I use dataset.map() and a big batch size to reduce the IO cost. It was working fine with datasets-2.16.1 but this problem shows up after I upgraded to datasets-2.19.2. With 2.21.0 the problem remains. Please see the code below to reproduce the problem. The dataset can iterate correctly if we set either streaming=False or drop_last_batch=False. I have to use drop_last_batch=True since it's for distributed training. ### Steps to reproduce the bug ```python # datasets==2.21.0 import datasets def data_prepare(examples): print(examples["sentence1"][0]) return examples batch_size = 101 # the size of the dataset is 100 # the dataset iterates correctly if we set either streaming=False or drop_last_batch=False dataset = datasets.load_dataset("mteb/biosses-sts", split="test", streaming=True) dataset = dataset.map(lambda x: data_prepare(x), drop_last_batch=True, batched=True, batch_size=batch_size) for ex in dataset: print(ex) pass ``` ### Expected behavior The dataset iterates regardless of the batch size. ### Environment info - `datasets` version: 2.21.0 - Platform: Linux-6.1.58+-x86_64-with-glibc2.35 - Python version: 3.10.14 - `huggingface_hub` version: 0.24.5 - PyArrow version: 17.0.0 - Pandas version: 2.2.2 - `fsspec` version: 2024.2.0
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova", "user_view_type": "public" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/7113/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/7113/timeline
null
completed
null
null
139.958333
565
https://api.github.com/repos/huggingface/datasets/issues/7111
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/7111/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/7111/comments
https://api.github.com/repos/huggingface/datasets/issues/7111/events
https://github.com/huggingface/datasets/issues/7111
2,474,915,845
I_kwDODunzps6ThDgF
7,111
CI is broken for numpy-2: Failed to fetch wheel: llvmlite==0.34.0
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova", "user_view_type": "public" }
[]
closed
false
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova", "user_view_type": "public" }
[ { "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova", "user_view_type": "public" } ]
null
[ "Note that the CI before was using:\r\n- llvmlite: 0.43.0\r\n- numba: 0.60.0\r\n\r\nNow it tries to use:\r\n- llvmlite: 0.34.0\r\n- numba: 0.51.2", "The issue is because numba-0.60.0 pins numpy<2.1 and `uv` tries to install latest numpy-2.1.0 with an old numba-0.51.0 version (and llvmlite-0.34.0). See discussion in their repo:\r\n- https://github.com/numba/numba/issues/9708\r\n\r\nLatest numpy-2.1.0 will be supported by the next numba-0.61.0 release in September.\r\n\r\nNote that our CI requires numba with the \"audio\" extra:\r\n- librosa > numba" ]
2024-08-20T07:27:28Z
2024-08-21T05:05:36Z
2024-08-20T09:02:36Z
MEMBER
null
null
{ "completed": 0, "percent_completed": 0, "total": 0 }
CI is broken with error `Failed to fetch wheel: llvmlite==0.34.0`: https://github.com/huggingface/datasets/actions/runs/10466825281/job/28984414269 ``` Run uv pip install --system "datasets[tests_numpy2] @ ." Resolved 150 packages in 4.42s error: Failed to prepare distributions Caused by: Failed to fetch wheel: llvmlite==0.34.0 Caused by: Build backend failed to build wheel through `build_wheel()` with exit status: 1 --- stdout: running bdist_wheel /home/runner/.cache/uv/builds-v0/.tmpcyKh8S/bin/python /home/runner/.cache/uv/built-wheels-v3/pypi/llvmlite/0.34.0/wrk1bNwq1gleSiznvrSEZ/llvmlite-0.34.0.tar.gz/ffi/build.py LLVM version... --- stderr: Traceback (most recent call last): File "/home/runner/.cache/uv/built-wheels-v3/pypi/llvmlite/0.34.0/wrk1bNwq1gleSiznvrSEZ/llvmlite-0.34.0.tar.gz/ffi/build.py", line 105, in main_posix out = subprocess.check_output([llvm_config, '--version']) File "/opt/hostedtoolcache/Python/3.10.14/x64/lib/python3.10/subprocess.py", line 421, in check_output return run(*popenargs, stdout=PIPE, timeout=timeout, check=True, File "/opt/hostedtoolcache/Python/3.10.14/x64/lib/python3.10/subprocess.py", line 503, in run with Popen(*popenargs, **kwargs) as process: File "/opt/hostedtoolcache/Python/3.10.14/x64/lib/python3.10/subprocess.py", line 971, in __init__ self._execute_child(args, executable, preexec_fn, close_fds, File "/opt/hostedtoolcache/Python/3.10.14/x64/lib/python3.10/subprocess.py", line 1863, in _execute_child raise child_exception_type(errno_num, err_msg, err_filename) FileNotFoundError: [Errno 2] No such file or directory: 'llvm-config' During handling of the above exception, another exception occurred: Traceback (most recent call last): File "/home/runner/.cache/uv/built-wheels-v3/pypi/llvmlite/0.34.0/wrk1bNwq1gleSiznvrSEZ/llvmlite-0.34.0.tar.gz/ffi/build.py", line 191, in <module> main() File "/home/runner/.cache/uv/built-wheels-v3/pypi/llvmlite/0.34.0/wrk1bNwq1gleSiznvrSEZ/llvmlite-0.34.0.tar.gz/ffi/build.py", line 181, in main main_posix('linux', '.so') File "/home/runner/.cache/uv/built-wheels-v3/pypi/llvmlite/0.34.0/wrk1bNwq1gleSiznvrSEZ/llvmlite-0.34.0.tar.gz/ffi/build.py", line 107, in main_posix raise RuntimeError("%s failed executing, please point LLVM_CONFIG " RuntimeError: llvm-config failed executing, please point LLVM_CONFIG to the path for llvm-config error: command '/home/runner/.cache/uv/builds-v0/.tmpcyKh8S/bin/python' failed with exit code 1 ```
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova", "user_view_type": "public" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/7111/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/7111/timeline
null
completed
null
null
1.585556
567
https://api.github.com/repos/huggingface/datasets/issues/7109
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/7109/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/7109/comments
https://api.github.com/repos/huggingface/datasets/issues/7109/events
https://github.com/huggingface/datasets/issues/7109
2,473,367,848
I_kwDODunzps6TbJko
7,109
ConnectionError for gated datasets and unauthenticated users
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova", "user_view_type": "public" }
[]
closed
false
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova", "user_view_type": "public" }
[ { "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova", "user_view_type": "public" } ]
null
[]
2024-08-19T13:27:45Z
2024-08-20T09:14:36Z
2024-08-20T09:14:35Z
MEMBER
null
null
{ "completed": 0, "percent_completed": 0, "total": 0 }
Since the Hub returns dataset info for gated datasets even to unauthenticated users, there is dead code: https://github.com/huggingface/datasets/blob/98fdc9e78e6d057ca66e58a37f49d6618aab8130/src/datasets/load.py#L1846-L1852 We should remove the dead code and properly handle this case: currently we are raising a `ConnectionError` instead of a `DatasetNotFoundError` (as before). See: - https://github.com/huggingface/dataset-viewer/issues/3025 - https://github.com/huggingface/huggingface_hub/issues/2457
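A minimal sketch, hedged and not the actual fix, of the user-facing behavior the issue asks for: an unauthenticated load of a gated repo should surface `DatasetNotFoundError` rather than `ConnectionError`. The repo name below is a placeholder.

```python
from datasets import load_dataset
from datasets.exceptions import DatasetNotFoundError

try:
    # token=False skips any locally stored credentials, simulating an
    # unauthenticated user; "some-org/some-gated-dataset" is hypothetical.
    load_dataset("some-org/some-gated-dataset", token=False)
except DatasetNotFoundError:
    print("dataset is gated or missing and no valid token was provided")
```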
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova", "user_view_type": "public" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/7109/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/7109/timeline
null
completed
null
null
19.780556
569
https://api.github.com/repos/huggingface/datasets/issues/7108
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/7108/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/7108/comments
https://api.github.com/repos/huggingface/datasets/issues/7108/events
https://github.com/huggingface/datasets/issues/7108
2,470,665,327
I_kwDODunzps6TQ1xv
7,108
website broken: Create a new dataset repository, doesn't create a new repo in Firefox
{ "avatar_url": "https://avatars.githubusercontent.com/u/147971?v=4", "events_url": "https://api.github.com/users/neoneye/events{/privacy}", "followers_url": "https://api.github.com/users/neoneye/followers", "following_url": "https://api.github.com/users/neoneye/following{/other_user}", "gists_url": "https://api.github.com/users/neoneye/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/neoneye", "id": 147971, "login": "neoneye", "node_id": "MDQ6VXNlcjE0Nzk3MQ==", "organizations_url": "https://api.github.com/users/neoneye/orgs", "received_events_url": "https://api.github.com/users/neoneye/received_events", "repos_url": "https://api.github.com/users/neoneye/repos", "site_admin": false, "starred_url": "https://api.github.com/users/neoneye/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/neoneye/subscriptions", "type": "User", "url": "https://api.github.com/users/neoneye", "user_view_type": "public" }
[]
closed
false
null
[]
null
[ "I don't reproduce, I was able to create a new repo: https://huggingface.co/datasets/severo/reproduce-datasets-issues-7108. Can you confirm it's still broken?", "I have just tried again.\r\n\r\nFirefox: The `Create dataset` doesn't work. It has worked in the past. It's my preferred browser.\r\n\r\nChrome: The `Create dataset` works.\r\n\r\nIt seems to be a Firefox specific issue.", "I have updated Firefox 129.0 (64 bit), and now the `Create dataset` is working again in Firefox.\r\n\r\nUX: It would be nice with better error messages on HuggingFace.", "maybe an issue with the cookie. cc @Wauplin @coyotte508 " ]
2024-08-16T17:23:00Z
2024-08-19T13:21:12Z
2024-08-19T06:52:48Z
NONE
null
null
{ "completed": 0, "percent_completed": 0, "total": 0 }
### Describe the bug This issue is also reported here: https://discuss.huggingface.co/t/create-a-new-dataset-repository-broken-page/102644 This page is broken: https://huggingface.co/new-dataset I fill in the form with my text and click `Create dataset`. ![Screenshot 2024-08-16 at 15 55 37](https://github.com/user-attachments/assets/de16627b-7a55-4bcf-9f0b-a48227aabfe6) Then the form gets wiped, and no repo gets created. No error message is visible in the developer console. ![Screenshot 2024-08-16 at 15 56 54](https://github.com/user-attachments/assets/0520164b-431c-40a5-9634-11fd62c4f4c3) # Idea for improvement For better UX, if the repo cannot be created, show an error message that something went wrong. # Workaround that works for me ```python from huggingface_hub import HfApi repo_id = 'simon-arc-solve-fractal-v3' api = HfApi() repo_url = api.create_repo(repo_id=repo_id, exist_ok=True, private=True, repo_type="dataset") ``` ### Steps to reproduce the bug Go to https://huggingface.co/new-dataset Fill in the form. Click `Create dataset`. Now the form is cleared, and the page doesn't jump anywhere. ### Expected behavior The moment the user clicks `Create dataset`, the repo gets created and the page jumps to the created repo. ### Environment info Firefox 128.0.3 (64-bit) macOS Sonoma 14.5
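A quick follow-up check, sketched under the assumption that the repo id from the workaround above was created under the caller's own namespace, to confirm the repo actually exists after `create_repo` returns:

```python
from huggingface_hub import HfApi

api = HfApi()
# whoami() resolves the namespace for the repo created by the workaround above.
repo_id = f"{api.whoami()['name']}/simon-arc-solve-fractal-v3"
print(api.repo_exists(repo_id, repo_type="dataset"))  # True once creation succeeded
```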
{ "avatar_url": "https://avatars.githubusercontent.com/u/147971?v=4", "events_url": "https://api.github.com/users/neoneye/events{/privacy}", "followers_url": "https://api.github.com/users/neoneye/followers", "following_url": "https://api.github.com/users/neoneye/following{/other_user}", "gists_url": "https://api.github.com/users/neoneye/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/neoneye", "id": 147971, "login": "neoneye", "node_id": "MDQ6VXNlcjE0Nzk3MQ==", "organizations_url": "https://api.github.com/users/neoneye/orgs", "received_events_url": "https://api.github.com/users/neoneye/received_events", "repos_url": "https://api.github.com/users/neoneye/repos", "site_admin": false, "starred_url": "https://api.github.com/users/neoneye/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/neoneye/subscriptions", "type": "User", "url": "https://api.github.com/users/neoneye", "user_view_type": "public" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/7108/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/7108/timeline
null
completed
null
null
61.496667
570
https://api.github.com/repos/huggingface/datasets/issues/7107
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/7107/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/7107/comments
https://api.github.com/repos/huggingface/datasets/issues/7107/events
https://github.com/huggingface/datasets/issues/7107
2,470,444,732
I_kwDODunzps6TP_68
7,107
load_dataset broken in 2.21.0
{ "avatar_url": "https://avatars.githubusercontent.com/u/1911631?v=4", "events_url": "https://api.github.com/users/anjor/events{/privacy}", "followers_url": "https://api.github.com/users/anjor/followers", "following_url": "https://api.github.com/users/anjor/following{/other_user}", "gists_url": "https://api.github.com/users/anjor/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/anjor", "id": 1911631, "login": "anjor", "node_id": "MDQ6VXNlcjE5MTE2MzE=", "organizations_url": "https://api.github.com/users/anjor/orgs", "received_events_url": "https://api.github.com/users/anjor/received_events", "repos_url": "https://api.github.com/users/anjor/repos", "site_admin": false, "starred_url": "https://api.github.com/users/anjor/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/anjor/subscriptions", "type": "User", "url": "https://api.github.com/users/anjor", "user_view_type": "public" }
[]
closed
false
null
[]
null
[ "There seems to be a PR related to the load_dataset path that went into 2.21.0 -- https://github.com/huggingface/datasets/pull/6862/files\r\n\r\nTaking a look at it now", "+1\r\n\r\nDowngrading to 2.20.0 fixed my issue, hopefully helpful for others.", "I tried adding a simple test to `test_load.py` with the alpaca eval dataset but the test didn't fail :(. \r\n\r\nSo looks like this might have something to do with the environment? ", "There was an issue with the script of the \"tatsu-lab/alpaca_eval\" dataset.\r\n\r\nI was fixed with this PR: \r\n- [Fix FileNotFoundError](https://huggingface.co/datasets/tatsu-lab/alpaca_eval/discussions/2)\r\n\r\nIt should work now if you retry to load the dataset." ]
2024-08-16T14:59:51Z
2024-08-18T09:28:43Z
2024-08-18T09:27:12Z
NONE
null
null
{ "completed": 0, "percent_completed": 0, "total": 0 }
### Describe the bug `eval_set = datasets.load_dataset("tatsu-lab/alpaca_eval", "alpaca_eval_gpt4_baseline", trust_remote_code=True)` used to work up to 2.20.0 but doesn't work in 2.21.0. In 2.20.0: ![Screenshot 2024-08-16 at 3 57 10 PM](https://github.com/user-attachments/assets/0516489b-8187-486d-bee8-88af3381dee9) In 2.21.0: ![Screenshot 2024-08-16 at 3 57 24 PM](https://github.com/user-attachments/assets/bc257570-f461-41e4-8717-90a69ed7c24f) ### Steps to reproduce the bug 1. Spin up a new Google Colab 2. `pip install datasets==2.21.0` 3. `import datasets` 4. `eval_set = datasets.load_dataset("tatsu-lab/alpaca_eval", "alpaca_eval_gpt4_baseline", trust_remote_code=True)` 5. It will throw an error. ### Expected behavior Try steps 1-5 again but replace the datasets version with 2.20.0; it will work. ### Environment info - `datasets` version: 2.21.0 - Platform: Linux-6.1.85+-x86_64-with-glibc2.35 - Python version: 3.10.12 - `huggingface_hub` version: 0.23.5 - PyArrow version: 17.0.0 - Pandas version: 2.1.4 - `fsspec` version: 2024.5.0
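A hedged sketch of the workaround reported in the comments above: pin datasets to the previous release (e.g. `pip install datasets==2.20.0`) until the dataset-script fix linked in the discussion is picked up.

```python
import datasets

# Guard against the broken 2.21.0 resolution path reported in this issue.
assert datasets.__version__ == "2.20.0", "pin datasets==2.20.0 for this workaround"

eval_set = datasets.load_dataset(
    "tatsu-lab/alpaca_eval", "alpaca_eval_gpt4_baseline", trust_remote_code=True
)
```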
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova", "user_view_type": "public" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 1, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 1, "url": "https://api.github.com/repos/huggingface/datasets/issues/7107/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/7107/timeline
null
completed
null
null
42.455833
571