Columns:

- id: int64 (953M to 3.35B)
- number: int64 (2.72k to 7.75k)
- title: string (1 to 290 characters)
- state: string (2 classes)
- created_at: timestamp[s] (2021-07-26 12:21:17 to 2025-08-23 00:18:43)
- updated_at: timestamp[s] (2021-07-26 13:27:59 to 2025-08-23 12:34:39)
- closed_at: timestamp[s] (2021-07-26 13:27:59 to 2025-08-20 16:35:55; null while an item is open)
- html_url: string (49 to 51 characters)
- pull_request: dict (null for plain issues)
- user_login: string (3 to 26 characters)
- is_pull_request: bool (2 classes)
- comments: list (0 to 30 entries)
3,178,952,517
| 7,647
|
loading mozilla-foundation--common_voice_11_0 fails
|
open
| 2025-06-26T12:23:48
| 2025-07-10T14:49:30
| null |
https://github.com/huggingface/datasets/issues/7647
| null |
pavel-esir
| false
|
[
"@claude Could you please address this issue",
"kinda related: https://github.com/huggingface/datasets/issues/7675"
] |
3,178,036,854
| 7,646
|
Introduces automatic subset-level grouping for folder-based dataset builders #7066
|
open
| 2025-06-26T07:01:37
| 2025-07-14T10:42:56
| null |
https://github.com/huggingface/datasets/pull/7646
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/7646",
"html_url": "https://github.com/huggingface/datasets/pull/7646",
"diff_url": "https://github.com/huggingface/datasets/pull/7646.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/7646.patch",
"merged_at": null
}
|
ArjunJagdale
| true
|
[
"It adds automatic grouping of files into subsets based on their root name (e.g., `train0.jsonl`, `train1.jsonl` → `\"train\"`), as discussed above. The logic is integrated into `FolderBasedBuilder` and is fully tested + documented.\r\n\r\nLet me know if any changes are needed — happy to iterate!",
"Hi ! I believe the subsets need to be instantiated here as `configs` - not `splits` (which are meant for train/validation/test):\r\n\r\nhttps://github.com/huggingface/datasets/blob/ef762e664a2a1675368ed7a203b0ac8cecca6e19/src/datasets/load.py#L647-L662\r\n\r\nAlso the subset names should probably be inferred only from the parquet/csv/json files and not from png/jpeg/wav/mp4 etc. WDYT ?",
"> Hi ! I believe the subsets need to be instantiated here as `configs` - not `splits` (which are meant for train/validation/test):\r\n> \r\n> https://github.com/huggingface/datasets/blob/ef762e664a2a1675368ed7a203b0ac8cecca6e19/src/datasets/load.py#L647-L662\r\n> \r\n> Also the subset names should probably be inferred only from the parquet/csv/json files and not from png/jpeg/wav/mp4 etc. WDYT ?\r\n\r\nThanks a lot for the review!\r\n\r\nYou're absolutely right — treating subsets as separate configs instead of overloaded splits makes much more sense. If that approach sounds good to you, I can move the grouping logic to `load.py`, where configs are instantiated, and revise the PR to emit one `BuilderConfig` per grouped subset.\r\n\r\nAlso totally agree on limiting grouping to structured file types — I’d scope this to `.json`, `.jsonl`, `.csv`, and `.parquet`.\r\n\r\nLet me know if this direction sounds good, and I’ll get started on the changes right away!\r\n",
"Hi! @lhoestq!"
] |
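The grouping rule discussed in this thread can be sketched in a few lines. This is a minimal illustration, not the actual `FolderBasedBuilder` code; the function name and the suffix allow-list are assumptions based on the review comments above.

```python
# Minimal sketch of the subset grouping discussed above: strip trailing digits
# from the file stem (train0.jsonl, train1.jsonl -> "train"), and only consider
# structured file types, as suggested in the review. Names are illustrative.
import re
from collections import defaultdict
from pathlib import Path

STRUCTURED_SUFFIXES = {".json", ".jsonl", ".csv", ".parquet"}

def group_files_into_subsets(files):
    groups = defaultdict(list)
    for f in files:
        path = Path(f)
        if path.suffix not in STRUCTURED_SUFFIXES:
            continue  # skip png/jpeg/wav/mp4 etc.
        root = re.sub(r"[-_]?\d+$", "", path.stem) or path.stem
        groups[root].append(f)
    return dict(groups)

print(group_files_into_subsets(["train0.jsonl", "train1.jsonl", "test.csv", "img0.png"]))
# {'train': ['train0.jsonl', 'train1.jsonl'], 'test': ['test.csv']}
```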
3,176,810,164
| 7,645
|
`ClassLabel` docs: Correct value for unknown labels
|
open
| 2025-06-25T20:01:35
| 2025-06-25T20:01:35
| null |
https://github.com/huggingface/datasets/pull/7645
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/7645",
"html_url": "https://github.com/huggingface/datasets/pull/7645",
"diff_url": "https://github.com/huggingface/datasets/pull/7645.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/7645.patch",
"merged_at": null
}
|
l-uuz
| true
|
[] |
3,176,363,492
| 7,644
|
fix sequence ci
|
closed
| 2025-06-25T17:07:55
| 2025-06-25T17:10:30
| 2025-06-25T17:08:01
|
https://github.com/huggingface/datasets/pull/7644
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/7644",
"html_url": "https://github.com/huggingface/datasets/pull/7644",
"diff_url": "https://github.com/huggingface/datasets/pull/7644.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/7644.patch",
"merged_at": "2025-06-25T17:08:01"
}
|
lhoestq
| true
|
[
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_7644). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update."
] |
3,176,354,431
| 7,643
|
Backward compat sequence instance
|
closed
| 2025-06-25T17:05:09
| 2025-06-25T17:07:40
| 2025-06-25T17:05:44
|
https://github.com/huggingface/datasets/pull/7643
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/7643",
"html_url": "https://github.com/huggingface/datasets/pull/7643",
"diff_url": "https://github.com/huggingface/datasets/pull/7643.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/7643.patch",
"merged_at": "2025-06-25T17:05:43"
}
|
lhoestq
| true
|
[
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_7643). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update."
] |
3,176,025,890
| 7,642
|
fix length for ci
|
closed
| 2025-06-25T15:10:38
| 2025-06-25T15:11:53
| 2025-06-25T15:11:51
|
https://github.com/huggingface/datasets/pull/7642
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/7642",
"html_url": "https://github.com/huggingface/datasets/pull/7642",
"diff_url": "https://github.com/huggingface/datasets/pull/7642.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/7642.patch",
"merged_at": "2025-06-25T15:11:51"
}
|
lhoestq
| true
|
[] |
3,175,953,405
| 7,641
|
update docs and docstrings
|
closed
| 2025-06-25T14:48:58
| 2025-06-25T14:51:46
| 2025-06-25T14:49:33
|
https://github.com/huggingface/datasets/pull/7641
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/7641",
"html_url": "https://github.com/huggingface/datasets/pull/7641",
"diff_url": "https://github.com/huggingface/datasets/pull/7641.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/7641.patch",
"merged_at": "2025-06-25T14:49:33"
}
|
lhoestq
| true
|
[
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_7641). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update."
] |
3,175,914,924
| 7,640
|
better features repr
|
closed
| 2025-06-25T14:37:32
| 2025-06-25T14:46:47
| 2025-06-25T14:46:45
|
https://github.com/huggingface/datasets/pull/7640
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/7640",
"html_url": "https://github.com/huggingface/datasets/pull/7640",
"diff_url": "https://github.com/huggingface/datasets/pull/7640.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/7640.patch",
"merged_at": "2025-06-25T14:46:45"
}
|
lhoestq
| true
|
[
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_7640). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update."
] |
3,175,616,169
| 7,639
|
fix save_infos
|
closed
| 2025-06-25T13:16:26
| 2025-06-25T13:19:33
| 2025-06-25T13:16:33
|
https://github.com/huggingface/datasets/pull/7639
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/7639",
"html_url": "https://github.com/huggingface/datasets/pull/7639",
"diff_url": "https://github.com/huggingface/datasets/pull/7639.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/7639.patch",
"merged_at": "2025-06-25T13:16:33"
}
|
lhoestq
| true
|
[
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_7639). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update."
] |
3,172,645,391
| 7,638
|
Add ignore_decode_errors option to Image feature for robust decoding #7612
|
open
| 2025-06-24T16:47:51
| 2025-07-04T07:07:30
| null |
https://github.com/huggingface/datasets/pull/7638
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/7638",
"html_url": "https://github.com/huggingface/datasets/pull/7638",
"diff_url": "https://github.com/huggingface/datasets/pull/7638.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/7638.patch",
"merged_at": null
}
|
ArjunJagdale
| true
|
[
"cc @lhoestq",
"I think splitting the error handling for the main image decoding process and the metadata decoding process is possibly a bit nicer, as some images do render correctly, but their metadata might be invalid and cause the pipeline to fail, which I've encountered recently as in #7668.\r\n\r\nThe [`decode_image`](https://docs.pytorch.org/vision/main/generated/torchvision.io.decode_image.html) function in `torchvision` handles similar cases by using the `apply_exif_orientation` flag to turn off the exif metadata processing entirely.",
"> I think splitting the error handling for the main image decoding process and the metadata decoding process is possibly a bit nicer, as some images do render correctly, but their metadata might be invalid and cause the pipeline to fail, which I've encountered recently as in #7668.\r\n> The [`decode_image`](https://docs.pytorch.org/vision/main/generated/torchvision.io.decode_image.html) function in `torchvision` handles similar cases by using the `apply_exif_orientation` flag to turn off the exif metadata processing entirely.\r\n \r\n @lhoestq & @Seas0 — that makes total sense.\r\n \r\nCurrently, if EXIF metadata like `.getexif()` fails (due to malformed tags), the whole image gets dropped even if it renders correctly — not ideal.\r\n \r\nTo address this, I'm planning to split the EXIF handling into a separate `try/except` block, like:\r\n```python\r\ntry:\r\n exif = image.getexif()\r\n if exif.get(PIL.Image.ExifTags.Base.Orientation) is not None:\r\n image = PIL.ImageOps.exif_transpose(image)\r\nexcept Exception as exif_err:\r\n if self.ignore_decode_errors:\r\n warnings.warn(f\"[Image.decode_example] Skipped EXIF metadata: {exif_err}\")\r\n else:\r\n raise\r\n```\r\n\r\nSo that, Valid but EXIF-broken images will still be returned & EXIF failures will be skipped only if ignore_decode_errors=True. \r\n\r\nSounds good??",
"With the recent EXIF decoding isolation logic added, this PR now fully addresses:\r\n\r\n- ✅ #7612 – Robust iteration over corrupt samples (especially useful in `.streaming=True`)\r\n- ✅ #7632 – Graceful handling of invalid image files when using `cast_column(..., Image(...))`\r\n- ✅ #7668 – Broken EXIF metadata no longer crashes decoding; images are returned if usable\r\n\r\nAll decoding errors (including `.getexif()` and image file loading) are now skipped with a warning when `ignore_decode_errors=True`. This enables safe, scalable image preprocessing pipelines."
] |
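Putting the thread together, usage would look like the sketch below. Note that `ignore_decode_errors` is the flag proposed in this PR, not an argument of any released `datasets` version.

```python
# Hypothetical usage of the ignore_decode_errors flag proposed in this PR:
# corrupted images decode to None (with a warning) instead of raising.
from datasets import load_dataset, Image

ds = load_dataset("imagefolder", data_dir="path/to/images", split="train", streaming=True)
ds = ds.cast_column("image", Image(decode=True, ignore_decode_errors=True))

for sample in ds:
    if sample["image"] is None:  # decoding failed and was skipped gracefully
        continue
    # ... process the valid image
```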
3,171,883,522
| 7,637
|
Introduce subset_name as an alias of config_name
|
open
| 2025-06-24T12:49:01
| 2025-07-01T16:08:33
| null |
https://github.com/huggingface/datasets/issues/7637
| null |
albertvillanova
| false
|
[
"I second this! When you come from the Hub, the intuitive question is \"how do I set the subset name\", and it's not easily answered from the docs: `subset_name` would answer this directly.",
"I've submitted PR [#7657](https://github.com/huggingface/datasets/pull/7657) to introduce subset_name as a user-facing alias for name in load_dataset, keeping terminology consistent with the Hub UI (“Subset”). It’s fully backward-compatible and includes a conflict check.\n\nLet me know if you'd like me to include tests as part of the PR — happy to add them if needed!",
"The main usage is as a positional argument anyway, so I wouldn't necessarily agree that we need an alias (with the risk of confusing users). But happy to have more mentions in the docs of syntaxes like `load_dataset(\"dataset_name\", \"subset_name\")`",
"> The main usage is as a positional argument anyway, so I wouldn't necessarily agree that we need an alias (with the risk of confusing users). But happy to have more mentions in the docs of syntaxes like `load_dataset(\"dataset_name\", \"subset_name\")`\n\nThanks @lhoestq, totally fair point — especially with positional usage being the norm. I’m happy to align with the team’s direction here. If you'd prefer, I can update this PR to shift the focus to documentation/examples (e.g., showing \"subset_name\" as the second arg)."
] |
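For reference, the positional syntax that the maintainer suggests documenting more prominently, alongside the alias proposed in this issue:

```python
from datasets import load_dataset

# Today: the subset (config) name is the second positional argument...
ds = load_dataset("mozilla-foundation/common_voice_11_0", "en", split="train")

# ...or the `name` keyword argument.
ds = load_dataset("mozilla-foundation/common_voice_11_0", name="en", split="train")

# The alias proposed in this issue (PR #7657, not merged at this point) would read:
# ds = load_dataset("mozilla-foundation/common_voice_11_0", subset_name="en", split="train")
```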
3,170,878,167
| 7,636
|
"open" in globals()["__builtins__"], an error occurs: "TypeError: argument of type 'module' is not iterable"
|
open
| 2025-06-24T08:09:39
| 2025-07-10T04:13:16
| null |
https://github.com/huggingface/datasets/issues/7636
| null |
kuanyan9527
| false
|
[
"@kuanyan9527 Your query is indeed valid. Following could be its reasoning:\n\nQuoting from https://stackoverflow.com/a/11181607:\n\"By default, when in the `__main__` module,` __builtins__` is the built-in module `__builtin__` (note: no 's'); when in any other module, `__builtins__` is an alias for the dictionary of the `__builtin__` module itself.\"\n\nCan you confirm if you are running the snippet `print(\"open\" in globals()[\"__builtins__\"])` in the default? In that case, as expected, `__builtins__` is a module which is causing the error. But in the codebase, the class `patch_submodule`, is primarily used in the second circumstance, where it acts as a dictionary. Hence causing the code to function successfully.\n\nHope this helps.",
"@kuanyan9527 Are there any more queries in this regards, else please feel free to close the issue.\nThank you.",
"Your answer is very important to me,thanks.",
"I encountered this error when running datasets with pypy,\n`TypeError: argument of type 'module' is not iterable` in [src/datasets/utils/patching.py#L96](https://github.com/huggingface/datasets/blob/3.6.0/src/datasets/utils/patching.py#L96)\nby modifying `globals()[\"__builtins__\"]` to `builtins.__dict__`, importing via `import builtins`.\nCan this be applied to the community?"
] |
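The behavior discussed in this thread is easy to reproduce with a short standalone demo (no `datasets` involved):

```python
# In the __main__ module, __builtins__ is the builtins *module*, so membership
# tests on it raise TypeError; builtins.__dict__ is always a dict and is safe.
import builtins

print(type(globals()["__builtins__"]))  # <class 'module'> when run as a script
print("open" in builtins.__dict__)      # True in every circumstance

# print("open" in globals()["__builtins__"])
# -> TypeError: argument of type 'module' is not iterable (when run as a script)
```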
3,170,486,408
| 7,635
|
Fix: Preserve float columns in JSON loader when values are integer-like (e.g. 0.0, 1.0)
|
open
| 2025-06-24T06:16:48
| 2025-06-24T06:16:48
| null |
https://github.com/huggingface/datasets/pull/7635
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/7635",
"html_url": "https://github.com/huggingface/datasets/pull/7635",
"diff_url": "https://github.com/huggingface/datasets/pull/7635.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/7635.patch",
"merged_at": null
}
|
ArjunJagdale
| true
|
[] |
3,169,389,653
| 7,634
|
Replace Sequence by List
|
closed
| 2025-06-23T20:35:48
| 2025-06-25T13:59:13
| 2025-06-25T13:59:11
|
https://github.com/huggingface/datasets/pull/7634
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/7634",
"html_url": "https://github.com/huggingface/datasets/pull/7634",
"diff_url": "https://github.com/huggingface/datasets/pull/7634.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/7634.patch",
"merged_at": "2025-06-25T13:59:11"
}
|
lhoestq
| true
|
[
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_7634). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update."
] |
3,168,399,637
| 7,633
|
Proposal: Small Tamil Discourse Coherence Dataset.
|
open
| 2025-06-23T14:24:40
| 2025-06-23T14:24:40
| null |
https://github.com/huggingface/datasets/issues/7633
| null |
bikkiNitSrinagar
| false
|
[] |
3,168,283,589
| 7,632
|
Graceful Error Handling for cast_column("image", Image(decode=True)) in Hugging Face Datasets
|
open
| 2025-06-23T13:49:24
| 2025-07-08T06:52:53
| null |
https://github.com/huggingface/datasets/issues/7632
| null |
ganiket19
| false
|
[
"Hi! This is now handled in PR #7638",
"Thank you for implementing the suggestion it would be great help in our use case. "
] |
3,165,127,657
| 7,631
|
Pass user-agent from DownloadConfig into fsspec storage_options
|
open
| 2025-06-21T14:22:25
| 2025-06-21T14:25:28
| null |
https://github.com/huggingface/datasets/pull/7631
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/7631",
"html_url": "https://github.com/huggingface/datasets/pull/7631",
"diff_url": "https://github.com/huggingface/datasets/pull/7631.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/7631.patch",
"merged_at": null
}
|
ArjunJagdale
| true
|
[
"- This PR assumes that `HfFileSystem` in `huggingface_hub` supports receiving `headers` in `storage_options`. If not, a follow-up PR can be opened to add this support to `HfFileSystem.__init__`.\r\n- No test was added for this since it’s a config passthrough. If needed, I’d be happy to add one."
] |
3,164,650,900
| 7,630
|
[bug] resume from ckpt skips samples if .map is applied
|
open
| 2025-06-21T01:50:03
| 2025-06-29T07:51:32
| null |
https://github.com/huggingface/datasets/issues/7630
| null |
felipemello1
| false
|
[
"Thanks for reporting this — it looks like a separate but related bug to #7538, which involved sample loss when resuming an `IterableDataset` wrapped in `FormattedExamplesIterable`. That was resolved in #7553 by re-batching the iterable to track offset correctly.\n\nIn this case, the issue seems to arise specifically from applying `.map()` before sharding and checkpointing. That wraps the iterable in `MappedExamplesIterable`, which may not preserve or propagate `shard_example_idx` correctly across `.state_dict()` and `.load_state_dict()` calls.\n\nYou can see that without `.map()`, resume works fine — but with `.map()`, it jumps from sample 9 to 50, skipping the rest of the shard.\n\nI'll dig deeper into how `MappedExamplesIterable` manages offsets and whether it supports proper checkpoint resumption. If not, we might need a fix similar to the one in #7553, or a wrapper to preserve resume metadata.\n\nHappy to help fix it!\n",
"Let me know if a dedicated test case is required — happy to add one!"
] |
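A minimal reproduction sketch of the reported resume behavior, using the `state_dict()`/`load_state_dict()` checkpointing API on `IterableDataset`; the data and shard counts are illustrative, not taken from the report:

```python
from datasets import Dataset

ds = Dataset.from_dict({"x": list(range(100))}).to_iterable_dataset(num_shards=4)
ds = ds.map(lambda ex: {"x": ex["x"] * 2})  # the reported skip appears only with .map()

state = None
for idx, example in enumerate(ds):
    if idx == 9:
        state = ds.state_dict()  # checkpoint after 10 samples
        break

ds.load_state_dict(state)
print(next(iter(ds)))  # per the report, may jump ahead in the shard instead of resuming at sample 10
```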
3,161,169,782
| 7,629
|
Add test for `as_iterable_dataset()` method in DatasetBuilder
|
open
| 2025-06-19T19:23:55
| 2025-06-19T19:23:55
| null |
https://github.com/huggingface/datasets/pull/7629
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/7629",
"html_url": "https://github.com/huggingface/datasets/pull/7629",
"diff_url": "https://github.com/huggingface/datasets/pull/7629.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/7629.patch",
"merged_at": null
}
|
ArjunJagdale
| true
|
[] |
3,161,156,461
| 7,628
|
Add `as_iterable_dataset()` method to DatasetBuilder for streaming from cached Arrow files
|
open
| 2025-06-19T19:15:41
| 2025-06-19T19:15:41
| null |
https://github.com/huggingface/datasets/pull/7628
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/7628",
"html_url": "https://github.com/huggingface/datasets/pull/7628",
"diff_url": "https://github.com/huggingface/datasets/pull/7628.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/7628.patch",
"merged_at": null
}
|
ArjunJagdale
| true
|
[] |
3,160,544,390
| 7,627
|
Creating a HF Dataset from lakeFS with S3 storage takes too much time!
|
closed
| 2025-06-19T14:28:41
| 2025-06-23T12:39:10
| 2025-06-23T12:39:10
|
https://github.com/huggingface/datasets/issues/7627
| null |
Thunderhead-exe
| false
|
[
"### > Update\n\nThe bottleneck, from what I understand, was making one network request per file\n\nFor 30k images, this meant 30k separate GET requests to the MinIO server through the S3 API, and that was killing the performance\n\nUsing webDataset to transform the large number of files to few .tar files and passing “webdataset” instead of “imagefolder” to the load_dataset function worked perfectly (took only ~11s)"
] |
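The workaround described in the update above, sketched end to end; `iter_my_images` is a hypothetical helper yielding `(key, bytes)` pairs from the source bucket:

```python
import os
import webdataset as wds  # pip install webdataset
from datasets import load_dataset

os.makedirs("shards", exist_ok=True)

# 1) Pack the many small image files into a few .tar shards,
#    so loading issues one GET per shard instead of one per file.
with wds.ShardWriter("shards/images-%05d.tar", maxcount=10_000) as sink:
    for key, jpg_bytes in iter_my_images():  # hypothetical source iterator
        sink.write({"__key__": key, "jpg": jpg_bytes})

# 2) Load via the webdataset builder instead of imagefolder.
ds = load_dataset("webdataset", data_files={"train": "shards/images-*.tar"}, split="train")
```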
3,159,322,138
| 7,626
|
feat(map): reuse unchanged columns when input_columns specified to reduce disk usage (#6013)
|
closed
| 2025-06-19T07:41:45
| 2025-07-28T17:39:12
| 2025-07-28T17:39:12
|
https://github.com/huggingface/datasets/pull/7626
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/7626",
"html_url": "https://github.com/huggingface/datasets/pull/7626",
"diff_url": "https://github.com/huggingface/datasets/pull/7626.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/7626.patch",
"merged_at": null
}
|
ArjunJagdale
| true
|
[] |
3,159,016,001
| 7,625
|
feat: Add h5folder dataset loader for HDF5 support
|
open
| 2025-06-19T05:39:10
| 2025-06-26T05:44:26
| null |
https://github.com/huggingface/datasets/pull/7625
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/7625",
"html_url": "https://github.com/huggingface/datasets/pull/7625",
"diff_url": "https://github.com/huggingface/datasets/pull/7625.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/7625.patch",
"merged_at": null
}
|
ArjunJagdale
| true
|
[
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_7625). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.",
"I guess test failed cause import os, import h5py, and import datasets lines are not alphabetically sorted, or not grouped properly.\r\n\r\n\r\n",
"This commit was accidental - `[Merge branch 'main' into patch-4]`. The \r\n`[chore: fix import order in h5folder.py to satisfy linter]` should solve the import order issue. \r\n\r\n\r\n"
] |
3,156,136,624
| 7,624
|
#Dataset Make "image" column appear first in dataset preview UI
|
closed
| 2025-06-18T09:25:19
| 2025-06-20T07:46:43
| 2025-06-20T07:46:43
|
https://github.com/huggingface/datasets/issues/7624
| null |
jcerveto
| false
|
[
"Hi ! It should follow the same order as the order of the keys in the metadata file",
"Hi! Thank you for your answer. \n\nAs you said it, I I forced every key in every JSON to have an order using `collections. OrderedDict` in Python. Now, it works!\n\nTY"
] |
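The fix described in the last comment, as a small sketch (field names are illustrative; note that plain dicts also preserve insertion order in Python 3.7+, so `OrderedDict` mainly makes the intent explicit):

```python
# Write metadata.jsonl with a fixed key order so the image column
# comes first in the dataset preview.
import json
from collections import OrderedDict

rows = [{"caption": "a cat", "file_name": "0001.png"}]  # illustrative records

with open("metadata.jsonl", "w") as f:
    for row in rows:
        ordered = OrderedDict([("file_name", row["file_name"]), ("caption", row["caption"])])
        f.write(json.dumps(ordered) + "\n")
```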
3,154,519,684
| 7,623
|
fix: raise error in FolderBasedBuilder when data_dir and data_files are missing
|
closed
| 2025-06-17T19:16:34
| 2025-06-18T14:18:41
| 2025-06-18T14:18:41
|
https://github.com/huggingface/datasets/pull/7623
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/7623",
"html_url": "https://github.com/huggingface/datasets/pull/7623",
"diff_url": "https://github.com/huggingface/datasets/pull/7623.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/7623.patch",
"merged_at": "2025-06-18T14:18:41"
}
|
ArjunJagdale
| true
|
[
"@lhoestq Moved the logic to FolderBasedBuilder._info() as discussed in previous PR (#7618). Let me know if anything else is needed — happy to update!",
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_7623). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update."
] |
3,154,398,557
| 7,622
|
Guard against duplicate builder_kwargs/config_kwargs in load_dataset_…
|
closed
| 2025-06-17T18:28:35
| 2025-07-23T14:06:20
| 2025-07-23T14:06:20
|
https://github.com/huggingface/datasets/pull/7622
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/7622",
"html_url": "https://github.com/huggingface/datasets/pull/7622",
"diff_url": "https://github.com/huggingface/datasets/pull/7622.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/7622.patch",
"merged_at": null
}
|
Shohail-Ismail
| true
|
[
"Hi folks, this PR fixes the duplicate-kwargs edge case and includes a unit test. Would love a review when you have a moment!\r\n\r\n@zach-huggingface\r\n@SunMarc "
] |
3,153,780,963
| 7,621
|
minor docs data aug
|
closed
| 2025-06-17T14:46:57
| 2025-06-17T14:50:28
| 2025-06-17T14:47:11
|
https://github.com/huggingface/datasets/pull/7621
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/7621",
"html_url": "https://github.com/huggingface/datasets/pull/7621",
"diff_url": "https://github.com/huggingface/datasets/pull/7621.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/7621.patch",
"merged_at": "2025-06-17T14:47:11"
}
|
lhoestq
| true
|
[
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_7621). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update."
] |
3,153,565,183
| 7,620
|
Fixes in docs
|
closed
| 2025-06-17T13:41:54
| 2025-06-17T13:58:26
| 2025-06-17T13:58:24
|
https://github.com/huggingface/datasets/pull/7620
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/7620",
"html_url": "https://github.com/huggingface/datasets/pull/7620",
"diff_url": "https://github.com/huggingface/datasets/pull/7620.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/7620.patch",
"merged_at": "2025-06-17T13:58:24"
}
|
lhoestq
| true
|
[
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_7620). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update."
] |
3,153,058,517
| 7,619
|
`from_list` fails while `from_generator` works for large datasets
|
open
| 2025-06-17T10:58:55
| 2025-06-29T16:34:44
| null |
https://github.com/huggingface/datasets/issues/7619
| null |
abdulfatir
| false
|
[
"@lhoestq any thoughts on this? ",
"Thanks for the report! This behavior is expected due to how `from_list()` and `from_generator()` differ internally.\n\n- `from_list()` builds the entire dataset in memory at once, which can easily exceed limits (especially with variable-length arrays or millions of rows). The Arrow error you're seeing (`Value too large to fit in C integer type`) is related to that memory overload.\n- `from_generator()` avoids this issue by batching and streaming data incrementally, which is much more memory-efficient.\n\nSo for large datasets like time series or NLP data with large arrays, `from_generator()` (or `datasets.IterableDataset`) is the recommended approach.\n\nHope this helps clarify the behavior — let me know if you'd like me to point to prior issues/discussions where similar tradeoffs came up!\n",
"@ArjunJagdale Yes, it is related to using large dataset but not in the way that you have described. As I understand, the problem here is that `datasets` does not use `LargeList` with 64-bit offsets from PyArrow when using `from_list`. However, with `from_generator` this seems to work okay, likely due to batching. As such, this is more like a bug than an expected outcome. If this is indeed \"expected\", `datasets` should fail more gracefully in these cases with a recommendation to use `from_generator`. ",
"Thanks for the clarification — you're absolutely right, this seems tied to the use of 32-bit list offsets in from_list() under the hood. That distinction between List and LargeList in PyArrow is a crucial one, and definitely worth highlighting in the docs or error message. Happy to help if a check or fallback to LargeList makes sense here."
] |
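The two code paths contrasted in this thread, as a sketch; the array sizes here are illustrative and far below the scale where the 32-bit offset overflow actually triggers:

```python
import numpy as np
from datasets import Dataset

records = [{"target": np.zeros(1_000, dtype=np.float32)} for _ in range(1_000)]

# Materializes one in-memory Arrow table; with large variable-length lists the
# 32-bit list offsets can overflow ("Value too large to fit in C integer type").
ds = Dataset.from_list(records)

# Writes examples in batches instead, sidestepping the single oversized array.
def gen():
    yield from records

ds = Dataset.from_generator(gen)
```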
3,148,912,897
| 7,618
|
fix: raise error when folder-based datasets are loaded without data_dir or data_files
|
open
| 2025-06-16T07:43:59
| 2025-06-16T12:13:26
| null |
https://github.com/huggingface/datasets/pull/7618
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/7618",
"html_url": "https://github.com/huggingface/datasets/pull/7618",
"diff_url": "https://github.com/huggingface/datasets/pull/7618.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/7618.patch",
"merged_at": null
}
|
ArjunJagdale
| true
|
[
"Great ! Since this logic is specific to one builder class maybe this check can be in the class definition ? I think you can put it in FolderBasedBuilder's `_info()` method."
] |
3,148,102,085
| 7,617
|
Unwanted column padding in nested lists of dicts
|
closed
| 2025-06-15T22:06:17
| 2025-06-16T13:43:31
| 2025-06-16T13:43:31
|
https://github.com/huggingface/datasets/issues/7617
| null |
qgallouedec
| false
|
[
"Answer from @lhoestq:\n\n> No\n> This is because Arrow and Parquet a columnar format: they require a fixed type for each column. So if you have nested dicts, each item should have the same subfields\n\nThe way around I found is the handle it after sampling with this function:\n\n```python\ndef remove_padding(example):\n if isinstance(example, list):\n return [remove_padding(value) if isinstance(value, (dict, list)) else value for value in example]\n elif isinstance(example, Mapping):\n return {\n key: remove_padding(value) if isinstance(value, (dict, list)) else value\n for key, value in example.items()\n if value is not None\n }\n else:\n raise TypeError(\"Input must be a list or a dictionary.\")\n\n# Example:\nexample = next(iter(dataset))\nexample = remove_padding(example)\n```"
] |
3,144,506,665
| 7,616
|
Torchcodec decoding
|
closed
| 2025-06-13T19:06:07
| 2025-06-19T18:25:49
| 2025-06-19T18:25:49
|
https://github.com/huggingface/datasets/pull/7616
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/7616",
"html_url": "https://github.com/huggingface/datasets/pull/7616",
"diff_url": "https://github.com/huggingface/datasets/pull/7616.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/7616.patch",
"merged_at": "2025-06-19T18:25:48"
}
|
TyTodd
| true
|
[
"@lhoestq any updates on when this will be merged? Let me know if theres anything you need from my end.",
"Btw I plan to release `datasets` 4.0 after your PR, this will be a major milestone :)",
"@lhoestq just pushed the new changes.",
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_7616). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.",
"Great ! I took the liberty to move the AudioDecoder to its own file and make small edits in the docs and docstrings\r\n\r\nIf it looks good to you I think we can merge :)"
] |
3,143,443,498
| 7,615
|
remove unused code
|
closed
| 2025-06-13T12:37:30
| 2025-06-13T12:39:59
| 2025-06-13T12:37:40
|
https://github.com/huggingface/datasets/pull/7615
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/7615",
"html_url": "https://github.com/huggingface/datasets/pull/7615",
"diff_url": "https://github.com/huggingface/datasets/pull/7615.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/7615.patch",
"merged_at": "2025-06-13T12:37:40"
}
|
lhoestq
| true
|
[
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_7615). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update."
] |
3,143,381,638
| 7,614
|
Lazy column
|
closed
| 2025-06-13T12:12:57
| 2025-06-17T13:08:51
| 2025-06-17T13:08:49
|
https://github.com/huggingface/datasets/pull/7614
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/7614",
"html_url": "https://github.com/huggingface/datasets/pull/7614",
"diff_url": "https://github.com/huggingface/datasets/pull/7614.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/7614.patch",
"merged_at": "2025-06-17T13:08:49"
}
|
lhoestq
| true
|
[
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_7614). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update."
] |
3,142,819,991
| 7,613
|
fix parallel push_to_hub in dataset_dict
|
closed
| 2025-06-13T09:02:24
| 2025-06-13T12:30:23
| 2025-06-13T12:30:22
|
https://github.com/huggingface/datasets/pull/7613
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/7613",
"html_url": "https://github.com/huggingface/datasets/pull/7613",
"diff_url": "https://github.com/huggingface/datasets/pull/7613.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/7613.patch",
"merged_at": "2025-06-13T12:30:22"
}
|
lhoestq
| true
|
[
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_7613). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update."
] |
3,141,905,049
| 7,612
|
Provide an option of robust dataset iterator with error handling
|
open
| 2025-06-13T00:40:48
| 2025-06-24T16:52:30
| null |
https://github.com/huggingface/datasets/issues/7612
| null |
wwwjn
| false
|
[
"Hi ! Maybe we can add a parameter to the Image() type to make it to return `None` instead of raising an error in case of corruption ? Would that help ?",
"Hi! 👋🏼 I just opened PR [#7638](https://github.com/huggingface/datasets/pull/7638) to address this issue.\n\n### 🔧 What it does:\nIt adds an `ignore_decode_errors` flag to the `Image` feature. When set to `True`, corrupted image samples will be skipped (with a warning), and `None` will be returned instead of raising an exception.\n\nThis allows users to stream datasets that may contain some invalid images without breaking the iteration loop:\n\n```python\nfeatures = Features({\n \"image\": Image(decode=True, ignore_decode_errors=True)\n})\n````\n\n### 🧩 Why this helps:\n\n* Prevents full iteration breakdown during `.streaming=True` usage\n* Enables downstream tooling like Flux (see [[Flux#1290](https://github.com/pytorch/torchtitan/pull/1290)](https://github.com/pytorch/torchtitan/pull/1290)) to implement robust loaders now that `datasets` supports graceful handling\n* Keeps current behavior unchanged unless explicitly opted-in\n\nLet me know if you'd like me to follow up with test coverage or additional enhancements!\n\ncc @lhoestq "
] |
3,141,383,940
| 7,611
|
Code example for dataset.add_column() does not reflect correct way to use function
|
closed
| 2025-06-12T19:42:29
| 2025-07-17T13:14:18
| 2025-07-17T13:14:18
|
https://github.com/huggingface/datasets/issues/7611
| null |
shaily99
| false
|
[
"Hi @shaily99 \n\nThanks for pointing this out — you're absolutely right!\n\nThe current example in the docstring for add_column() implies in-place modification, which is misleading since add_column() actually returns a new dataset.",
"#self-assign\n"
] |
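The usage pattern the issue asks the docs to reflect:

```python
from datasets import Dataset

ds = Dataset.from_dict({"text": ["a", "b"]})

ds.add_column("label", [0, 1])       # misleading: the returned dataset is discarded
ds = ds.add_column("label", [0, 1])  # correct: add_column returns a new dataset
print(ds.column_names)               # ['text', 'label']
```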
3,141,281,560
| 7,610
|
i cant confirm email
|
open
| 2025-06-12T18:58:49
| 2025-06-27T14:36:47
| null |
https://github.com/huggingface/datasets/issues/7610
| null |
lykamspam
| false
|
[
"Will you please clarify the issue by some screenshots or more in-depth explanation?",
"\nThis is clarify answer. I have not received a letter.\n\n**The graphic at the top shows how I don't get any letter. Can you show in a clear way how you don't get a letter from me?**"
] |
3,140,373,128
| 7,609
|
Update `_dill.py` to use `co_linetable` for Python 3.10+ in place of `co_lnotab`
|
closed
| 2025-06-12T13:47:01
| 2025-06-16T12:14:10
| 2025-06-16T12:14:08
|
https://github.com/huggingface/datasets/pull/7609
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/7609",
"html_url": "https://github.com/huggingface/datasets/pull/7609",
"diff_url": "https://github.com/huggingface/datasets/pull/7609.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/7609.patch",
"merged_at": "2025-06-16T12:14:08"
}
|
qgallouedec
| true
|
[
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_7609). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.",
"not 100% sure either, I tried removing unnecessary checks - let me know if they sound good to you otherwise I'll revert",
"I can't reproduce the warning anymore... 🤦🏻♂️\r\n",
"Ah now I can reproduce!, and I can confirm that the warning is gone when you apply the change in this PR"
] |
3,137,564,259
| 7,608
|
Tests typing and fixes for push_to_hub
|
closed
| 2025-06-11T17:13:52
| 2025-06-12T21:15:23
| 2025-06-12T21:15:21
|
https://github.com/huggingface/datasets/pull/7608
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/7608",
"html_url": "https://github.com/huggingface/datasets/pull/7608",
"diff_url": "https://github.com/huggingface/datasets/pull/7608.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/7608.patch",
"merged_at": "2025-06-12T21:15:21"
}
|
lhoestq
| true
|
[
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_7608). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update."
] |
3,135,722,560
| 7,607
|
Video and audio decoding with torchcodec
|
closed
| 2025-06-11T07:02:30
| 2025-06-19T18:25:49
| 2025-06-19T18:25:49
|
https://github.com/huggingface/datasets/issues/7607
| null |
TyTodd
| false
|
[
"Good idea ! let me know if you have any question or if I can help",
"@lhoestq Almost finished, but I'm having trouble understanding this test case.\nThis is how it looks originally. The `map` function is called, and then `with_format` is called. According to the test case example[\"video\"] is supposed to be a VideoReader. However, according to the [docs](https://huggingface.co/docs/datasets/package_reference/main_classes#datasets.Dataset.with_format) its supposed to be the type passed into `with_format` (numpy in this case). My implementation with VideoDecoder currently does the latter, is that correct, or should it be a VideoDecoder object instead?\n```\n@require_torchvision\ndef test_dataset_with_video_map_and_formatted(shared_datadir):\n from torchvision.io import VideoReader\n\n video_path = str(shared_datadir / \"test_video_66x50.mov\")\n data = {\"video\": [video_path]}\n features = Features({\"video\": Video()})\n dset = Dataset.from_dict(data, features=features)\n dset = dset.map(lambda x: x).with_format(\"numpy\")\n example = dset[0]\n assert isinstance(example[\"video\"], VideoReader)\n # assert isinstance(example[\"video\"][0], np.ndarray)\n\n # from bytes\n with open(video_path, \"rb\") as f:\n data = {\"video\": [f.read()]}\n dset = Dataset.from_dict(data, features=features)\n dset = dset.map(lambda x: x).with_format(\"numpy\")\n example = dset[0]\n assert isinstance(example[\"video\"], VideoReader)\n # assert isinstance(example[\"video\"][0], np.ndarray)\n\n```",
"Hi ! It's maybe more convenient for users to always have a VideoDecoder, since they might only access a few frames and not the full video. So IMO it's fine to always return a VideoDecoder (maybe later we can extend the VideoDecoder to return other types of tensors than numpy arrays though ? 👀 it's not crucial for now though)",
"@lhoestq ya that makes sense, looks like this functionality lives in `src/datasets/formatting`, where an exception is made for VideoReader objects to remain as themselves when being formatted. I'll make the necessary changes. ",
"@lhoestq I'm assuming this was also the case for torchaudio objects?",
"We're not using torchaudio but soundfile. But anyway we unfortunately decode full audio files instead of returning a Reader and it can be interesting to fix this. Currently it always returns a dict {\"array\": np.array(...), \"sampling_rate\": int(...)}, while it would be cool to return a reader with seek() and read() - like methods as for videos.\n\n(there is a way to make the audio change backward compatible anyway by allowing `reader[\"array\"]` to return the full array)",
"@lhoestq (sorry for the spam btw)\nLooks like there's a # TODO to have these returned as np.arrays instead. I'm curious why the authors didn't do it initially. Maybe a performance thing?\nThis is from `/src/datasets/formatting/np_formatter.py` line 70\n```\nif config.TORCHVISION_AVAILABLE and \"torchvision\" in sys.modules:\n from torchvision.io import VideoReader\n\n if isinstance(value, VideoReader):\n return value # TODO(QL): set output to np arrays ?\n```",
"Oh cool ya this is something that I could implement with torchcodec. I can add that to the PR as well.",
"> Looks like there's a # TODO to have these returned as np.arrays instead. I'm curious why the authors didn't do it initially. Maybe a performance thing?\n\nyea that was me, I focused on a simple logic to start with, since I knew there was torchcodec coming and maybe wasn't worth it at the time ^^\n\nbut anyway it's fine to start with a logic without formatting to start with and then iterate",
"Hey @lhoestq I ran into an error with this test case for the Audio feature\n\n```\n@require_sndfile\n@require_torchcodec\ndef test_dataset_with_audio_feature_map_is_decoded(shared_datadir):\n audio_path = str(shared_datadir / \"test_audio_44100.wav\")\n data = {\"audio\": [audio_path], \"text\": [\"Hello\"]}\n features = Features({\"audio\": Audio(), \"text\": Value(\"string\")})\n dset = Dataset.from_dict(data, features=features)\n\n def process_audio_sampling_rate_by_example(example):\n sample_rate = example[\"audio\"].get_all_samples().sample_rate\n example[\"double_sampling_rate\"] = 2 * sample_rate\n return example\n\n decoded_dset = dset.map(process_audio_sampling_rate_by_example)\n for item in decoded_dset.cast_column(\"audio\", Audio(decode=False)):\n assert item.keys() == {\"audio\", \"text\", \"double_sampling_rate\"}\n assert item[\"double_sampling_rate\"] == 88200\n\n def process_audio_sampling_rate_by_batch(batch):\n double_sampling_rates = []\n for audio in batch[\"audio\"]:\n double_sampling_rates.append(2 * audio.get_all_samples().sample_rate)\n batch[\"double_sampling_rate\"] = double_sampling_rates\n return batch\n\n decoded_dset = dset.map(process_audio_sampling_rate_by_batch, batched=True)\n for item in decoded_dset.cast_column(\"audio\", Audio(decode=False)):\n assert item.keys() == {\"audio\", \"text\", \"double_sampling_rate\"}\n assert item[\"double_sampling_rate\"] == 88200\n```\n\nthis is the error below\n```\nsrc/datasets/arrow_writer.py:626: in write_batch\n arrays.append(pa.array(typed_sequence))\n.....\nFAILED tests/features/test_audio.py::test_dataset_with_audio_feature_map_is_decoded - pyarrow.lib.ArrowInvalid: Could not convert <torchcodec.decoders._audio_decoder.AudioDecoder object at 0x138cdd810> with type AudioDecoder: did not recognize Python value type when inferring an Arrow data type\n```\n\nBy the way I copied the test case and ran it on the original implementation of the Video feature, which uses the torchvision backend and I got a similar error.\n```\ndef test_dataset_with_video_feature_map_is_decoded(shared_datadir):\n video_path = str(shared_datadir / \"test_video_66x50.mov\")\n data = {\"video\": [video_path], \"text\": [\"Hello\"]}\n features = Features({\"video\": Video(), \"text\": Value(\"string\")})\n dset = Dataset.from_dict(data, features=features)\n\n def process_audio_sampling_rate_by_example(example):\n metadata = example[\"video\"].get_metadata()\n example[\"double_fps\"] = 2 * metadata[\"video\"][\"fps\"][0]\n return example\n\n decoded_dset = dset.map(process_audio_sampling_rate_by_example)\n for item in decoded_dset.cast_column(\"video\", Video(decode=False)):\n assert item.keys() == {\"video\", \"text\", \"double_fps\"}\n assert item[\"double_fps\"] == 2 * 10 # prollly wont work past 2*10 is made up!! shouldn't pass\n\n def process_audio_sampling_rate_by_batch(batch):\n double_fps = []\n for video in batch[\"video\"]:\n double_fps.append(2 * video.metadata.begin_stream_seconds)\n batch[\"double_fps\"] = double_fps\n return batch\n\n decoded_dset = dset.map(process_audio_sampling_rate_by_batch, batched=True)\n for item in decoded_dset.cast_column(\"video\", Video(decode=False)):\n assert item.keys() == {\"video\", \"text\", \"double_fps\"}\n assert item[\"double_fps\"] == 2 * 10 # prollly wont work past this no reason it should\n```\n\nI was wondering if these error's are expected. 
They seem to be coming from the fact that the function `_cast_to_python_objects` in `src/datasets/features/features.py` doesn't handle VideoDecoders or AudioDecoders. I was able to fix it and get rid of the error by adding this to the bottom of the function\n```\n elif config.TORCHCODEC_AVAILABLE and \"torchcodec\" in sys.modules and isinstance(obj, VideoDecoder):\n v = Video()\n return v.encode_example(obj), True\n elif config.TORCHCODEC_AVAILABLE and \"torchcodec\" in sys.modules and isinstance(obj, AudioDecoder):\n a = Audio()\n return a.encode_example(obj), True\n```\nThis fixed it, but I just want to make sure I'm not adding things that are messing up the intended functionality.",
"This is the right fix ! :)",
"Btw I just remembered that we were using soundfile because it can support a wide range of audio formats, is it also the case for torchcodec ? including ogg, opus for example",
"Yes from what I understand torchcodec supports everything ffmpeg supports.",
"Okay just finished. However, I wasn't able to pass this test case:\n```python\n@require_torchcodec\n@require_sndfile\n@pytest.mark.parametrize(\"streaming\", [False, True])\ndef test_load_dataset_with_audio_feature(streaming, jsonl_audio_dataset_path, shared_datadir):\n from torchcodec.decoders import AudioDecoder\n audio_path = str(shared_datadir / \"test_audio_44100.wav\")\n data_files = jsonl_audio_dataset_path\n features = Features({\"audio\": Audio(), \"text\": Value(\"string\")})\n dset = load_dataset(\"json\", split=\"train\", data_files=data_files, features=features, streaming=streaming)\n item = dset[0] if not streaming else next(iter(dset))\n assert item.keys() == {\"audio\", \"text\"}\n assert isinstance(item[\"audio\"], AudioDecoder)\n samples = item[\"audio\"].get_all_samples()\n assert samples.sample_rate == 44100\n assert samples.data.shape == (1, 202311)\n```\n\nIt returned this error\n```\nstreaming = False, jsonl_audio_dataset_path = '/private/var/folders/47/c7dlgs_n6lx8rtr8f5w5m1m00000gn/T/pytest-of-tytodd/pytest-103/data2/audio_dataset.jsonl'\nshared_datadir = PosixPath('/private/var/folders/47/c7dlgs_n6lx8rtr8f5w5m1m00000gn/T/pytest-of-tytodd/pytest-103/test_load_dataset_with_audio_f0/data')\n\n @require_torchcodec\n @require_sndfile\n @pytest.mark.parametrize(\"streaming\", [False, True])\n def test_load_dataset_with_audio_feature(streaming, jsonl_audio_dataset_path, shared_datadir):\n from torchcodec.decoders import AudioDecoder\n audio_path = str(shared_datadir / \"test_audio_44100.wav\")\n data_files = jsonl_audio_dataset_path\n features = Features({\"audio\": Audio(), \"text\": Value(\"string\")})\n> dset = load_dataset(\"json\", split=\"train\", data_files=data_files, features=features, streaming=streaming)\n\ntests/features/test_audio.py:686: \n_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _\nsrc/datasets/load.py:1418: in load_dataset\n builder_instance.download_and_prepare(\nsrc/datasets/builder.py:925: in download_and_prepare\n self._download_and_prepare(\nsrc/datasets/builder.py:1019: in _download_and_prepare\n verify_splits(self.info.splits, split_dict)\n_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _\n\nexpected_splits = {'train': SplitInfo(name='train', num_bytes=2351563, num_examples=10000, shard_lengths=None, dataset_name=None), 'validation': SplitInfo(name='validation', num_bytes=238418, num_examples=1000, shard_lengths=None, dataset_name=None)}\nrecorded_splits = {'train': SplitInfo(name='train', num_bytes=167, num_examples=1, shard_lengths=None, dataset_name='json')}\n\n def verify_splits(expected_splits: Optional[dict], recorded_splits: dict):\n if expected_splits is None:\n logger.info(\"Unable to verify splits sizes.\")\n return\n if len(set(expected_splits) - set(recorded_splits)) > 0:\n> raise ExpectedMoreSplitsError(str(set(expected_splits) - set(recorded_splits)))\nE datasets.exceptions.ExpectedMoreSplitsError: {'validation'}\n\nsrc/datasets/utils/info_utils.py:68: ExpectedMoreSplitsError\n```\n\nIt looks like this test case wasn't passing when I forked the repo, so I assume I didn't do anything to break it. I also added this case to `test_video.py`, and it fails there as well. If this looks good, I'll go ahead and submit the PR.",
"Awesome ! yes feel free to submit the PR, I can see what I can do for the remaining tests",
"@lhoestq just submitted it #7616 "
] |
3,133,848,546
| 7,606
|
Add `num_proc=` to `.push_to_hub()` (Dataset and IterableDataset)
|
closed
| 2025-06-10T14:35:10
| 2025-06-11T16:47:28
| 2025-06-11T16:47:25
|
https://github.com/huggingface/datasets/pull/7606
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/7606",
"html_url": "https://github.com/huggingface/datasets/pull/7606",
"diff_url": "https://github.com/huggingface/datasets/pull/7606.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/7606.patch",
"merged_at": "2025-06-11T16:47:25"
}
|
lhoestq
| true
|
[
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_7606). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update."
] |
3,131,636,882
| 7,605
|
Make `push_to_hub` atomic (#7600)
|
closed
| 2025-06-09T22:29:38
| 2025-06-23T19:32:08
| 2025-06-23T19:32:08
|
https://github.com/huggingface/datasets/pull/7605
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/7605",
"html_url": "https://github.com/huggingface/datasets/pull/7605",
"diff_url": "https://github.com/huggingface/datasets/pull/7605.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/7605.patch",
"merged_at": null
}
|
sharvil
| true
|
[
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_7605). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.",
"Hi ! unfortunately we can't allow atomic commits for commits with hundreds of files additions (HF would time out)\r\n\r\nMaybe an alternative would be to retry if there was a commit in between ? this could be the default behavior as well",
"Thanks for taking a look – much appreciated!\r\n\r\nI've verified that commits with up to 20,000 files don't time out and the commit time scales linearly with the number of operations enqueued. It took just under 2 minutes to complete (successfully) the 20k file commit.\r\n\r\nThe fundamental issue I'm trying to tackle here is dataset corruption: getting into a state where a dataset on the hub cannot be used when downloaded. Non-atomic commits won't get us there, I think. If, for example, 3 of 5 commits complete and the machine/process calling `push_to_hub` has a network, hardware, or other failure that prevents it from completing the rest of the commits (even with retries) we'll now have some pointer files pointing to the new data and others pointing to the old data => corrupted. While this may seem like an unlikely scenario, it's a regular occurrence at scale.\r\n\r\nIf you still feel strongly that atomic commits are not the right way to go, I can either set it to not be the default or remove it entirely from this PR.\r\n\r\nAs for retries, it's a good idea. In a non-atomic world, the logic gets more complicated:\r\n- keep an explicit queue of pending add/delete operations\r\n- chunkwise pop from queue and commit with `parent_commit` set to previous chunked commit hash\r\n- if `create_commit` fails:\r\n - re-fetch README and set `parent_commit` to latest hash for `revision`\r\n - re-generate dataset card content\r\n - swap old `CommitOperationAdd` with new one for README in the pending queue\r\n- resume chunkwise committing from the queue as above\r\n\r\nEntirely doable, but more involved than I signed up for with this PR.",
"Just to clarify – setting the `parent_commit` can be separated from making the commit atomic (which is what I'm suggesting by either atomic commits not the default or removing it from this PR). It's crucial to set the parent commit to avoid the read-modify-write race condition on the README schema."
] |
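The read-modify-write protection mentioned in the last comment can be sketched with `huggingface_hub` directly; the repo id and file content are illustrative:

```python
# Pin a commit to an explicit parent so a concurrent writer makes the commit
# fail (and can be retried) instead of silently overwriting the README schema.
from huggingface_hub import HfApi, CommitOperationAdd

api = HfApi()
repo_id = "user/my-dataset"  # illustrative

parent = api.repo_info(repo_id, repo_type="dataset").sha  # current head commit
api.create_commit(
    repo_id=repo_id,
    repo_type="dataset",
    operations=[CommitOperationAdd(path_in_repo="README.md", path_or_fileobj=b"...")],
    commit_message="update dataset card",
    parent_commit=parent,  # rejected if the branch moved in the meantime
)
```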
3,130,837,169
| 7,604
|
Docs and more methods for IterableDataset: push_to_hub, to_parquet...
|
closed
| 2025-06-09T16:44:40
| 2025-06-10T13:15:23
| 2025-06-10T13:15:21
|
https://github.com/huggingface/datasets/pull/7604
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/7604",
"html_url": "https://github.com/huggingface/datasets/pull/7604",
"diff_url": "https://github.com/huggingface/datasets/pull/7604.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/7604.patch",
"merged_at": "2025-06-10T13:15:21"
}
|
lhoestq
| true
|
[
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_7604). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update."
] |
3,130,394,563
| 7,603
|
No TF in win tests
|
closed
| 2025-06-09T13:56:34
| 2025-06-09T15:33:31
| 2025-06-09T15:33:30
|
https://github.com/huggingface/datasets/pull/7603
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/7603",
"html_url": "https://github.com/huggingface/datasets/pull/7603",
"diff_url": "https://github.com/huggingface/datasets/pull/7603.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/7603.patch",
"merged_at": "2025-06-09T15:33:30"
}
|
lhoestq
| true
|
[
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_7603). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update."
] |
3,128,758,924
| 7,602
|
Enhance error handling and input validation across multiple modules
|
open
| 2025-06-08T23:01:06
| 2025-06-08T23:01:06
| null |
https://github.com/huggingface/datasets/pull/7602
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/7602",
"html_url": "https://github.com/huggingface/datasets/pull/7602",
"diff_url": "https://github.com/huggingface/datasets/pull/7602.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/7602.patch",
"merged_at": null
}
|
mohiuddin-khan-shiam
| true
|
[] |
3,127,296,182
| 7,600
|
`push_to_hub` is not concurrency safe (dataset schema corruption)
|
closed
| 2025-06-07T17:28:56
| 2025-07-31T10:00:50
| 2025-07-31T10:00:50
|
https://github.com/huggingface/datasets/issues/7600
| null |
sharvil
| false
|
[
"@lhoestq can you please take a look? I've submitted a PR that fixes this issue. Thanks.",
"Thanks for the ping ! As I said in https://github.com/huggingface/datasets/pull/7605 there is maybe a more general approach using retries :)",
"Dropping this due to inactivity; we've implemented push_to_hub outside of HF datasets that's concurrency safe. Feel free to use the code I provided as a starting point if there's still interest in addressing this issue.",
"Exploring another fix here: https://github.com/huggingface/datasets/issues/7600"
] |
3,125,620,119
| 7,599
|
My already working dataset (when uploaded few months ago) now is ignoring metadata.jsonl
|
closed
| 2025-06-06T18:59:00
| 2025-06-16T15:18:00
| 2025-06-16T15:18:00
|
https://github.com/huggingface/datasets/issues/7599
| null |
JuanCarlosMartinezSevilla
| false
|
[
"Maybe its been a recent update, but i can manage to load the metadata.jsonl separately from the images with:\n\n```\nmetadata = load_dataset(\"PRAIG/SMB\", split=\"train\", data_files=[\"*.jsonl\"])\nimages = load_dataset(\"PRAIG/SMB\", split=\"train\")\n```\nDo you know it this is an expected behaviour? This makes my dataset viewer to only load the images without the labeling of metadata.jsonl.\n\nThanks",
"Hi ! this is because we now expect the metadata file to be inside the directory named after the split \"train\" (this way each split can have its own metadata and can be loaded independently)\n\nYou can fix that by configuring it explicitly in the dataset's README.md header:\n\n```yaml\nconfigs:\n- config_name: default\n data_files:\n - split: train\n path:\n - \"train/**/*.png\"\n - \"metadata.jsonl\"\n```\n\n(or by moving the metadata.jsonl in train/ but in this case you also have to modify the content of the JSONL to fix the relative paths to the images)",
"Thank you very much, dataset viewer is already working as expected!!"
] |
3,125,184,457
| 7,598
|
fix string_to_dict usage for windows
|
closed
| 2025-06-06T15:54:29
| 2025-06-06T16:12:22
| 2025-06-06T16:12:21
|
https://github.com/huggingface/datasets/pull/7598
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/7598",
"html_url": "https://github.com/huggingface/datasets/pull/7598",
"diff_url": "https://github.com/huggingface/datasets/pull/7598.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/7598.patch",
"merged_at": "2025-06-06T16:12:21"
}
|
lhoestq
| true
|
[
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_7598). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update."
] |
3,123,962,709
| 7,597
|
Download datasets from a private hub in 2025
|
closed
| 2025-06-06T07:55:19
| 2025-06-13T13:46:00
| 2025-06-13T13:46:00
|
https://github.com/huggingface/datasets/issues/7597
| null |
DanielSchuhmacher
| false
|
[
"Hi ! First, and in the general case, Hugging Face does offer to host private datasets, and with a subscription you can even choose the region in which the repositories are hosted (US, EU)\n\nThen if you happen to have a private deployment, you can set the HF_ENDPOINT environment variable (same as in https://github.com/huggingface/transformers/issues/38634)",
"Thank you @lhoestq. Works as described!"
] |
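A minimal sketch of the `HF_ENDPOINT` approach mentioned in the thread above. The endpoint URL and repository id are placeholders, and the environment variable has to be set before the Hub libraries are imported:

```python
import os

# Point the Hub client at a private deployment *before* importing datasets,
# since the endpoint is read at import time. The URL below is a placeholder.
os.environ["HF_ENDPOINT"] = "https://hub.internal.example"

from datasets import load_dataset

# token=True reuses the token stored by `huggingface-cli login`.
ds = load_dataset("my-org/private-dataset", token=True)  # hypothetical repo id
```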
3,122,595,042
| 7,596
|
Add albumentations to use dataset
|
closed
| 2025-06-05T20:39:46
| 2025-06-17T18:38:08
| 2025-06-17T14:44:30
|
https://github.com/huggingface/datasets/pull/7596
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/7596",
"html_url": "https://github.com/huggingface/datasets/pull/7596",
"diff_url": "https://github.com/huggingface/datasets/pull/7596.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/7596.patch",
"merged_at": "2025-06-17T14:44:30"
}
|
ternaus
| true
|
[
"@lhoestq ping",
"@lhoestq ping",
"@lhoestq Thanks. Cleaned up torchvision."
] |
3,121,689,436
| 7,595
|
Add `IterableDataset.push_to_hub()`
|
closed
| 2025-06-05T15:29:32
| 2025-06-06T16:12:37
| 2025-06-06T16:12:36
|
https://github.com/huggingface/datasets/pull/7595
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/7595",
"html_url": "https://github.com/huggingface/datasets/pull/7595",
"diff_url": "https://github.com/huggingface/datasets/pull/7595.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/7595.patch",
"merged_at": "2025-06-06T16:12:36"
}
|
lhoestq
| true
|
[
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_7595). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update."
] |
3,120,799,626
| 7,594
|
Add option to ignore keys/columns when loading a dataset from jsonl(or any other data format)
|
open
| 2025-06-05T11:12:45
| 2025-06-28T09:03:00
| null |
https://github.com/huggingface/datasets/issues/7594
| null |
avishaiElmakies
| false
|
[
"Good point, I'd be in favor of having the `columns` argument in `JsonConfig` (and the others) to align with `ParquetConfig` to let users choose which columns to load and ignore the rest",
"Is it possible to ignore columns when using parquet? ",
"Yes, you can pass `columns=...` to load_dataset to select which columns to load, and it is passed to `ParquetConfig` :)",
"Ok, i didn't know that. \nAnyway, it would be good to add this to others",
"Hi @lhoestq \n\nI'd like to take this up!\n\nAs you suggested, I’ll extend the support for the columns parameter (currently used in ParquetConfig) to JsonConfig as well. This will allow users to selectively load specific keys/columns from .jsonl (or .json) files and ignore the rest — solving the type inconsistency issues in unclean datasets.",
"Hi @avishaiElmakies and @lhoestq \n\nJust wanted to let you know that this is now implemented in #7594\nAs suggested, support for the `columns=...` argument (previously available for Parquet) has now been extended to **JSON and JSONL** loading via `load_dataset(...)`. You can now load only specific keys/columns and skip the rest — which should help in cases where some fields are unclean, inconsistent, or just unnecessary.\n\n### ✅ Example:\n\n```python\nfrom datasets import load_dataset\n\ndataset = load_dataset(\"json\", data_files=\"your_data.jsonl\", columns=[\"id\", \"title\"])\nprint(dataset[\"train\"].column_names)\n# Output: ['id', 'title']\n```\n\n### 🔧 Summary of changes:\n\n* Added `columns: Optional[List[str]]` to `JsonConfig`\n* Updated `_generate_tables()` to filter selected columns\n* Forwarded `columns` argument from `load_dataset()` to the config\n* Added test case to validate behavior\n\nLet me know if you'd like the same to be added for CSV or others as a follow-up — happy to help.",
"@ArjunJagdale this looks great! Thanks!\nI believe that every format that is supported by `datasets` should probably have this feature since it is very useful and will streamline the api (people will know that they can just use `columns` to select the columns they want, and it will not be dependent on the data format) ",
"Thanks @avishaiElmakies — totally agree, making `columns=...` support consistent across all formats would be really helpful for users."
] |
3,118,812,368
| 7,593
|
Fix broken link to albumentations
|
closed
| 2025-06-04T19:00:13
| 2025-06-05T16:37:02
| 2025-06-05T16:36:32
|
https://github.com/huggingface/datasets/pull/7593
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/7593",
"html_url": "https://github.com/huggingface/datasets/pull/7593",
"diff_url": "https://github.com/huggingface/datasets/pull/7593.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/7593.patch",
"merged_at": "2025-06-05T16:36:32"
}
|
ternaus
| true
|
[
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_7593). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.",
"@lhoestq ping"
] |
3,118,203,880
| 7,592
|
Remove scripts altogether
|
closed
| 2025-06-04T15:14:11
| 2025-08-04T15:17:05
| 2025-06-09T16:45:27
|
https://github.com/huggingface/datasets/pull/7592
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/7592",
"html_url": "https://github.com/huggingface/datasets/pull/7592",
"diff_url": "https://github.com/huggingface/datasets/pull/7592.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/7592.patch",
"merged_at": "2025-06-09T16:45:27"
}
|
lhoestq
| true
|
[
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_7592). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.",
"Hi @lhoestq,\r\nI wanted to ask\r\nare you planning to stop supporting dataset builds using `GeneratorBasedBuilder`?\r\n\r\nIf so, could you share the reason why?",
"We stopped supporting dataset scripts altogether, whether they are based on GeneratorBasedBuilder or any other builder. This means you can't `load_dataset()` a dataset script anymore. We did this mostly for security reasons which is blocking for many users and also impossible to build upon (e.g. the for the Dataset Viewer on HF)",
"Ah, so only the `trust_remote_code` feature of `load_dataset` is deprecated, and\r\n\r\n```python\r\nfrom datasets import load_dataset_builder\r\n \r\nbuilder = load_dataset_builder('cornell-movie-review-data/rotten_tomatoes') \r\nbuilder.download_and_prepare() \r\n```\r\n\r\nwe can still load data using `load_dataset_builder` and `download_and_prepare`, right?\r\nThat's a relief. I thought the removal of `trust_remote_code` in `load_dataset` meant `GeneratorBasedBuilder` was being deprecated too, haha.\r\nGot it, thanks for the clarification!\r\n",
"Can you give an example on how to upgrade from using `trust_remote_code`? I used to load_dataset from a script generating my training data in a streaming way.",
"For guys who dislike this change +1"
] |
3,117,816,388
| 7,591
|
Add num_proc parameter to push_to_hub
|
open
| 2025-06-04T13:19:15
| 2025-06-27T06:13:54
| null |
https://github.com/huggingface/datasets/issues/7591
| null |
SwayStar123
| false
|
[
"Hi @SwayStar123 \n\nI'd be interested in taking this up. I plan to add a `num_proc` parameter to `push_to_hub()` and use parallel uploads for shards using `concurrent.futures`. Will explore whether `ThreadPoolExecutor` or `ProcessPoolExecutor` is more suitable based on current implementation. Let me know if that sounds good!\n",
"Just a quick update — `push_to_hub()` already had the `num_proc` argument in its signature and was correctly passing it internally to `_push_parquet_shards_to_hub()`.\n\nThe actual change required was inside `_push_parquet_shards_to_hub()` to enable parallel shard uploads using `multiprocessing` when `num_proc > 1`.\n\n@lhoestq @SwayStar123 ",
"> Hi @SwayStar123 \n> \n> I'd be interested in taking this up. I plan to add a `num_proc` parameter to `push_to_hub()` and use parallel uploads for shards using `concurrent.futures`. Will explore whether `ThreadPoolExecutor` or `ProcessPoolExecutor` is more suitable based on current implementation. Let me know if that sounds good!\n> \n\nHey thanks for working on it. But I'm not a hf dev so I don't know the best way to do it."
] |
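An illustrative call based on the thread above, which notes that `push_to_hub()` already exposes `num_proc` in its signature; how much of the shard upload actually runs in parallel depends on the `datasets` version, and the target repository id is a placeholder:

```python
from datasets import load_dataset

ds = load_dataset("imdb", split="train")

# num_proc is forwarded to the parquet shard preparation/upload step;
# see the discussion above for how far the parallelism currently goes.
ds.push_to_hub("my-org/imdb-copy", num_proc=4)  # hypothetical target repo
```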
3,101,654,892
| 7,590
|
`Sequence(Features(...))` causes PyArrow cast error in `load_dataset` despite correct schema.
|
closed
| 2025-05-29T22:53:36
| 2025-07-19T22:45:08
| 2025-07-19T22:45:08
|
https://github.com/huggingface/datasets/issues/7590
| null |
AHS-uni
| false
|
[
"Hi @lhoestq \n\nCould you help confirm whether this qualifies as a bug?\n\nIt looks like the issue stems from how `Sequence(Features(...))` is interpreted as a plain struct during schema inference, which leads to a mismatch when casting with PyArrow (especially with nested structs inside lists). From the description, this seems like an inconsistency with expected behavior.\n\nIf confirmed, I’d be happy to take a shot at investigating and potentially submitting a fix.\n\nAlso looping in @AHS-uni — could you kindly share a minimal JSONL example that reproduces this?\n\nThanks!",
"Hello @Flink-ddd \n\nI updated the minimal example and included both JSON and JSONL minimal examples in the Colab notebook. \n\nHere is the minimal JSON file for convenience (can't upload JSONL files).\n\n[mini.json](https://github.com/user-attachments/files/20535145/mini.json)\n\nI've also found a number of issues which describe a similar problem:\n\n[7569](https://github.com/huggingface/datasets/issues/7569) (Open)\n[7137](https://github.com/huggingface/datasets/issues/7137) (Open)\n[7501](https://github.com/huggingface/datasets/issues/7501) (Closed)\n[2434](https://github.com/huggingface/datasets/issues/2434) (Closed)\n\nThe closed issues don't really address the problem (IMO). [7501](https://github.com/huggingface/datasets/issues/7501) provides a workaround (using a Python list instead of `Sequence`), but it seem precarious. ",
"Hi ! `Sequence({...})` corresponds to a struct of lists ([docs](https://huggingface.co/docs/datasets/v3.6.0/en/package_reference/main_classes#datasets.Features)). This come from Tensorflow Datasets.\n\nIf you want to use a list of structs, you should use `[{...}]`, e.g.\n\n```python\nitem = {\n \"id\": Value(\"string\"),\n \"data\": Value(\"string\"),\n}\n\nfeatures = Features({\n \"list\": [item],\n})\n```",
"@lhoestq Thanks for your explanation, which helps me understand the logic behind. But I'm confused how to define that in `README.md`?\n\nMy jsonl data is: \n```\n{\"answers\": [{\"text\": \"text1\", \"label\": \"label1\"}, {\"text\": \"text2\", \"label\": \"label2\"},]}\n{\"answers\": [{\"text\": \"text1\", \"label\": \"label1\"}, {\"text\": \"text2\", \"label\": \"label2\"},]}\n...\n```\n\nMy README.md look like\n```\ndataset_info:\n- config_name: default\n features:\n - name: answers\n sequence:\n - name: text\n dtype: string\n - name: label\n dtype: string\n```\nI understand `sequence` here is not correct, but what's the correct format? I tried following (`sequence -> dtype`)and seems not the case:\n```\ndataset_info:\n- config_name: default\n features:\n - name: answers\n dtype:\n - name: text\n sequence: string\n - name: label\n sequence: string\n```",
"The `List` type which doesn't have the weird dict behavior of `Sequence` has been added for `datasets` 4.0 (to be released next week). Feel free to install `datasets` from source to try it out :)\nEDIT: it's out !\n\nYou can fix the issue using `List` instead of `Sequence`, e.g. in the case of the original post:\n\n```python\n# Feature spec with List of structs\nitem = {\n \"id\": Value(\"string\"),\n \"data\": Value(\"string\"),\n}\n\nfeatures = Features({\n \"list\": List(item),\n})\n```\n\nfor which the README.md is\n\n```yaml\ndataset_info:\n- config_name: default\n features:\n - name: list\n list:\n - name: id\n dtype: string\n - name: data\n dtype: string\n```",
"@lhoestq Thanks! I didn't realize there is a `list` keyword I could use. I thought I had to use `dtype` or something. Hope there could be better documentation on the `README.md` formats. I've closed my issue #7137 "
] |
3,101,119,704
| 7,589
|
feat: use content defined chunking
|
open
| 2025-05-29T18:19:41
| 2025-08-13T18:53:03
| null |
https://github.com/huggingface/datasets/pull/7589
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/7589",
"html_url": "https://github.com/huggingface/datasets/pull/7589",
"diff_url": "https://github.com/huggingface/datasets/pull/7589.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/7589.patch",
"merged_at": null
}
|
kszucs
| true
|
[
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_7589). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.",
"Need to set `DEFAULT_MAX_BATCH_SIZE = 1024 * 1024`",
"We should consider enabling page indexes by default when writing parquet files to enable page pruning readers like the next dataset viewer https://github.com/huggingface/dataset-viewer/pull/3199",
"> Need to set DEFAULT_MAX_BATCH_SIZE = 1024 * 1024\r\n\r\nmaybe we'll need to auto-tweak the row group size to aim for a [30MB-300MB] interval, or we can end up with multiple GBs row groups",
"> maybe we'll need to auto-tweak the row group size to aim for a [30MB-300MB] interval, or we can end up with multiple GBs row groups\r\n\r\n> We should consider enabling page indexes by default when writing parquet files to enable page pruning readers like the next dataset viewer https://github.com/huggingface/dataset-viewer/pull/3199\r\n\r\nwould it make sense to use the default row group size, and expect the readers will rely on the pages index to fetch only the required bits? Not sure if it exists in duckdb.",
"> would it make sense to use the default row group size, and expect the readers will rely on the pages index to fetch only the required bits? Not sure if it exists in duckdb.\r\n\r\nmost frameworks read row group by row group, that's why we need them to be of reasonable size anyways",
"> We should consider enabling page indexes by default when writing parquet files to enable page pruning readers like the next dataset viewer https://github.com/huggingface/dataset-viewer/pull/3199\r\n\r\n<strike>where would the page indexes be stored? in the custom section in the Parquet file metadata? Is it standardized or ad hoc?</strike>\r\n\r\nOK, I just [RTFM](https://arrow.apache.org/docs/python/generated/pyarrow.parquet.write_table.html#pyarrow.parquet.write_table):\r\n\r\n> `write_page_index`: [`bool`](https://docs.python.org/3/library/stdtypes.html#bltin-boolean-values), default [False](https://docs.python.org/3/library/constants.html#False)\r\n>\r\n> Whether to write a page index in general for all columns. Writing statistics to the page index disables the old method of writing statistics to each data page header. The page index makes statistics-based filtering more efficient than the page header, as it gathers all the statistics for a Parquet file in a single place, avoiding scattered I/O. Note that the page index is not yet used on the read size by PyArrow.\r\n\r\n"
] |
3,094,012,025
| 7,588
|
ValueError: Invalid pattern: '**' can only be an entire path component [Colab]
|
closed
| 2025-05-27T13:46:05
| 2025-05-30T13:22:52
| 2025-05-30T01:26:30
|
https://github.com/huggingface/datasets/issues/7588
| null |
wkambale
| false
|
[
"Could you please run the following code snippet in your environment and share the exact output? This will help check for any compatibility issues within the env itself. \n\n```\nimport datasets\nimport huggingface_hub\nimport fsspec\n\nprint(\"datasets version:\", datasets.__version__)\nprint(\"huggingface_hub version:\", huggingface_hub.__version__)\nprint(\"fsspec version:\", fsspec.__version__)\n```",
"```bash\ndatasets version: 2.14.4\nhuggingface_hub version: 0.31.4\nfsspec version: 2025.3.2\n```",
"Version 2.14.4 is not the latest version available, in fact it is from August 08, 2023 (you can check here: https://pypi.org/project/datasets/#history)\n\nUse pip install datasets==3.6.0 to install a more recent version (from May 7, 2025)\n\nI also had the same problem with Colab, after updating to the latest version it was solved.\n\nI hope it helps",
"thank you @CleitonOERocha. it sure did help.\n\nupdating `datasets` to v3.6.0 and keeping `fsspec` on v2025.3.2 eliminates the issue.",
"Very helpful, thank you!"
] |
3,091,834,987
| 7,587
|
load_dataset splits typing
|
closed
| 2025-05-26T18:28:40
| 2025-05-26T18:31:10
| 2025-05-26T18:29:57
|
https://github.com/huggingface/datasets/pull/7587
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/7587",
"html_url": "https://github.com/huggingface/datasets/pull/7587",
"diff_url": "https://github.com/huggingface/datasets/pull/7587.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/7587.patch",
"merged_at": "2025-05-26T18:29:57"
}
|
lhoestq
| true
|
[
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_7587). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update."
] |
3,091,320,431
| 7,586
|
help is appreciated
|
open
| 2025-05-26T14:00:42
| 2025-05-26T18:21:57
| null |
https://github.com/huggingface/datasets/issues/7586
| null |
rajasekarnp1
| false
|
[
"how is this related to this repository ?"
] |
3,091,227,921
| 7,585
|
Avoid multiple default config names
|
closed
| 2025-05-26T13:27:59
| 2025-06-05T12:41:54
| 2025-06-05T12:41:52
|
https://github.com/huggingface/datasets/pull/7585
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/7585",
"html_url": "https://github.com/huggingface/datasets/pull/7585",
"diff_url": "https://github.com/huggingface/datasets/pull/7585.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/7585.patch",
"merged_at": "2025-06-05T12:41:52"
}
|
albertvillanova
| true
|
[
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_7585). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update."
] |
3,090,255,023
| 7,584
|
Add LMDB format support
|
open
| 2025-05-26T07:10:13
| 2025-05-26T18:23:37
| null |
https://github.com/huggingface/datasets/issues/7584
| null |
trotsky1997
| false
|
[
"Hi ! Can you explain what's your use case ? Is it about converting LMDB to Dataset objects (i.e. converting to Arrow) ?"
] |
3,088,987,757
| 7,583
|
load_dataset type stubs reject List[str] for split parameter, but runtime supports it
|
closed
| 2025-05-25T02:33:18
| 2025-05-26T18:29:58
| 2025-05-26T18:29:58
|
https://github.com/huggingface/datasets/issues/7583
| null |
hierr
| false
|
[] |
3,083,515,643
| 7,582
|
fix: Add embed_storage in Pdf feature
|
closed
| 2025-05-22T14:06:29
| 2025-05-22T14:17:38
| 2025-05-22T14:17:36
|
https://github.com/huggingface/datasets/pull/7582
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/7582",
"html_url": "https://github.com/huggingface/datasets/pull/7582",
"diff_url": "https://github.com/huggingface/datasets/pull/7582.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/7582.patch",
"merged_at": "2025-05-22T14:17:36"
}
|
AndreaFrancis
| true
|
[
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_7582). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update."
] |
3,083,080,413
| 7,581
|
Add missing property on `RepeatExamplesIterable`
|
closed
| 2025-05-22T11:41:07
| 2025-06-05T12:41:30
| 2025-06-05T12:41:29
|
https://github.com/huggingface/datasets/pull/7581
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/7581",
"html_url": "https://github.com/huggingface/datasets/pull/7581",
"diff_url": "https://github.com/huggingface/datasets/pull/7581.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/7581.patch",
"merged_at": "2025-06-05T12:41:29"
}
|
SilvanCodes
| true
|
[] |
3,082,993,027
| 7,580
|
Requesting a specific split (eg: test) still downloads all (train, test, val) data when streaming=False.
|
open
| 2025-05-22T11:08:16
| 2025-05-26T18:40:31
| null |
https://github.com/huggingface/datasets/issues/7580
| null |
s3pi
| false
|
[
"Hi ! There was a PR open to improve this: https://github.com/huggingface/datasets/pull/6832 \nbut it hasn't been continued so far.\n\nIt would be a cool improvement though !"
] |
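Until partial split downloads land (the PR referenced above), streaming is one way to avoid materializing the other splits locally. This sketch is not from the thread itself and uses a placeholder repository id:

```python
from datasets import load_dataset

# streaming=True reads the requested split lazily over HTTP instead of
# downloading and generating every split locally first.
ds = load_dataset("some-org/some-dataset", split="test", streaming=True)  # placeholder repo id
for example in ds.take(5):
    print(example)
```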
3,081,849,022
| 7,579
|
Fix typos in PDF and Video documentation
|
closed
| 2025-05-22T02:27:40
| 2025-05-22T12:53:49
| 2025-05-22T12:53:47
|
https://github.com/huggingface/datasets/pull/7579
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/7579",
"html_url": "https://github.com/huggingface/datasets/pull/7579",
"diff_url": "https://github.com/huggingface/datasets/pull/7579.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/7579.patch",
"merged_at": "2025-05-22T12:53:47"
}
|
AndreaFrancis
| true
|
[
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_7579). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update."
] |
3,080,833,740
| 7,577
|
arrow_schema is not compatible with list
|
closed
| 2025-05-21T16:37:01
| 2025-05-26T18:49:51
| 2025-05-26T18:32:55
|
https://github.com/huggingface/datasets/issues/7577
| null |
jonathanshen-upwork
| false
|
[
"Thanks for reporting, I'll look into it",
"Actually it looks like you just forgot parenthesis:\n\n```diff\n- f = datasets.Features({'x': list[datasets.Value(dtype='int32')]})\n+ f = datasets.Features({'x': list([datasets.Value(dtype='int32')])})\n```\n\nor simply using the `[ ]` syntax:\n\n```python\nf = datasets.Features({'x':[datasets.Value(dtype='int32')]})\n```\n\nI'm closing this issue if you don't mind",
"Ah is that what the syntax is? I don't think I was able to find an actual example of it so I assumed it was in the same way that you specify types eg. `list[int]`. This is good to know, thanks."
] |
3,080,450,538
| 7,576
|
Fix regex library warnings
|
closed
| 2025-05-21T14:31:58
| 2025-06-05T13:35:16
| 2025-06-05T12:37:55
|
https://github.com/huggingface/datasets/pull/7576
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/7576",
"html_url": "https://github.com/huggingface/datasets/pull/7576",
"diff_url": "https://github.com/huggingface/datasets/pull/7576.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/7576.patch",
"merged_at": "2025-06-05T12:37:55"
}
|
emmanuel-ferdman
| true
|
[
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_7576). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update."
] |
3,080,228,718
| 7,575
|
[MINOR:TYPO] Update save_to_disk docstring
|
closed
| 2025-05-21T13:22:24
| 2025-06-05T12:39:13
| 2025-06-05T12:39:13
|
https://github.com/huggingface/datasets/pull/7575
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/7575",
"html_url": "https://github.com/huggingface/datasets/pull/7575",
"diff_url": "https://github.com/huggingface/datasets/pull/7575.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/7575.patch",
"merged_at": "2025-06-05T12:39:13"
}
|
cakiki
| true
|
[] |
3,079,641,072
| 7,574
|
Missing multilingual directions in IWSLT2017 dataset's processing script
|
open
| 2025-05-21T09:53:17
| 2025-05-26T18:36:38
| null |
https://github.com/huggingface/datasets/issues/7574
| null |
andy-joy-25
| false
|
[
"I have opened 2 PRs on the Hub: `https://huggingface.co/datasets/IWSLT/iwslt2017/discussions/7` and `https://huggingface.co/datasets/IWSLT/iwslt2017/discussions/8` to resolve this issue",
"cool ! I pinged the owners of the dataset on HF to merge your PRs :)"
] |
3,076,415,382
| 7,573
|
No Samsum dataset
|
closed
| 2025-05-20T09:54:35
| 2025-07-21T18:34:34
| 2025-06-18T12:52:23
|
https://github.com/huggingface/datasets/issues/7573
| null |
IgorKasianenko
| false
|
[
"According to the following https://huggingface.co/posts/seawolf2357/424129432408590, as of now the dataset seems to be inaccessible.\n\n@IgorKasianenko, would https://huggingface.co/datasets/knkarthick/samsum suffice for your purpose?\n",
"Thanks @SP1029 for the update!\nThat will work for now, using it as replacement. Is there a officially recommended way to maintain the CC licensed dataset under the organization account? \nFeel free to close this issue",
"> Is there an officially recommended way to maintain a CC-licensed dataset under an organizational account?\n\n@IgorKasianenko, apologies, this is not my area of expertise.\n\n> Please feel free to close this issue.\n\nI have limited access and may not be able to do that. Since you opened it, you would be able to close it.",
"dataset_samsum = load_dataset(\"knkarthick/samsum\")\n\nis working"
] |
3,074,529,251
| 7,572
|
Fixed typos
|
closed
| 2025-05-19T17:16:59
| 2025-06-05T12:25:42
| 2025-06-05T12:25:41
|
https://github.com/huggingface/datasets/pull/7572
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/7572",
"html_url": "https://github.com/huggingface/datasets/pull/7572",
"diff_url": "https://github.com/huggingface/datasets/pull/7572.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/7572.patch",
"merged_at": "2025-06-05T12:25:41"
}
|
TopCoder2K
| true
|
[
"@lhoestq, mentioning in case you haven't seen this PR. The contribution is very small and easy to check :)"
] |
3,074,116,942
| 7,571
|
fix string_to_dict test
|
closed
| 2025-05-19T14:49:23
| 2025-05-19T14:52:24
| 2025-05-19T14:49:28
|
https://github.com/huggingface/datasets/pull/7571
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/7571",
"html_url": "https://github.com/huggingface/datasets/pull/7571",
"diff_url": "https://github.com/huggingface/datasets/pull/7571.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/7571.patch",
"merged_at": "2025-05-19T14:49:28"
}
|
lhoestq
| true
|
[
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_7571). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update."
] |
3,065,966,529
| 7,570
|
Dataset lib seems to broke after fssec lib update
|
closed
| 2025-05-15T11:45:06
| 2025-06-13T00:44:27
| 2025-06-13T00:44:27
|
https://github.com/huggingface/datasets/issues/7570
| null |
sleepingcat4
| false
|
[
"Hi, can you try updating `datasets` ? Colab still installs `datasets` 2.x by default, instead of 3.x\n\nIt would be cool to also report this to google colab, they have a GitHub repo for this IIRC",
"@lhoestq I have updated it to `datasets==3.6.0` and now there's an entirely different issue on colab while locally its fine. \n\n```\n/usr/local/lib/python3.11/dist-packages/huggingface_hub/utils/_auth.py:94: UserWarning: \nThe secret `HF_TOKEN` does not exist in your Colab secrets.\nTo authenticate with the Hugging Face Hub, create a token in your settings tab (https://huggingface.co/settings/tokens), set it as secret in your Google Colab and restart your session.\nYou will be able to reuse this secret in all of your notebooks.\nPlease note that authentication is recommended but still optional to access public models or datasets.\n warnings.warn(\nREADME.md: 100%\n 2.88k/2.88k [00:00<00:00, 166kB/s]\nsuno.jsonl.zst: 100%\n 221M/221M [00:05<00:00, 48.6MB/s]\nGenerating train split: \n 18633/0 [00:01<00:00, 13018.92 examples/s]\n---------------------------------------------------------------------------\nTypeError Traceback (most recent call last)\n[/usr/local/lib/python3.11/dist-packages/datasets/builder.py](https://localhost:8080/#) in _prepare_split_single(self, gen_kwargs, fpath, file_format, max_shard_size, job_id)\n 1870 try:\n-> 1871 writer.write_table(table)\n 1872 except CastError as cast_error:\n\n17 frames\nTypeError: Couldn't cast array of type\nstruct<id: string, type: string, infill: bool, source: string, continue_at: double, infill_dur_s: double, infill_end_s: double, infill_start_s: double, include_future_s: double, include_history_s: double, infill_context_end_s: double, infill_context_start_s: int64>\nto\n{'id': Value(dtype='string', id=None), 'type': Value(dtype='string', id=None), 'infill': Value(dtype='bool', id=None), 'source': Value(dtype='string', id=None), 'continue_at': Value(dtype='float64', id=None), 'include_history_s': Value(dtype='float64', id=None)}\n\nThe above exception was the direct cause of the following exception:\n\nDatasetGenerationError Traceback (most recent call last)\n[/usr/local/lib/python3.11/dist-packages/datasets/builder.py](https://localhost:8080/#) in _prepare_split_single(self, gen_kwargs, fpath, file_format, max_shard_size, job_id)\n 1896 if isinstance(e, DatasetGenerationError):\n 1897 raise\n-> 1898 raise DatasetGenerationError(\"An error occurred while generating the dataset\") from e\n 1899 \n 1900 yield job_id, True, (total_num_examples, total_num_bytes, writer._features, num_shards, shard_lengths)\n\nDatasetGenerationError: An error occurred while generating the dataset\n```",
"@lhoestq opps sorry the dataset was in .zst which was causing this error rather than being a datasets library fault. After upgrading dataset version Colab is working fine. "
] |
3,061,234,054
| 7,569
|
Dataset creation is broken if nesting a dict inside a dict inside a list
|
open
| 2025-05-13T21:06:45
| 2025-05-20T19:25:15
| null |
https://github.com/huggingface/datasets/issues/7569
| null |
TimSchneider42
| false
|
[
"Hi ! That's because Séquence is a type that comes from tensorflow datasets and inverts lists and focus when doing Séquence(dict).\n\nInstead you should use a list. In your case\n```python\nfeatures = Features({\n \"a\": [{\"b\": {\"c\": Value(\"string\")}}]\n})\n```",
"Hi,\n\nThanks for the swift reply! Could you quickly clarify a couple of points?\n\n1. Is there any benefit in using Sequence over normal lists? Especially for longer lists (in my case, up to 256 entries)\n2. When exactly can I use Sequence? If there is a maximum of one level of dictionaries inside, then it's always fine?\n3. When creating the data in the generator, do I need to swap lists and dicts manually, or does that happen automatically?\n\nAlso, the documentation does not seem to mention this limitation of the Sequence type anywhere and encourages users to use it [here](https://huggingface.co/docs/datasets/en/about_dataset_features). In fact, I did not even know that just using a Python list was an option. Maybe the documentation can be improved to mention the limitations of Sequence and highlight that lists can be used instead.\n\nThanks a lot in advance!\n\nBest,\nTim"
] |
3,060,515,257
| 7,568
|
`IterableDatasetDict.map()` call removes `column_names` (in fact info.features)
|
open
| 2025-05-13T15:45:42
| 2025-06-30T09:33:47
| null |
https://github.com/huggingface/datasets/issues/7568
| null |
mombip
| false
|
[
"Hi ! IterableDataset doesn't know what's the output of the function you pass to map(), so it's not possible to know in advance the features of the output dataset.\n\nThere is a workaround though: either do `ds = ds.map(..., features=features)`, or you can do `ds = ds._resolve_features()` which iterates on the first rows to infer the dataset features.",
"Thank you. I understand that “IterableDataset doesn't know what's the output of the function”—that’s true, but:\n\nUnfortunately, the workaround you proposed **doesn’t solve** the problem. `ds.map()` is called multiple times by third-party code (i.e. `SFTTrainer`). To apply your approach, I would have to modify external library code. That’s why I decided to patch the _class_ rather than update `dataset` _objects_ (in fact, updating the object after `map()` was my initial approach, but then I realized I’m not the only one mapping an already-mapped dataset.)\n\nAs a user, I expected that after mapping I would get a new dataset with the correct column names. If, for some reason, that can’t be the default behavior, I would expect an argument—i.e. `auto_resolve_features: bool = False` — to control how my dataset is mapped if following mapping operation are called.\n\nIt’s also problematic that `column_names` are tied to `features`, which is even more confusing and forces you to inspect the source code to understand what’s going on.\n\n**New version of workaround:**\n```python\ndef patch_iterable_dataset_map():\n _orig_map = IterableDataset.map\n\n def _patched_map(self, *args, **kwargs):\n ds = _orig_map(self, *args, **kwargs)\n return ds._resolve_features()\n\n IterableDataset.map = _patched_map\n```",
"I see, maybe `.resolve_features()` should be called by default in this case in the SFTTrainer ? (or pass `features=` if the data processing always output the same features)\n\nWe can even support a new parameter `features=\"infer\"` if it would be comfortable to not use internal methods in SFTTrainer",
"I think most straightforward solution would be to reinitialize `features` from data after mapping if `feature` argument is not passed. I hink it is more intuitive behavior than just cleaning features. There is also problem in usage `.resolve_features()` in this context. I observed that it leads to `_head()` method execution and it then causes that 5 batches from dataset are iterated (`_head()` defaults to 5 batches). \nI'm not sure how it influences whole process. Are those 5 batches (in my case it's 5000 rows) used only to find `features`. Does final training/eval process \"see\" this items? How it affects IterableDataset state (current position)?",
"I checked the source code and while it indeed iterates on the first 5 rows. As a normal iteration, it does record the state in case you call `.state_dict()`, but it doesn't change the starting state. The starting state is always the beginning of the dataset, unless it is explicitly set with `.load_state_dict()`. To be clear, if you iterate on the dataset after `._resolve_features()`, it will start from the beginning of the dataset (or from a state you manually pass using `.load_state_dict()`)",
"Hi!\nI’ve opened a PR #7658 to address this issue.\n\nThe fix ensures that info.features is only updated if features is not None, preventing accidental loss of schema and column_names.\nPlease let me know if you see any edge cases or have additional concerns!\nAlso, if a test is needed for this case, happy to discuss—the fix is small, but I can add one if the maintainers prefer.\n\nThanks everyone for the clear diagnosis and suggestions in this thread!"
] |
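A minimal sketch of the two workarounds mentioned in the thread above (`features=` and `_resolve_features()`); the toy dataset and column names are made up for illustration:

```python
from datasets import Dataset, Features, Value

# Toy iterable dataset; in practice this would come from load_dataset(..., streaming=True).
ds = Dataset.from_dict({"text": ["a", "bb"]}).to_iterable_dataset()

def add_len(example):
    return {"n_chars": len(example["text"])}

# Option 1: declare the output schema so column_names stays known after map().
features = Features({"text": Value("string"), "n_chars": Value("int64")})
mapped = ds.map(add_len, features=features)
print(mapped.column_names)  # ['text', 'n_chars']

# Option 2: infer the schema by peeking at the first rows (internal helper
# discussed in the thread; iteration still restarts from the beginning).
mapped2 = ds.map(add_len)._resolve_features()
print(mapped2.column_names)  # ['text', 'n_chars']
```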
3,058,308,538
| 7,567
|
interleave_datasets seed with multiple workers
|
open
| 2025-05-12T22:38:27
| 2025-06-29T06:53:59
| null |
https://github.com/huggingface/datasets/issues/7567
| null |
jonathanasdf
| false
|
[
"Hi ! It's already the case IIRC: the effective seed looks like `seed + worker_id`. Do you have a reproducible example ?",
"here is an example with shuffle\n\n```\nimport itertools\nimport datasets\nimport multiprocessing\nimport torch.utils.data\n\n\ndef gen(shard):\n worker_info = torch.utils.data.get_worker_info()\n for i in range(10):\n yield {'value': i, 'worker_id': worker_info.id}\n\n\ndef main():\n ds = datasets.IterableDataset.from_generator(gen, gen_kwargs={'shard': list(range(8))})\n ds = ds.shuffle(buffer_size=100, seed=1234)\n dataloader = torch.utils.data.DataLoader(ds, batch_size=None, num_workers=8)\n for i, ex in enumerate(itertools.islice(dataloader, 50)):\n print(i, ex)\n\n\nif __name__ == '__main__':\n multiprocessing.set_start_method('spawn')\n main()\n```\n\n```\npython test.py\n0 {'value': 8, 'worker_id': 0}\n1 {'value': 8, 'worker_id': 1}\n2 {'value': 8, 'worker_id': 2}\n3 {'value': 8, 'worker_id': 3}\n4 {'value': 8, 'worker_id': 4}\n5 {'value': 8, 'worker_id': 5}\n6 {'value': 8, 'worker_id': 6}\n7 {'value': 8, 'worker_id': 7}\n8 {'value': 9, 'worker_id': 0}\n9 {'value': 9, 'worker_id': 1}\n10 {'value': 9, 'worker_id': 2}\n11 {'value': 9, 'worker_id': 3}\n12 {'value': 9, 'worker_id': 4}\n13 {'value': 9, 'worker_id': 5}\n14 {'value': 9, 'worker_id': 6}\n15 {'value': 9, 'worker_id': 7}\n16 {'value': 5, 'worker_id': 0}\n17 {'value': 5, 'worker_id': 1}\n18 {'value': 5, 'worker_id': 2}\n19 {'value': 5, 'worker_id': 3}\n```",
"With `interleave_datasets`\n\n```\nimport itertools\nimport datasets\nimport multiprocessing\nimport torch.utils.data\n\n\ndef gen(shard, value):\n while True:\n yield {'value': value}\n\n\ndef main():\n ds = [\n datasets.IterableDataset.from_generator(gen, gen_kwargs={'shard': list(range(8)), 'value': i})\n for i in range(10)\n ]\n ds = datasets.interleave_datasets(ds, probabilities=[1 / len(ds)] * len(ds), seed=1234)\n dataloader = torch.utils.data.DataLoader(ds, batch_size=None, num_workers=8)\n for i, ex in enumerate(itertools.islice(dataloader, 50)):\n print(i, ex)\n\n\nif __name__ == '__main__':\n multiprocessing.set_start_method('spawn')\n main()\n```\n\n```\npython test.py\n0 {'value': 9}\n1 {'value': 9}\n2 {'value': 9}\n3 {'value': 9}\n4 {'value': 9}\n5 {'value': 9}\n6 {'value': 9}\n7 {'value': 9}\n8 {'value': 3}\n9 {'value': 3}\n10 {'value': 3}\n11 {'value': 3}\n12 {'value': 3}\n13 {'value': 3}\n14 {'value': 3}\n15 {'value': 3}\n16 {'value': 9}\n17 {'value': 9}\n18 {'value': 9}\n19 {'value': 9}\n20 {'value': 9}\n21 {'value': 9}\n22 {'value': 9}\n23 {'value': 9}\n```",
"Same results after updating to datasets 3.6.0.",
"Ah my bad, `shuffle()` uses a global effective seed which is something like `seed + epoch`, which is used to do the same shards shuffle in each worker so that each worker have a non-overlapping set of shards:\n\nhttps://github.com/huggingface/datasets/blob/b9efdc64c3bfb8f21f8a4a22b21bddd31ecd5a31/src/datasets/iterable_dataset.py#L2102-L2111\n\nI think we should take into account the `worker_id` in a local seed for the buffer right after this line:\n\nhttps://github.com/huggingface/datasets/blob/b9efdc64c3bfb8f21f8a4a22b21bddd31ecd5a31/src/datasets/iterable_dataset.py#L2151-L2153\n\nlike adding a new step that would propagate in the examples iterables or something like that:\n\n```python\nex_iterable = ex_iterable.shift_rngs(value=worker_id)\n```\n\nis this something you'd like to explore ? contributions on this subject are very welcome",
"Potentially, but busy. If anyone wants to take this up please feel free to, otherwise I may or may not revisit when I have free time.\n\nFor what it's worth I got around this with\n\n```\n\nclass SeedGeneratorWithWorkerIterable(iterable_dataset._BaseExamplesIterable):\n \"\"\"ExamplesIterable that seeds the rng with worker id.\"\"\"\n\n def __init__(\n self,\n ex_iterable: iterable_dataset._BaseExamplesIterable,\n generator: np.random.Generator,\n rank: int = 0,\n ):\n \"\"\"Constructor.\"\"\"\n super().__init__()\n self.ex_iterable = ex_iterable\n self.generator = generator\n self.rank = rank\n\n def _init_state_dict(self) -> dict:\n self._state_dict = self.ex_iterable._init_state_dict()\n return self._state_dict\n\n def __iter__(self):\n \"\"\"Data iterator.\"\"\"\n effective_seed = copy.deepcopy(self.generator).integers(0, 1 << 63) - self.rank\n effective_seed = (1 << 63) + effective_seed if effective_seed < 0 else effective_seed\n generator = np.random.default_rng(effective_seed)\n self.ex_iterable = self.ex_iterable.shuffle_data_sources(generator)\n if self._state_dict:\n self._state_dict = self.ex_iterable._init_state_dict()\n yield from iter(self.ex_iterable)\n\n def shuffle_data_sources(self, generator):\n \"\"\"Shuffle data sources.\"\"\"\n ex_iterable = self.ex_iterable.shuffle_data_sources(generator)\n return SeedGeneratorWithWorkerIterable(ex_iterable, generator=generator, rank=self.rank)\n\n def shard_data_sources(self, num_shards: int, index: int, contiguous=True): # noqa: FBT002\n \"\"\"Shard data sources.\"\"\"\n ex_iterable = self.ex_iterable.shard_data_sources(num_shards, index, contiguous=contiguous)\n return SeedGeneratorWithWorkerIterable(ex_iterable, generator=self.generator, rank=index)\n\n @property\n def is_typed(self):\n return self.ex_iterable.is_typed\n\n @property\n def features(self):\n return self.ex_iterable.features\n\n @property\n def num_shards(self) -> int:\n \"\"\"Number of shards.\"\"\"\n return self.ex_iterable.num_shards\n```",
"Thanks for the detailed insights!\n\nAfter reviewing the issue and the current implementation in `iterable_dataset.py`, I can confirm the cause:\n\nWhen using `interleave_datasets(..., seed=...)` with `num_workers > 1` (e.g. via `DataLoader`), the same RNG state is shared across workers — which leads to each worker producing identical sample sequences. This is because the seed is not modulated by `worker_id`, unlike the usual approach in `shuffle()` where seed is adjusted using the `epoch`.\n\nAs @lhoestq suggested, a proper fix would involve introducing something like:\n\n```python\nex_iterable = ex_iterable.shift_rngs(worker_id)\n```\n\n@jonathanasdf Also really appreciate the workaround implementation shared above — that was helpful to validate the behavior and will help shape the general solution."
] |
3,055,279,344
| 7,566
|
terminate called without an active exception; Aborted (core dumped)
|
open
| 2025-05-11T23:05:54
| 2025-06-23T17:56:02
| null |
https://github.com/huggingface/datasets/issues/7566
| null |
alexey-milovidov
| false
|
[
"@alexey-milovidov I followed the code snippet, but am able to successfully execute without any error. Could you please verify if the error persists or there is any additional details.",
"@alexey-milovidov else if the problem does not exist please feel free to close this issue.",
"```\nmilovidov@milovidov-pc:~/work/datasets$ \n./main.py \nResolving data files: 100%|███████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 25868/25868 [00:05<00:00, 4753.90it/s]\nResolving data files: 100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 25868/25868 [00:00<00:00, 238798.85it/s]\n{'text': \"How AP reported in all formats from tornado-stricken regionsMarch 8, 2012\\nWhen the first serious bout of tornadoes of 2012 blew through middle America in the middle of the night, they touched down in places hours from any AP bureau. Our closest video journalist was Chicago-based Robert Ray, who dropped his plans to travel to Georgia for Super Tuesday, booked several flights to the cities closest to the strikes and headed for the airport. He’d decide once there which flight to take.\\nHe never got on board a plane. Instead, he ended up driving toward Harrisburg, Ill., where initial reports suggested a town was destroyed. That decision turned out to be a lucky break for the AP. Twice.\\nRay was among the first journalists to arrive and he confirmed those reports -- in all formats. He shot powerful video, put victims on the phone with AP Radio and played back sound to an editor who transcribed the interviews and put the material on text wires. He then walked around the devastation with the Central Regional Desk on the line, talking to victims with the phone held so close that editors could transcribe his interviews in real time.\\nRay also made a dramatic image of a young girl who found a man’s prosthetic leg in the rubble, propped it up next to her destroyed home and spray-painted an impromptu sign: “Found leg. Seriously.”\\nThe following day, he was back on the road and headed for Georgia and a Super Tuesday date with Newt Gingrich’s campaign. The drive would take him through a stretch of the South that forecasters expected would suffer another wave of tornadoes.\\nTo prevent running into THAT storm, Ray used his iPhone to monitor Doppler radar, zooming in on extreme cells and using Google maps to direct himself to safe routes. And then the journalist took over again.\\n“When weather like that occurs, a reporter must seize the opportunity to get the news out and allow people to see, hear and read the power of nature so that they can take proper shelter,” Ray says.\\nSo Ray now started to use his phone to follow the storms. He attached a small GoPro camera to his steering wheel in case a tornado dropped down in front of the car somewhere, and took video of heavy rain and hail with his iPhone. Soon, he spotted a tornado and the chase was on. He followed an unmarked emergency vehicle to Cleveland, Tenn., where he was first on the scene of the storm's aftermath.\\nAgain, the tornadoes had struck in locations that were hours from the nearest AP bureau. Damage and debris, as well as a wickedly violent storm that made travel dangerous, slowed our efforts to get to the news. That wasn’t a problem in Tennessee, where our customers were well served by an all-formats report that included this text story.\\n“CLEVELAND, Tenn. (AP) _ Fierce wind, hail and rain lashed Tennessee for the second time in three days, and at least 15 people were hospitalized Friday in the Chattanooga area.”\\nThe byline? 
Robert Ray.\\nFor being adept with technology, chasing after news as it literally dropped from the sky and setting a standard for all-formats reporting that put the AP ahead on the most competitive news story of the day, Ray wins this week’s $300 Best of the States prize.\\n© 2013 The Associated Press. All rights reserved. Terms and conditions apply. See AP.org for details.\", 'id': '<urn:uuid:d66bc6fe-8477-4adf-b430-f6a558ccc8ff>', 'dump': 'CC-MAIN-2013-20', 'url': 'http://%20jwashington@ap.org/Content/Press-Release/2012/How-AP-reported-in-all-formats-from-tornado-stricken-regions', 'date': '2013-05-18T05:48:54Z', 'file_path': 's3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368696381249/warc/CC-MAIN-20130516092621-00000-ip-10-60-113-184.ec2.internal.warc.gz', 'language': 'en', 'language_score': 0.9721424579620361, 'token_count': 717}\nterminate called without an active exception\nAborted (core dumped)\nmilovidov@milovidov-pc:~/work/datasets$ \npython3 --version\nPython 3.10.12\n```",
"Thank you @alexey-milovidov for the details, was able to reproduce the issue.\n\nFollowing is a preliminary analysis which would help to further isolate the issue:\nOn local: \n- For alternate datasets e.g. `speed/english_quotes_paraphrase` instead of `HuggingFaceFW/fineweb` the code works\n- Multiple calls of `print(next(iter(dataset)))` can be performed successfully before the `terminate` is raised, indicating possibility of issue when connection is closed\n\nOn colab:\n- The above code works properly"
] |
3,051,731,207
| 7,565
|
add check if repo exists for dataset uploading
|
open
| 2025-05-09T10:27:00
| 2025-06-09T14:39:23
| null |
https://github.com/huggingface/datasets/pull/7565
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/7565",
"html_url": "https://github.com/huggingface/datasets/pull/7565",
"diff_url": "https://github.com/huggingface/datasets/pull/7565.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/7565.patch",
"merged_at": null
}
|
Samoed
| true
|
[
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_7565). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.",
"@lhoestq Can you review, please? I don't think that errors in CI are related to my changes"
] |
3,049,275,226
| 7,564
|
Implementation of iteration over values of a column in an IterableDataset object
|
closed
| 2025-05-08T14:59:22
| 2025-05-19T12:15:02
| 2025-05-19T12:15:02
|
https://github.com/huggingface/datasets/pull/7564
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/7564",
"html_url": "https://github.com/huggingface/datasets/pull/7564",
"diff_url": "https://github.com/huggingface/datasets/pull/7564.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/7564.patch",
"merged_at": "2025-05-19T12:15:02"
}
|
TopCoder2K
| true
|
[
"A couple of questions:\r\n1. I've noticed two strange things: 1) \"Around 80% of the final dataset is made of the `en_dataset`\" in https://huggingface.co/docs/datasets/stream, 2) \"Click on \"Pull request\" to send your to the project maintainers\" in https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md Are `en_dataset` and \"your [???]\" typos? If so, I can fix them in this PR.\r\n2. Should I update https://huggingface.co/docs/datasets/stream or https://huggingface.co/docs/datasets/access#iterabledataset to include the new feature?",
"Great ! and chained indexing was easy indeed, thanks :)\r\n\r\nregarding your questions:\r\n\r\n> I've noticed two strange things: 1) \"Around 80% of the final dataset is made of the en_dataset\" in https://huggingface.co/docs/datasets/stream, 2) \"Click on \"Pull request\" to send your to the project maintainers\" in https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md Are en_dataset and \"your [???]\" typos? If so, I can fix them in this PR.\r\n\r\nOh good catch, both should be fixed indeed. Feel free to open a new PR for those docs fixes\r\n\r\n> Should I update https://huggingface.co/docs/datasets/stream or https://huggingface.co/docs/datasets/access#iterabledataset to include the new feature?\r\n\r\nYep good idea, I think in both places, since /stream is supposed to be exhaustive, and /access already mentions accessing a specific column for `Dataset`",
"@lhoestq, thank you for the answers!\r\n\r\n> Yep good idea, I think in both places, since /stream is supposed to be exhaustive, and /access already mentions accessing a specific column for Dataset\r\n\r\n👍, I'll try to add something.\r\n\r\nBy the way, do you have any ideas about why the CI pipelines have failed? Essentially, I've already encountered these problems [here](https://github.com/huggingface/datasets/issues/7381#issuecomment-2863421974).\r\nI think `check_code_quality` has failed due to the usage of `pre-commit`. The problem seems to be the old version of the ruff hook. I've tried `v0.11.8` (the one that was installed with `pip install -e \".[quality]\"`) and `pre-commit` seems to work like `make style` now. However, I don't have any ideas about `pyav` since I don't know what it is...",
"I've updated /stream and /access, please check the style and clarity. By the way, I would like to add `IterableDataset.skip` near `IterableDataset.take` to mimic [slicing](https://huggingface.co/docs/datasets/access/#slicing). What do you think?",
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_7564). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update."
] |
3,046,351,253
| 7,563
|
set dev version
|
closed
| 2025-05-07T15:18:29
| 2025-05-07T15:21:05
| 2025-05-07T15:18:36
|
https://github.com/huggingface/datasets/pull/7563
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/7563",
"html_url": "https://github.com/huggingface/datasets/pull/7563",
"diff_url": "https://github.com/huggingface/datasets/pull/7563.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/7563.patch",
"merged_at": "2025-05-07T15:18:36"
}
|
lhoestq
| true
|
[
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_7563). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update."
] |
3,046,339,430
| 7,562
|
release: 3.6.0
|
closed
| 2025-05-07T15:15:13
| 2025-05-07T15:17:46
| 2025-05-07T15:15:21
|
https://github.com/huggingface/datasets/pull/7562
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/7562",
"html_url": "https://github.com/huggingface/datasets/pull/7562",
"diff_url": "https://github.com/huggingface/datasets/pull/7562.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/7562.patch",
"merged_at": "2025-05-07T15:15:20"
}
|
lhoestq
| true
|
[
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_7562). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update."
] |
3,046,302,653
| 7,561
|
NotImplementedError: <class 'datasets.iterable_dataset.RepeatExamplesIterable'> doesn't implement num_shards yet
|
closed
| 2025-05-07T15:05:42
| 2025-06-05T12:41:30
| 2025-06-05T12:41:30
|
https://github.com/huggingface/datasets/issues/7561
| null |
cyanic-selkie
| false
|
[] |
3,046,265,500
| 7,560
|
fix decoding tests
|
closed
| 2025-05-07T14:56:14
| 2025-05-07T14:59:02
| 2025-05-07T14:56:20
|
https://github.com/huggingface/datasets/pull/7560
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/7560",
"html_url": "https://github.com/huggingface/datasets/pull/7560",
"diff_url": "https://github.com/huggingface/datasets/pull/7560.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/7560.patch",
"merged_at": "2025-05-07T14:56:20"
}
|
lhoestq
| true
|
[
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_7560). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update."
] |
3,046,177,078
| 7,559
|
fix aiohttp import
|
closed
| 2025-05-07T14:31:32
| 2025-05-07T14:34:34
| 2025-05-07T14:31:38
|
https://github.com/huggingface/datasets/pull/7559
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/7559",
"html_url": "https://github.com/huggingface/datasets/pull/7559",
"diff_url": "https://github.com/huggingface/datasets/pull/7559.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/7559.patch",
"merged_at": "2025-05-07T14:31:38"
}
|
lhoestq
| true
|
[
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_7559). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update."
] |
3,046,066,628
| 7,558
|
fix regression
|
closed
| 2025-05-07T13:56:03
| 2025-05-07T13:58:52
| 2025-05-07T13:56:18
|
https://github.com/huggingface/datasets/pull/7558
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/7558",
"html_url": "https://github.com/huggingface/datasets/pull/7558",
"diff_url": "https://github.com/huggingface/datasets/pull/7558.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/7558.patch",
"merged_at": "2025-05-07T13:56:18"
}
|
lhoestq
| true
|
[
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_7558). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update."
] |
3,045,962,076
| 7,557
|
check for empty _formatting
|
closed
| 2025-05-07T13:22:37
| 2025-05-07T13:57:12
| 2025-05-07T13:57:12
|
https://github.com/huggingface/datasets/pull/7557
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/7557",
"html_url": "https://github.com/huggingface/datasets/pull/7557",
"diff_url": "https://github.com/huggingface/datasets/pull/7557.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/7557.patch",
"merged_at": null
}
|
winglian
| true
|
[
"Thanks for reporting and for the fix ! I tried to reorganize the condition in your PR but didn't get the right permission so. I ended up merging https://github.com/huggingface/datasets/pull/7558 directly so I can make a release today - I hope you don't mind"
] |
3,043,615,210
| 7,556
|
Add `--merge-pull-request` option for `convert_to_parquet`
|
closed
| 2025-05-06T18:05:05
| 2025-07-18T19:09:10
| 2025-07-18T19:09:10
|
https://github.com/huggingface/datasets/pull/7556
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/7556",
"html_url": "https://github.com/huggingface/datasets/pull/7556",
"diff_url": "https://github.com/huggingface/datasets/pull/7556.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/7556.patch",
"merged_at": null
}
|
klamike
| true
|
[
"This is ready for a review, happy to make any changes. The main question for maintainers is how this should interact with #7555. If my suggestion there is accepted, this PR can be kept as is. If not, more changes are required to merge all the PR parts.",
"Closing since convert to parquet has been removed... https://github.com/huggingface/datasets/pull/7592#issuecomment-3073053138"
] |
3,043,089,844
| 7,554
|
datasets downloads and generates all splits, even though a single split is requested (for dataset with loading script)
|
closed
| 2025-05-06T14:43:38
| 2025-05-07T14:53:45
| 2025-05-07T14:53:44
|
https://github.com/huggingface/datasets/issues/7554
| null |
sei-eschwartz
| false
|
[
"Hi ! there has been some effort on allowing to download only a subset of splits in https://github.com/huggingface/datasets/pull/6832 but no one has been continuing this work so far. This would be a welcomed contribution though\n\nAlso note that loading script are often unoptimized, and we recommend using datasets in standard formats like Parquet instead.\n\nBtw there is a CLI tool to convert a loading script to parquet:\n\n```\ndatasets-cli convert_to_parquet <dataset-name> --trust_remote_code\n```",
"Closing in favor of #6832 "
] |
3,042,953,907
| 7,553
|
Rebatch arrow iterables before formatted iterable
|
closed
| 2025-05-06T13:59:58
| 2025-05-07T13:17:41
| 2025-05-06T14:03:42
|
https://github.com/huggingface/datasets/pull/7553
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/7553",
"html_url": "https://github.com/huggingface/datasets/pull/7553",
"diff_url": "https://github.com/huggingface/datasets/pull/7553.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/7553.patch",
"merged_at": "2025-05-06T14:03:41"
}
|
lhoestq
| true
|
[
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_7553). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.",
"@lhoestq Our CI found an issue with this changeset causing a regression with shuffling iterable datasets \r\n<img width=\"884\" alt=\"Screenshot 2025-05-07 at 9 16 52 AM\" src=\"https://github.com/user-attachments/assets/bf7d9c7e-cc14-47da-8da6-d1a345992d7c\" />\r\n"
] |
3,040,258,084
| 7,552
|
Enable xet in push to hub
|
closed
| 2025-05-05T17:02:09
| 2025-05-06T12:42:51
| 2025-05-06T12:42:48
|
https://github.com/huggingface/datasets/pull/7552
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/7552",
"html_url": "https://github.com/huggingface/datasets/pull/7552",
"diff_url": "https://github.com/huggingface/datasets/pull/7552.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/7552.patch",
"merged_at": "2025-05-06T12:42:48"
}
|
lhoestq
| true
|
[
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_7552). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update."
] |
3,038,114,928
| 7,551
|
Issue with offline mode and partial dataset cached
|
open
| 2025-05-04T16:49:37
| 2025-05-13T03:18:43
| null |
https://github.com/huggingface/datasets/issues/7551
| null |
nrv
| false
|
[
"It seems the problem comes from builder.py / create_config_id()\n\nOn the first call, when the cache is empty we have\n```\nconfig_kwargs = {'data_files': {'train': ['hf://datasets/uonlp/CulturaX@6a8734bc69fefcbb7735f4f9250f43e4cd7a442e/fr/fr_part_00038.parquet']}}\n```\nleading to config_id beeing 'default-2935e8cdcc21c613'\n\nthen, on the second call, \n```\nconfig_kwargs = {'data_files': 'fr/fr_part_00038.parquet'}\n```\nthus explaining why the hash is not the same, despite having the same parameter when calling load_dataset : data_files=\"fr/fr_part_00038.parquet\"",
"Same behavior with version 3.5.1",
"Same issue when loading `google/IndicGenBench_flores_in` with `dataset==2.21.0` and `dataset==3.6.0` .",
"\n\n\n> It seems the problem comes from builder.py / create_config_id()\n> \n> On the first call, when the cache is empty we have\n> \n> ```\n> config_kwargs = {'data_files': {'train': ['hf://datasets/uonlp/CulturaX@6a8734bc69fefcbb7735f4f9250f43e4cd7a442e/fr/fr_part_00038.parquet']}}\n> ```\n> \n> leading to config_id beeing 'default-2935e8cdcc21c613'\n> \n> then, on the second call,\n> \n> ```\n> config_kwargs = {'data_files': 'fr/fr_part_00038.parquet'}\n> ```\n> \n> thus explaining why the hash is not the same, despite having the same parameter when calling load_dataset : data_files=\"fr/fr_part_00038.parquet\"\n\n\nI have identified that the issue indeed lies in the `data_files` within `config_kwargs`. \nThe format and prefix of `data_files` differ depending on whether `HF_HUB_OFFLINE` is set, leading to different final `config_id` values. \nWhen I use other datasets without passing the `data_files` parameter, this issue does not occur.\n\nA possible solution might be to standardize the formatting of `data_files` within the `create_config_id` function."
] |
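The comment thread on issue 7551 above traces the offline cache miss to `create_config_id()` hashing two different shapes of `data_files` (a resolved `hf://` URL dict when online, a bare relative path when offline). Below is a minimal sketch of the normalization idea proposed in the last comment, using a hypothetical `normalize_data_files` helper and a plain stdlib hash rather than the real `datasets` internals:

```python
import hashlib
import json


def normalize_data_files(data_files):
    """Coerce the user-supplied data_files argument into one canonical shape
    (split -> sorted list of repo-relative paths) before hashing, so that
    'fr/fr_part_00038.parquet' and its resolved hf:// URL variant yield the
    same cache key. Hypothetical helper, not the actual datasets internals."""
    if isinstance(data_files, str):
        data_files = {"train": [data_files]}
    elif isinstance(data_files, (list, tuple)):
        data_files = {"train": list(data_files)}
    normalized = {}
    for split, files in data_files.items():
        # Strip resolved prefixes such as 'hf://datasets/<repo>@<sha>/' so only
        # the repo-relative path contributes to the hash.
        normalized[split] = sorted(
            f.split("@", 1)[-1].split("/", 1)[-1] if f.startswith("hf://") else f
            for f in files
        )
    return normalized


def config_id_suffix(config_kwargs):
    """Stable hash over normalized kwargs (stand-in for create_config_id)."""
    kwargs = dict(config_kwargs)
    if "data_files" in kwargs:
        kwargs["data_files"] = normalize_data_files(kwargs["data_files"])
    payload = json.dumps(kwargs, sort_keys=True).encode("utf-8")
    return hashlib.sha256(payload).hexdigest()[:16]


# Both call shapes quoted in the issue now map to the same suffix:
online = {"data_files": {"train": [
    "hf://datasets/uonlp/CulturaX@6a8734bc69fefcbb7735f4f9250f43e4cd7a442e/fr/fr_part_00038.parquet"
]}}
offline = {"data_files": "fr/fr_part_00038.parquet"}
assert config_id_suffix(online) == config_id_suffix(offline)
```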
3,037,017,367
| 7,550
|
disable aiohttp depend for python 3.13t free-threading compat
|
closed
| 2025-05-03T00:28:18
| 2025-05-03T00:28:24
| 2025-05-03T00:28:24
|
https://github.com/huggingface/datasets/pull/7550
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/7550",
"html_url": "https://github.com/huggingface/datasets/pull/7550",
"diff_url": "https://github.com/huggingface/datasets/pull/7550.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/7550.patch",
"merged_at": null
}
|
Qubitium
| true
|
[] |
3,036,272,015
| 7,549
|
TypeError: Couldn't cast array of type string to null on webdataset format dataset
|
open
| 2025-05-02T15:18:07
| 2025-05-02T15:37:05
| null |
https://github.com/huggingface/datasets/issues/7549
| null |
narugo1992
| false
|
[
"seems to get fixed by explicitly adding `dataset_infos.json` like this\n\n```json\n{\n \"default\": {\n \"description\": \"Image dataset with tags and ratings\",\n \"citation\": \"\",\n \"homepage\": \"\",\n \"license\": \"\",\n \"features\": {\n \"image\": {\n \"dtype\": \"image\",\n \"_type\": \"Image\"\n },\n \"json\": {\n \"id\": {\n \"dtype\": \"int32\",\n \"_type\": \"Value\"\n },\n \"width\": {\n \"dtype\": \"int32\",\n \"_type\": \"Value\"\n },\n \"height\": {\n \"dtype\": \"int32\",\n \"_type\": \"Value\"\n },\n \"rating\": {\n \"feature\": {\n \"dtype\": \"string\",\n \"_type\": \"Value\"\n },\n \"_type\": \"Sequence\"\n },\n \"general_tags\": {\n \"feature\": {\n \"dtype\": \"string\",\n \"_type\": \"Value\"\n },\n \"_type\": \"Sequence\"\n },\n \"character_tags\": {\n \"feature\": {\n \"dtype\": \"string\",\n \"_type\": \"Value\"\n },\n \"_type\": \"Sequence\"\n }\n }\n },\n \"builder_name\": \"webdataset\",\n \"config_name\": \"default\",\n \"version\": {\n \"version_str\": \"1.0.0\",\n \"description\": null,\n \"major\": 1,\n \"minor\": 0,\n \"patch\": 0\n }\n }\n}\n\n```\n\nwill close this issue if no further issues found"
] |
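The fix quoted above for issue 7549 pins the schema in a hand-written `dataset_infos.json`. An alternative that usually avoids the same string-to-null cast error is to pass that schema as `features=` at load time; the sketch below assumes local tar shards at `data/*.tar` (a placeholder path) and reuses the column names from that comment:

```python
from datasets import Features, Image, Sequence, Value, load_dataset

# Python mirror of the schema spelled out in the dataset_infos.json above.
# The column names ("image", "json") come from that comment and may differ
# for other webdataset layouts.
features = Features(
    {
        "image": Image(),
        "json": {
            "id": Value("int32"),
            "width": Value("int32"),
            "height": Value("int32"),
            "rating": Sequence(Value("string")),
            "general_tags": Sequence(Value("string")),
            "character_tags": Sequence(Value("string")),
        },
    }
)

# With an explicit schema, empty tag lists in the first samples no longer
# get inferred as null and then fail to cast to string.
ds = load_dataset("webdataset", data_files="data/*.tar", features=features, split="train")
```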
3,035,568,851
| 7,548
|
Python 3.13t (free threads) Compat
|
open
| 2025-05-02T09:20:09
| 2025-05-12T15:11:32
| null |
https://github.com/huggingface/datasets/issues/7548
| null |
Qubitium
| false
|
[
"Update: `datasets` use `aiohttp` for data streaming and from what I understand data streaming is useful for large datasets that do not fit in memory and/or multi-modal datasets like image/audio where you only what the actual binary bits to fed in as needed. \n\nHowever, there are also many cases where aiohttp will never be used. Text datasets that are not huge, relative to machine spec, and non-multi-modal datasets. \n\nGetting `aiohttp` fixed for `free threading` appeals to be a large task that is not going to be get done in a quick manner. It may be faster to make `aiohttp` optional and not forced build. Otherwise, testing python 3.13t is going to be a painful install. \n\nI have created a fork/branch that temp disables aiohttp import so non-streaming usage of datasets can be tested under python 3.13.t:\n\nhttps://github.com/Qubitium/datasets/tree/disable-aiohttp-depend",
"We are mostly relying on `huggingface_hub` which uses `requests` to stream files from Hugging Face, so maybe we can move aiohttp to optional dependencies now. Would it solve your issue ? Btw what do you think of `datasets` in the free-threading setting ?",
"> We are mostly relying on `huggingface_hub` which uses `requests` to stream files from Hugging Face, so maybe we can move aiohttp to optional dependencies now. Would it solve your issue ? Btw what do you think of `datasets` in the free-threading setting ?\n\nI am testing transformers + dataset (simple text dataset usage) + GPTQModel for quantization and there were no issues encountered with python 3.13t but my test-case is the base-bare minimal test-case since dataset is not sharded, fully in-memory, text-only, small, not used for training. \n\nOn the technical side, dataset is almost always 100% read-only so there should be zero locking issues but I have not checked the dataset internals so there may be cases where streaming, sharding, and/or cases where datset memory/states are updated needs a per dataset `threading.lock`. \n\nSo yes, making `aiohttp` optional will definitely solve my issue. There is also a companion (datasets and tokenizers usually go hand-in-hand) issue with `Tokenizers` as well but that's simple enough with package version update: https://github.com/huggingface/tokenizers/pull/1774\n",
"Ok I see ! Anyway feel free to edit the setup.py to move aiohttp to optional (tests) dependencies and open a PR, we can run the CI to see if it's ok as a change",
"actually there is https://github.com/huggingface/datasets/pull/7294/ already, let's see if we can merge it",
"wouldn't it be the good reason to switch to `httpx`? 😄 (would require slightly more work, short term agree with https://github.com/huggingface/datasets/issues/7548#issuecomment-2854405923)",
"I made `aiohttp` optional in `datasets` 3.6.0 :)\n\n`datasets` doesn't use it directly anyway, it's only used when someone wants to download files from HTTP URLs outside of HF"
] |
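Following up on the resolution of issue 7548 (aiohttp made optional in `datasets` 3.6.0 and only needed for streaming plain HTTP(S) URLs outside the Hub), here is a small sketch of how downstream code can probe for it instead of requiring it; the dataset id is only a placeholder:

```python
import importlib.util

from datasets import load_dataset

# aiohttp became an optional dependency in datasets 3.6.0; per the thread above
# it is only used when streaming from HTTP URLs outside the Hugging Face Hub,
# so a free-threaded (3.13t) environment can simply leave it uninstalled.
HAS_AIOHTTP = importlib.util.find_spec("aiohttp") is not None

# Hub-hosted data streams over huggingface_hub/requests and works either way.
ds = load_dataset("allenai/c4", "en", split="train", streaming=True)

if not HAS_AIOHTTP:
    print("aiohttp not installed: skipping streaming from non-HF http(s):// URLs.")
```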
3,034,830,291
| 7,547
|
Avoid global umask for setting file mode.
|
closed
| 2025-05-01T22:24:24
| 2025-05-06T13:05:00
| 2025-05-06T13:05:00
|
https://github.com/huggingface/datasets/pull/7547
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/7547",
"html_url": "https://github.com/huggingface/datasets/pull/7547",
"diff_url": "https://github.com/huggingface/datasets/pull/7547.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/7547.patch",
"merged_at": "2025-05-06T13:05:00"
}
|
ryan-clancy
| true
|
[
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_7547). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update."
] |
3,034,018,298
| 7,546
|
Large memory use when loading large datasets to a ZFS pool
|
closed
| 2025-05-01T14:43:47
| 2025-05-13T13:30:09
| 2025-05-13T13:29:53
|
https://github.com/huggingface/datasets/issues/7546
| null |
FredHaa
| false
|
[
"Hi ! datasets are memory mapped from disk, so they don't fill out your RAM. Not sure what's the source of your memory issue.\n\nWhat kind of system are you using ? and what kind of disk ?",
"Well, the fact of the matter is that my RAM is getting filled out by running the given example, as shown in [this video](https://streamable.com/usb0ql).\n\nMy system is a GPU server running Ubuntu. The disk is a SATA SSD attached to the server using a backplane. It is formatted with ZFS, mounted in /cache, and my HF_HOME is set to /cache/hf\n\nI really need this fixed, so I am more than willing to test out various suggestions you might have, or write a PR if we can figure out what is going on.",
"I'm not super familiar with ZFS, but it looks like it loads the data in memory when the files are memory mapped, which is an issue.\n\nMaybe it's a caching mechanism ? Since `datasets` accesses every memory mapped file to read a small part (the metadata of the arrow record batches), maybe ZFS brings the whole files in memory for quicker subsequent reads. This is an antipattern when it comes to lazy loading datasets of that size though",
"This is the answer.\n\nI tried changing my HF_HOME to an NFS share, and no RAM is then consumed loading the dataset.\n\nI will try to see if I can find a way to configure the ZFS pool to not cache the files (disabling the ARC/primary cache didn't work), and if I do write the solution in this issue. If I can't I guess I have to reformat my cache drive."
] |
3,031,617,547
| 7,545
|
Networked Pull Through Cache
|
open
| 2025-04-30T15:16:33
| 2025-04-30T15:16:33
| null |
https://github.com/huggingface/datasets/issues/7545
| null |
wrmedford
| false
|
[] |