| id (int64) | number (int64) | title (string) | state (open/closed) | created_at (timestamp[s]) | updated_at (timestamp[s]) | closed_at (timestamp[s]) | html_url (string) | pull_request (dict) | user_login (string) | is_pull_request (bool) | comments (list) |
|---|---|---|---|---|---|---|---|---|---|---|---|
3,347,137,663
| 7,748
|
docs: Streaming best practices
|
open
| 2025-08-23T00:18:43
| 2025-08-23T00:18:43
| null |
https://github.com/huggingface/datasets/pull/7748
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/7748",
"html_url": "https://github.com/huggingface/datasets/pull/7748",
"diff_url": "https://github.com/huggingface/datasets/pull/7748.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/7748.patch",
"merged_at": null
}
|
Abdul-Omira
| true
|
[] |
3,347,098,038
| 7,747
|
Add wikipedia-2023-redirects dataset
|
open
| 2025-08-22T23:49:53
| 2025-08-22T23:49:53
| null |
https://github.com/huggingface/datasets/pull/7747
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/7747",
"html_url": "https://github.com/huggingface/datasets/pull/7747",
"diff_url": "https://github.com/huggingface/datasets/pull/7747.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/7747.patch",
"merged_at": null
}
|
Abdul-Omira
| true
|
[] |
3,345,391,211
| 7,746
|
Fix: Canonical 'multi_news' dataset is broken and should be updated to a Parquet version
|
open
| 2025-08-22T12:52:03
| 2025-08-23T12:34:39
| null |
https://github.com/huggingface/datasets/issues/7746
| null |
Awesome075
| false
|
[] |
3,345,286,773
| 7,745
|
Audio mono argument no longer supported, despite class documentation
|
open
| 2025-08-22T12:15:41
| 2025-08-22T12:15:41
| null |
https://github.com/huggingface/datasets/issues/7745
| null |
jheitz
| false
|
[] |
3,343,510,686
| 7,744
|
dtype: ClassLabel is not parsed correctly in `features.py`
|
open
| 2025-08-21T23:28:50
| 2025-08-21T23:28:50
| null |
https://github.com/huggingface/datasets/issues/7744
| null |
cmatKhan
| false
|
[] |
3,342,611,297
| 7,743
|
Refactor HDF5 and preserve tree structure
|
open
| 2025-08-21T17:28:17
| 2025-08-22T02:21:09
| null |
https://github.com/huggingface/datasets/pull/7743
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/7743",
"html_url": "https://github.com/huggingface/datasets/pull/7743",
"diff_url": "https://github.com/huggingface/datasets/pull/7743.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/7743.patch",
"merged_at": null
}
|
klamike
| true
|
[] |
3,336,704,928
| 7,742
|
module 'pyarrow' has no attribute 'PyExtensionType'
|
open
| 2025-08-20T06:14:33
| 2025-08-20T06:23:47
| null |
https://github.com/huggingface/datasets/issues/7742
| null |
mnedelko
| false
|
[
"Just checked out the files and thishad already been addressed"
] |
3,334,848,656
| 7,741
|
Preserve tree structure when loading HDF5
|
open
| 2025-08-19T15:42:05
| 2025-08-22T00:41:46
| null |
https://github.com/huggingface/datasets/issues/7741
| null |
klamike
| false
|
[] |
3,334,693,293
| 7,740
|
Document HDF5 support
|
open
| 2025-08-19T14:53:04
| 2025-08-21T19:56:58
| null |
https://github.com/huggingface/datasets/pull/7740
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/7740",
"html_url": "https://github.com/huggingface/datasets/pull/7740",
"diff_url": "https://github.com/huggingface/datasets/pull/7740.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/7740.patch",
"merged_at": null
}
|
klamike
| true
|
[] |
3,331,537,762
| 7,739
|
Replacement of "Sequence" feature with "List" breaks backward compatibility
|
open
| 2025-08-18T17:28:38
| 2025-08-18T17:28:38
| null |
https://github.com/huggingface/datasets/issues/7739
| null |
evmaki
| false
|
[] |
3,328,948,690
| 7,738
|
Allow saving multi-dimensional ndarray with dynamic shapes
|
open
| 2025-08-18T02:23:51
| 2025-08-22T03:15:19
| null |
https://github.com/huggingface/datasets/issues/7738
| null |
ryan-minato
| false
|
[
"I agree this would be super valuable.\n\nIt looks like this was discussed a few years ago in https://github.com/huggingface/datasets/issues/5272#issuecomment-1550200824 but there were some issues. Those PRs are merged now and it looks like Arrow [officially supports](https://arrow.apache.org/docs/format/CanonicalExtensions.html#variable-shape-tensor) this so it's a good time to re-evaluate!"
] |
3,318,670,801
| 7,737
|
docs: Add column overwrite example to batch mapping guide
|
open
| 2025-08-13T14:20:19
| 2025-08-13T14:20:19
| null |
https://github.com/huggingface/datasets/pull/7737
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/7737",
"html_url": "https://github.com/huggingface/datasets/pull/7737",
"diff_url": "https://github.com/huggingface/datasets/pull/7737.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/7737.patch",
"merged_at": null
}
|
Sanjaykumar030
| true
|
[] |
3,311,618,096
| 7,736
|
Fix type hint `train_test_split`
|
closed
| 2025-08-11T20:46:53
| 2025-08-13T13:13:50
| 2025-08-13T13:13:48
|
https://github.com/huggingface/datasets/pull/7736
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/7736",
"html_url": "https://github.com/huggingface/datasets/pull/7736",
"diff_url": "https://github.com/huggingface/datasets/pull/7736.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/7736.patch",
"merged_at": "2025-08-13T13:13:48"
}
|
qgallouedec
| true
|
[
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_7736). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update."
] |
3,310,514,828
| 7,735
|
fix largelist repr
|
closed
| 2025-08-11T15:17:42
| 2025-08-11T15:39:56
| 2025-08-11T15:39:54
|
https://github.com/huggingface/datasets/pull/7735
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/7735",
"html_url": "https://github.com/huggingface/datasets/pull/7735",
"diff_url": "https://github.com/huggingface/datasets/pull/7735.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/7735.patch",
"merged_at": "2025-08-11T15:39:54"
}
|
lhoestq
| true
|
[
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_7735). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update."
] |
3,306,519,239
| 7,734
|
Fixing __getitem__ of datasets which behaves inconsistent to documentation when setting _format_type to None
|
closed
| 2025-08-09T15:52:54
| 2025-08-17T07:23:00
| 2025-08-17T07:23:00
|
https://github.com/huggingface/datasets/pull/7734
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/7734",
"html_url": "https://github.com/huggingface/datasets/pull/7734",
"diff_url": "https://github.com/huggingface/datasets/pull/7734.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/7734.patch",
"merged_at": null
}
|
awagen
| true
|
[
"this breaking change is actually expected, happy to help with a fix in sentencetransformers to account for this",
"Thank you for the context. I thought this was a mismatch do the documentation. Good to know it was intentional. No worries, can add a PR to sentence transformers."
] |
3,304,979,299
| 7,733
|
Dataset Repo Paths to Locally Stored Images Not Being Appended to Image Path
|
open
| 2025-08-08T19:10:58
| 2025-08-12T00:54:58
| null |
https://github.com/huggingface/datasets/issues/7733
| null |
dennys246
| false
|
[
"This is the download issues I come into, about ever other time it fails...\n<img width=\"1719\" height=\"1226\" alt=\"Image\" src=\"https://github.com/user-attachments/assets/2e5b4b3e-7c13-4bad-a77c-34b47a932831\" />"
] |
3,304,673,383
| 7,732
|
webdataset: key errors when `field_name` has upper case characters
|
open
| 2025-08-08T16:56:42
| 2025-08-08T16:56:42
| null |
https://github.com/huggingface/datasets/issues/7732
| null |
YassineYousfi
| false
|
[] |
3,303,637,075
| 7,731
|
Add the possibility of a backend for audio decoding
|
open
| 2025-08-08T11:08:56
| 2025-08-20T16:29:33
| null |
https://github.com/huggingface/datasets/issues/7731
| null |
intexcor
| false
|
[
"is there a work around im stuck",
"never mind just downgraded"
] |
3,301,907,242
| 7,730
|
Grammar fix: correct "showed" to "shown" in fingerprint.py
|
closed
| 2025-08-07T21:22:56
| 2025-08-13T18:34:30
| 2025-08-13T13:12:56
|
https://github.com/huggingface/datasets/pull/7730
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/7730",
"html_url": "https://github.com/huggingface/datasets/pull/7730",
"diff_url": "https://github.com/huggingface/datasets/pull/7730.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/7730.patch",
"merged_at": "2025-08-13T13:12:56"
}
|
brchristian
| true
|
[] |
3,300,672,954
| 7,729
|
OSError: libcudart.so.11.0: cannot open shared object file: No such file or directory
|
open
| 2025-08-07T14:07:23
| 2025-08-07T14:07:23
| null |
https://github.com/huggingface/datasets/issues/7729
| null |
SaleemMalikAI
| false
|
[] |
3,298,854,904
| 7,728
|
NonMatchingSplitsSizesError and ExpectedMoreSplitsError
|
open
| 2025-08-07T04:04:50
| 2025-08-07T07:31:47
| null |
https://github.com/huggingface/datasets/issues/7728
| null |
efsotr
| false
|
[] |
3,295,718,578
| 7,727
|
config paths that start with ./ are not valid as hf:// accessed repos, but are valid when accessed locally
|
open
| 2025-08-06T08:21:37
| 2025-08-06T08:21:37
| null |
https://github.com/huggingface/datasets/issues/7727
| null |
doctorpangloss
| false
|
[] |
3,293,789,832
| 7,726
|
fix(webdataset): don't .lower() field_name
|
closed
| 2025-08-05T16:57:09
| 2025-08-20T16:35:55
| 2025-08-20T16:35:55
|
https://github.com/huggingface/datasets/pull/7726
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/7726",
"html_url": "https://github.com/huggingface/datasets/pull/7726",
"diff_url": "https://github.com/huggingface/datasets/pull/7726.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/7726.patch",
"merged_at": "2025-08-20T16:35:55"
}
|
YassineYousfi
| true
|
[
"fixes: https://github.com/huggingface/datasets/issues/7732",
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_7726). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.",
"CI failures are unrelated, merging :)"
] |
3,292,315,241
| 7,724
|
Can not stepinto load_dataset.py?
|
open
| 2025-08-05T09:28:51
| 2025-08-05T09:28:51
| null |
https://github.com/huggingface/datasets/issues/7724
| null |
micklexqg
| false
|
[] |
3,289,943,261
| 7,723
|
Don't remove `trust_remote_code` arg!!!
|
open
| 2025-08-04T15:42:07
| 2025-08-04T15:42:07
| null |
https://github.com/huggingface/datasets/issues/7723
| null |
autosquid
| false
|
[] |
3,289,741,064
| 7,722
|
Out of memory even though using load_dataset(..., streaming=True)
|
open
| 2025-08-04T14:41:55
| 2025-08-04T14:41:55
| null |
https://github.com/huggingface/datasets/issues/7722
| null |
padmalcom
| false
|
[] |
3,289,426,104
| 7,721
|
Bad split error message when using percentages
|
open
| 2025-08-04T13:20:25
| 2025-08-14T14:42:24
| null |
https://github.com/huggingface/datasets/issues/7721
| null |
padmalcom
| false
|
[
"I'd like to work on this: add clearer validation/messages for percent-based splits + tests",
"The most basic example is this code:\n`load_dataset(\"openslr/librispeech_asr\", split=\"train[10%:20%]\")`\n\nThis results in this ValueError:\n```\n raise ValueError(f'Unknown split \"{split}\". Should be one of {list(name2len)}.')\nValueError: Unknown split \"train\". Should be one of ['test.clean', 'test.other', 'train.clean.100', 'train.clean.360', 'train.other.500', 'validation.clean', 'validation.other'].\n```\n"
] |
3,287,150,513
| 7,720
|
Datasets 4.0 map function causing column not found
|
open
| 2025-08-03T12:52:34
| 2025-08-07T19:23:34
| null |
https://github.com/huggingface/datasets/issues/7720
| null |
Darejkal
| false
|
[
"Hi, I tried to reproduce this issue on the latest `main` branch but it seems to be working correctly now. My test script (which creates a dummy dataset and applies the `.map()` function) successfully creates and accesses the new column without a `KeyError`.\n\nIt's possible this was fixed by a recent commit. The maintainers might want to consider closing this issue.",
"Hi, have you tried on a large dataset (200GB+) perhaps? I will try my best to do a rerun with main branch when I have the time.",
"I ran it on a small dataset, maybe that’s why I didn’t hit the issue. If it still shows up on your side with the latest main, let me know. I can try it on a bigger set too."
] |
3,285,928,491
| 7,719
|
Specify dataset columns types in typehint
|
open
| 2025-08-02T13:22:31
| 2025-08-02T13:22:31
| null |
https://github.com/huggingface/datasets/issues/7719
| null |
Samoed
| false
|
[] |
3,284,221,177
| 7,718
|
add support for pyarrow string view in features
|
open
| 2025-08-01T14:58:39
| 2025-08-13T13:09:44
| null |
https://github.com/huggingface/datasets/pull/7718
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/7718",
"html_url": "https://github.com/huggingface/datasets/pull/7718",
"diff_url": "https://github.com/huggingface/datasets/pull/7718.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/7718.patch",
"merged_at": null
}
|
onursatici
| true
|
[
"@lhoestq who do you think would be the best to have a look at this? Any pointers would be appreciated, thanks!"
] |
3,282,855,127
| 7,717
|
Cached dataset is not used when explicitly passing the cache_dir parameter
|
open
| 2025-08-01T07:12:41
| 2025-08-05T19:19:36
| null |
https://github.com/huggingface/datasets/issues/7717
| null |
padmalcom
| false
|
[
"Hi, I've investigated this issue and can confirm the bug. Here are my findings:\n\n**1. Reproduction:**\nI was able to reproduce the issue on the latest `main` branch. Using the provided code snippet, `snapshot_download` correctly populates the custom `cache_dir`, but `load_dataset` with the same `cache_dir` triggers a full re-download and re-processing of the dataset, ignoring the existing cache.\n\n**2. Investigation:**\nI traced the `cache_dir` parameter from `load_dataset` down to the `DatasetBuilder` class in `src/datasets/builder.py`. The root cause seems to be a mismatch between the cache path structure created by `snapshot_download` and the path structure expected by the `DatasetBuilder`.\n\nSpecifically, the `_relative_data_dir` method in `DatasetBuilder` constructs a path using `namespace___dataset_name` (with three underscores), while the cache from `snapshot_download` appears to use a `repo_id` based format like `datasets--namespace--dataset_name` (with double hyphens).\n\n**3. Attempted Fix & Result:**\nI attempted a fix by modifying the `_relative_data_dir` method to replace the path separator \"/\" in `self.repo_id` with \"--\", to align it with the `snapshot_download` structure.\n\nThis partially worked: `load_dataset` no longer re-downloads the files. However, it still re-processes them every time (triggering \"Generating train split...\", etc.) instead of loading the already processed Arrow files from the cache.\n\nThis suggests the issue is deeper than just the directory name and might be related to how the builder verifies the integrity or presence of the processed cache files.\n\nI hope these findings are helpful for whoever picks up this issue."
] |
3,281,204,362
| 7,716
|
typo
|
closed
| 2025-07-31T17:14:45
| 2025-07-31T17:17:15
| 2025-07-31T17:14:51
|
https://github.com/huggingface/datasets/pull/7716
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/7716",
"html_url": "https://github.com/huggingface/datasets/pull/7716",
"diff_url": "https://github.com/huggingface/datasets/pull/7716.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/7716.patch",
"merged_at": "2025-07-31T17:14:51"
}
|
lhoestq
| true
|
[
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_7716). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update."
] |
3,281,189,955
| 7,715
|
Docs: Use Image(mode="F") for PNG/JPEG depth maps
|
closed
| 2025-07-31T17:09:49
| 2025-07-31T17:12:23
| 2025-07-31T17:10:10
|
https://github.com/huggingface/datasets/pull/7715
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/7715",
"html_url": "https://github.com/huggingface/datasets/pull/7715",
"diff_url": "https://github.com/huggingface/datasets/pull/7715.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/7715.patch",
"merged_at": "2025-07-31T17:10:10"
}
|
lhoestq
| true
|
[
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_7715). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update."
] |
3,281,090,499
| 7,714
|
fix num_proc=1 ci test
|
closed
| 2025-07-31T16:36:32
| 2025-07-31T16:39:03
| 2025-07-31T16:38:03
|
https://github.com/huggingface/datasets/pull/7714
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/7714",
"html_url": "https://github.com/huggingface/datasets/pull/7714",
"diff_url": "https://github.com/huggingface/datasets/pull/7714.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/7714.patch",
"merged_at": "2025-07-31T16:38:03"
}
|
lhoestq
| true
|
[
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_7714). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update."
] |
3,280,813,699
| 7,713
|
Update cli.mdx to refer to the new "hf" CLI
|
closed
| 2025-07-31T15:06:11
| 2025-07-31T16:37:56
| 2025-07-31T16:37:55
|
https://github.com/huggingface/datasets/pull/7713
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/7713",
"html_url": "https://github.com/huggingface/datasets/pull/7713",
"diff_url": "https://github.com/huggingface/datasets/pull/7713.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/7713.patch",
"merged_at": "2025-07-31T16:37:55"
}
|
evalstate
| true
|
[
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_7713). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update."
] |
3,280,706,762
| 7,712
|
Retry intermediate commits too
|
closed
| 2025-07-31T14:33:33
| 2025-07-31T14:37:43
| 2025-07-31T14:36:43
|
https://github.com/huggingface/datasets/pull/7712
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/7712",
"html_url": "https://github.com/huggingface/datasets/pull/7712",
"diff_url": "https://github.com/huggingface/datasets/pull/7712.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/7712.patch",
"merged_at": "2025-07-31T14:36:43"
}
|
lhoestq
| true
|
[
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_7712). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update."
] |
3,280,471,353
| 7,711
|
Update dataset_dict push_to_hub
|
closed
| 2025-07-31T13:25:03
| 2025-07-31T14:18:55
| 2025-07-31T14:18:53
|
https://github.com/huggingface/datasets/pull/7711
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/7711",
"html_url": "https://github.com/huggingface/datasets/pull/7711",
"diff_url": "https://github.com/huggingface/datasets/pull/7711.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/7711.patch",
"merged_at": "2025-07-31T14:18:53"
}
|
lhoestq
| true
|
[
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_7711). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update."
] |
3,279,878,230
| 7,710
|
Concurrent IterableDataset push_to_hub
|
closed
| 2025-07-31T10:11:31
| 2025-07-31T10:14:00
| 2025-07-31T10:12:52
|
https://github.com/huggingface/datasets/pull/7710
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/7710",
"html_url": "https://github.com/huggingface/datasets/pull/7710",
"diff_url": "https://github.com/huggingface/datasets/pull/7710.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/7710.patch",
"merged_at": "2025-07-31T10:12:52"
}
|
lhoestq
| true
|
[
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_7710). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update."
] |
3,276,677,990
| 7,709
|
Release 4.0.0 breaks usage patterns of with_format
|
closed
| 2025-07-30T11:34:53
| 2025-08-07T08:27:18
| 2025-08-07T08:27:18
|
https://github.com/huggingface/datasets/issues/7709
| null |
wittenator
| false
|
[
"This is a breaking change with 4.0 which introduced `Column` objects. To get the numpy array from a `Column` you can `col[i]`, `col[i:j]` or even `col[:]` if you want the full column as a numpy array:\n\n```python\nfrom datasets import load_dataset\ndataset = load_dataset(...)\ndataset = dataset.with_format(\"numpy\")\nprint(dataset[\"star\"][:].ndim)\n```",
"Ah perfect, thanks for clearing this up. I would close this ticket then."
] |
3,273,614,584
| 7,708
|
Concurrent push_to_hub
|
closed
| 2025-07-29T13:14:30
| 2025-07-31T10:00:50
| 2025-07-31T10:00:49
|
https://github.com/huggingface/datasets/pull/7708
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/7708",
"html_url": "https://github.com/huggingface/datasets/pull/7708",
"diff_url": "https://github.com/huggingface/datasets/pull/7708.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/7708.patch",
"merged_at": "2025-07-31T10:00:49"
}
|
lhoestq
| true
|
[
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_7708). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update."
] |
3,271,867,998
| 7,707
|
load_dataset() in 4.0.0 failed when decoding audio
|
closed
| 2025-07-29T03:25:03
| 2025-08-01T05:15:45
| 2025-08-01T05:15:45
|
https://github.com/huggingface/datasets/issues/7707
| null |
jiqing-feng
| false
|
[
"Hi @lhoestq . Would you please have a look at it? I use the official NV Docker ([NV official docker image](https://catalog.ngc.nvidia.com/orgs/nvidia/containers/pytorch): `nvcr.io/nvidia/pytorch:25.06-py3`) on A100 and encountered this issue, but I don't know how to fix it.",
"Use !pip install -U datasets[audio] rather than !pip install datasets\n\nI got the solution from this link [https://github.com/huggingface/datasets/issues/7678](https://github.com/huggingface/datasets/issues/7678), and it processes the data; however, it led to certain transformer importnerrors",
"> https://github.com/huggingface/datasets/issues/7678\n\nHi @asantewaa-bremang . Thanks for your reply, but sadly it does not work for me.",
"It looks like a torchcodec issue, have you tried to look at the torchcodec issues here in case someone has the same issue ? https://github.com/pytorch/torchcodec/issues\n\notherwise feel free to open a new issue there",
"@jiqing-feng, are you running the code on Colab? If you are, you should restart after making this installation ! pip install -U datasets[audio]. ",
"> [@jiqing-feng](https://github.com/jiqing-feng), are you running the code on Colab? If you are, you should restart after making this installation ! pip install -U datasets[audio].\n\nNo, I ran the script on the A100 instance locally.",
"> It looks like a torchcodec issue, have you tried to look at the torchcodec issues here in case someone has the same issue ? https://github.com/pytorch/torchcodec/issues\n> \n> otherwise feel free to open a new issue there\n\nThanks! I've opened a new issue on torchcodec. Could we have a fallback implementation without torchcodec (just like datasets==3.6.0) ?",
"> Thanks! I've opened a new issue on torchcodec. Could we have a fallback implementation without torchcodec (just like datasets==3.6.0) ?\n\nFor now I'd recommend using `datasets==3.6.0` if this issue is blocking for you",
"Resolved by installing the pre-release torchcodec. Thanks!"
] |
3,271,129,240
| 7,706
|
Reimplemented partial split download support (revival of #6832)
|
open
| 2025-07-28T19:40:40
| 2025-07-29T09:25:12
| null |
https://github.com/huggingface/datasets/pull/7706
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/7706",
"html_url": "https://github.com/huggingface/datasets/pull/7706",
"diff_url": "https://github.com/huggingface/datasets/pull/7706.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/7706.patch",
"merged_at": null
}
|
ArjunJagdale
| true
|
[
" Mario’s Patch (in PR #6832):\r\n```\r\ndef _make_split_generators_kwargs(self, prepare_split_kwargs):\r\n # Pass `pipeline` into `_split_generators()` from `prepare_split_kwargs` if\r\n # it's in the call signature of `_split_generators()`.\r\n # This allows for global preprocessing in beam.\r\n split_generators_kwargs = {}\r\n if \"pipeline\" in inspect.signature(self._split_generators).parameters:\r\n split_generators_kwargs[\"pipeline\"] = prepare_split_kwargs[\"pipeline\"]\r\n split_generators_kwargs.update(super()._make_split_generators_kwargs(prepare_split_kwargs))\r\n return split_generators_kwargs\r\n```\r\n\r\nIn the latest main(in my fork and og repo's main):\r\n```\r\ndef _make_split_generators_kwargs(self, prepare_split_kwargs):\r\n \"\"\"Get kwargs for `self._split_generators()` from `prepare_split_kwargs`.\"\"\"\r\n splits = prepare_split_kwargs.pop(\"splits\", None)\r\n if self._supports_partial_generation():\r\n return {\"splits\": splits}\r\n return {}\r\n```\r\nIt enables passing splits into _split_generators() only for builders that support it(if i am not wrong..). So ignored Beam logic for now!"
] |
3,269,070,499
| 7,705
|
Can Not read installed dataset in dataset.load(.)
|
open
| 2025-07-28T09:43:54
| 2025-08-05T01:24:32
| null |
https://github.com/huggingface/datasets/issues/7705
| null |
HuangChiEn
| false
|
[
"You can download the dataset locally using [huggingface_hub.snapshot_download](https://huggingface.co/docs/huggingface_hub/v0.34.3/en/package_reference/file_download#huggingface_hub.snapshot_download) and then do\n\n```python\ndataset = load_dataset(local_directory_path)\n```",
"> You can download the dataset locally using [huggingface_hub.snapshot_download](https://huggingface.co/docs/huggingface_hub/v0.34.3/en/package_reference/file_download#huggingface_hub.snapshot_download) and then do\n> \n> dataset = load_dataset(local_directory_path)\n\nIt's good suggestion, but my server env is network restriction. It can not directly fetch data from huggingface. I spent lot of time to download and transfer it to the server.\nSo, I attempt to make load_dataset connect to my local dataset. ",
"Just Solved it few day before. Will post solution later...\nalso thanks folks quick reply.."
] |
3,265,730,177
| 7,704
|
Fix map() example in datasets documentation: define tokenizer before use
|
closed
| 2025-07-26T14:18:17
| 2025-08-13T13:23:18
| 2025-08-13T13:06:37
|
https://github.com/huggingface/datasets/pull/7704
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/7704",
"html_url": "https://github.com/huggingface/datasets/pull/7704",
"diff_url": "https://github.com/huggingface/datasets/pull/7704.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/7704.patch",
"merged_at": null
}
|
Sanjaykumar030
| true
|
[
"Hi @lhoestq, just a gentle follow-up on this doc fix PR (#7704). Let me know if any changes are needed — happy to update.\r\nHope this improvement helps users run the example without confusion!",
"the modified file is the readme of the docs, not about map() specifically"
] |
3,265,648,942
| 7,703
|
[Docs] map() example uses undefined `tokenizer` — causes NameError
|
open
| 2025-07-26T13:35:11
| 2025-07-27T09:44:35
| null |
https://github.com/huggingface/datasets/issues/7703
| null |
Sanjaykumar030
| false
|
[
"I've submitted PR #7704 which adds documentation to clarify the behavior of `map()` when returning `None`."
] |
3,265,328,549
| 7,702
|
num_proc=0 behave like None, num_proc=1 uses one worker (not main process) and clarify num_proc documentation
|
closed
| 2025-07-26T08:19:39
| 2025-07-31T14:52:33
| 2025-07-31T14:52:33
|
https://github.com/huggingface/datasets/pull/7702
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/7702",
"html_url": "https://github.com/huggingface/datasets/pull/7702",
"diff_url": "https://github.com/huggingface/datasets/pull/7702.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/7702.patch",
"merged_at": "2025-07-31T14:52:33"
}
|
tanuj-rai
| true
|
[
"I think we can support num_proc=0 and make it equivalent to `None` to make it simpler",
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_7702). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.",
"> I think we can support num_proc=0 and make it equivalent to `None` to make it simpler\r\n\r\nThank you @lhoestq for reviewing it. Please let me know if anything needs to be updated further."
] |
3,265,236,296
| 7,701
|
Update fsspec max version to current release 2025.7.0
|
closed
| 2025-07-26T06:47:59
| 2025-08-13T17:32:07
| 2025-07-28T11:58:11
|
https://github.com/huggingface/datasets/pull/7701
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/7701",
"html_url": "https://github.com/huggingface/datasets/pull/7701",
"diff_url": "https://github.com/huggingface/datasets/pull/7701.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/7701.patch",
"merged_at": "2025-07-28T11:58:11"
}
|
rootAvish
| true
|
[
"@lhoestq I ran the test suite locally and while some tests were failing those failures are present on the main branch too. Could you please review and trigger the CI?",
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_7701). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.",
"Which release will this be available in ? I'm running into this issue with `datasets=3.6.0`"
] |
3,263,922,255
| 7,700
|
[doc] map.num_proc needs clarification
|
open
| 2025-07-25T17:35:09
| 2025-07-25T17:39:36
| null |
https://github.com/huggingface/datasets/issues/7700
| null |
sfc-gh-sbekman
| false
|
[] |
3,261,053,171
| 7,699
|
Broken link in documentation for "Create a video dataset"
|
open
| 2025-07-24T19:46:28
| 2025-07-25T15:27:47
| null |
https://github.com/huggingface/datasets/issues/7699
| null |
cleong110
| false
|
[
"The URL is ok but it seems the webdataset website is down. There seems to be a related issue here: https://github.com/webdataset/webdataset/issues/155\n\nFeel free to ask the authors there for an update. Otherwise happy to witch the link to the mirror shared in that issue"
] |
3,255,350,916
| 7,698
|
NotImplementedError when using streaming=True in Google Colab environment
|
open
| 2025-07-23T08:04:53
| 2025-07-23T15:06:23
| null |
https://github.com/huggingface/datasets/issues/7698
| null |
Aniket17200
| false
|
[
"Hi, @Aniket17200, try upgrading datasets using '!pip install -U datasets'. I hope this will resolve your issue.",
"Thank you @tanuj-rai, it's working great "
] |
3,254,526,399
| 7,697
|
-
|
closed
| 2025-07-23T01:30:32
| 2025-07-25T15:21:39
| 2025-07-25T15:21:39
|
https://github.com/huggingface/datasets/issues/7697
| null |
ghost
| false
|
[] |
3,253,433,350
| 7,696
|
load_dataset() in 4.0.0 returns different audio samples compared to earlier versions breaking reproducibility
|
closed
| 2025-07-22T17:02:17
| 2025-07-30T14:22:21
| 2025-07-30T14:22:21
|
https://github.com/huggingface/datasets/issues/7696
| null |
Manalelaidouni
| false
|
[
"Hi ! This is because `datasets` now uses the FFmpeg-based library `torchcodec` instead of the libsndfile-based library `soundfile` to decode audio data. Those two have different decoding implementations",
"I’m all for torchcodec, good luck with the migration!"
] |
3,251,904,843
| 7,695
|
Support downloading specific splits in load_dataset
|
closed
| 2025-07-22T09:33:54
| 2025-07-28T17:33:30
| 2025-07-28T17:15:45
|
https://github.com/huggingface/datasets/pull/7695
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/7695",
"html_url": "https://github.com/huggingface/datasets/pull/7695",
"diff_url": "https://github.com/huggingface/datasets/pull/7695.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/7695.patch",
"merged_at": null
}
|
ArjunJagdale
| true
|
[
"I’ve completed the following steps to continue the partial split download support (from PR #6832):\r\n\r\nI did changes on top of what has been done by mario. Here are some of those changes: \r\n- Restored support for writing multiple split shards:\r\n\r\n- In _prepare_split_single, we now correctly replace JJJJJ and SSSSS placeholders in the fpath for job/shard IDs before creating the writer.\r\n\r\n- Added os.makedirs(os.path.dirname(path), exist_ok=True) after placeholder substitution to prevent FileNotFoundError.\r\n\r\n- Applied the fix to both split writers:\r\n\r\n 1] self._generate_examples version (used by most modules).\r\n\r\n 2] self._generate_tables version (used by IterableDatasetBuilder).\r\n\r\n- Confirmed 109/113 tests passing, meaning the general logic is working across the board.\r\n\r\nWhat’s still failing\r\n4 integration tests fail:\r\n\r\n`test_load_hub_dataset_with_single_config_in_metadata`\r\n\r\n`test_load_hub_dataset_with_two_config_in_metadata`\r\n\r\n`test_load_hub_dataset_with_metadata_config_in_parallel`\r\n\r\n`test_reload_old_cache_from_2_15`\r\n\r\nAll are due to FileNotFoundError from uncreated output paths, which I'm currently finalizing by ensuring os.makedirs() is correctly applied before every writer instantiation.\r\n\r\nI will update about these fixes after running tests!",
"@lhoestq this was just an update",
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_7695). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.",
"Local DIR wasn't doing well, dk actually what happened, will PR again! Sorry :)"
] |
3,247,600,408
| 7,694
|
Dataset.to_json consumes excessive memory, appears to not be a streaming operation
|
open
| 2025-07-21T07:51:25
| 2025-07-25T14:42:21
| null |
https://github.com/huggingface/datasets/issues/7694
| null |
ycq0125
| false
|
[
"Hi ! to_json is memory efficient and writes the data by batch:\n\nhttps://github.com/huggingface/datasets/blob/d9861d86be222884dabbd534a2db770c70c9b558/src/datasets/io/json.py#L153-L159\n\nWhat memory are you mesuring ? If you are mesuring RSS, it is likely that it counts the memory mapped data of the dataset. Memory mapped data are loaded as physical memory when accessed and are automatically discarded when your OS needs more memory, and therefore doesn't OOM."
] |
3,246,369,678
| 7,693
|
Dataset scripts are no longer supported, but found superb.py
|
open
| 2025-07-20T13:48:06
| 2025-08-20T16:26:23
| null |
https://github.com/huggingface/datasets/issues/7693
| null |
edwinzajac
| false
|
[
"I got a pretty similar issue when I try to load bigbio/neurotrial_ner dataset. \n`Dataset scripts are no longer supported, but found neurotrial_ner.py`",
"Same here. I was running this tutorial and got a similar error: https://github.com/openai/whisper/discussions/654 (I'm a first-time transformers library user)\n\nRuntimeError: Dataset scripts are no longer supported, but found librispeech_asr.py\n\nWhat am I supposed to do at this point?\n\nThanks",
"hey I got the same error and I have tried to downgrade version to 3.6.0 and it works.\n`pip install datasets==3.6.0`",
"Thank you very much @Tin-viAct . That indeed did the trick for me :) \nNow the code continue its normal flow ",
"Thanks @Tin-viAct, Works!",
"I converted [openslr/librispeech_asr](https://huggingface.co/datasets/openslr/librispeech_asr) to Parquet - thanks for reporting.\n\nIt's now compatible with `datasets` 4.0 !\n\nI'll try to ping the authors of the other datasets like [s3prl/superb](https://huggingface.co/datasets/s3prl/superb) and [espnet/yodas2](https://huggingface.co/datasets/espnet/yodas2)",
"How come a breaking change was allowed and now requires extra work from individual authors for things to be usable? \n\nhttps://en.wikipedia.org/wiki/Backward_compatibility",
"We follow semantic versioning so that breaking changes only occur in major releases. Also note that dataset scripts have been legacy for some time now, with a message on the dataset pages to ask authors to update their datasets.\n\nIt's ok to ping older versions of `datasets`, but imo a few remaining datasets need to be converted since they are valuable to the community.",
"I was facing the same issue with a not so familiar dataset in hugging hub . downgrading the datasets version worked ❤️. Thank you @Tin-viAct .",
"Thank you so much, @Tin-viAct ! I’ve been struggling with this issue for about 3 hours, and your suggestion to downgrade datasets worked perfectly. I really appreciate the help—you saved me!",
"> hey I got the same error and I have tried to downgrade version to 3.6.0 and it works. `pip install datasets==3.6.0`\n\nThank you so much! I was following the [quickstart](https://huggingface.co/docs/datasets/quickstart) and the very first sample fails. Not a good way to get started....",
"> hey I got the same error and I have tried to downgrade version to 3.6.0 and it works. `pip install datasets==3.6.0`\nthank you! I get it.\n",
"I updated `hotpot_qa` and pinged the PolyAI folks to update the dataset used in the quickstart as well: https://huggingface.co/datasets/PolyAI/minds14/discussions/35\nedit: merged !",
"[LegalBench](https://huggingface.co/datasets/nguha/legalbench) is downloaded 10k times a month and is now broken. Would be great to have this fixed.",
"I opened a PR to convert LegalBench to Parquet and reached out to the author: https://huggingface.co/datasets/nguha/legalbench/discussions/34",
"Thank you very much @Tin-viAct! I’d been looking everywhere for a fix, and your reply saved me :)"
] |
3,246,268,635
| 7,692
|
xopen: invalid start byte for streaming dataset with trust_remote_code=True
|
open
| 2025-07-20T11:08:20
| 2025-07-25T14:38:54
| null |
https://github.com/huggingface/datasets/issues/7692
| null |
sedol1339
| false
|
[
"Hi ! it would be cool to convert this dataset to Parquet. This will make it work for `datasets>=4.0`, enable the Dataset Viewer and make it more reliable to load/stream (currently it uses a loading script in python and those are known for having issues sometimes)\n\nusing `datasets==3.6.0`, here is the command to convert it and open a Pull Request:\n\n```\ndatasets-cli convert_to_parquet espnet/yodas2 --trust_remote_code\n```\n\nThough it's likely that the `UnicodeDecodeError` comes from the loading script. If the script has a bug, it must be fixed to be able to convert the dataset without errors"
] |
3,245,547,170
| 7,691
|
Large WebDataset: pyarrow.lib.ArrowCapacityError on load() even with streaming
|
open
| 2025-07-19T18:40:27
| 2025-07-25T08:51:10
| null |
https://github.com/huggingface/datasets/issues/7691
| null |
cleong110
| false
|
[
"It seems the error occurs right here, as it tries to infer the Features: https://github.com/huggingface/datasets/blob/main/src/datasets/packaged_modules/webdataset/webdataset.py#L78-L90",
"It seems to me that if we have something that is so large that it cannot fit in pa.table, the fallback method should be to just set it as \"binary\" type, perhaps?",
"I also tried creating a dataset_info.json but the webdataset builder didn't seem to look for it and load it",
"Workaround on my end, removed all videos larger than 2GB for now. The dataset no longer crashes.",
"Potential patch to webdataset.py could be like so: \n```python\nLARGE_THRESHOLD = 2 * 1024 * 1024 * 1024 # 2 GB\nlarge_fields = set()\n\n# Replace large binary fields with None for schema inference\nprocessed_examples = []\nfor example in first_examples:\n new_example = {}\n for k, v in example.items():\n if isinstance(v, bytes) and len(v) > LARGE_THRESHOLD:\n large_fields.add(k)\n new_example[k] = None # Replace with None to avoid Arrow errors\n else:\n new_example[k] = v\n processed_examples.append(new_example)\n\n# Proceed to infer schema\npa_tables = [\n pa.Table.from_pylist(cast_to_python_objects([example], only_1d_for_numpy=True))\n for example in processed_examples\n]\ninferred_arrow_schema = pa.concat_tables(pa_tables, promote_options=\"default\").schema\n\n# Patch features to reflect large_binary\nfeatures = datasets.Features.from_arrow_schema(inferred_arrow_schema)\nfor field in large_fields:\n features[field] = datasets.Value(\"large_binary\")\n\n```"
] |
3,244,380,691
| 7,690
|
HDF5 support
|
closed
| 2025-07-18T21:09:41
| 2025-08-19T15:18:58
| 2025-08-19T13:28:53
|
https://github.com/huggingface/datasets/pull/7690
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/7690",
"html_url": "https://github.com/huggingface/datasets/pull/7690",
"diff_url": "https://github.com/huggingface/datasets/pull/7690.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/7690.patch",
"merged_at": "2025-08-19T13:28:53"
}
|
klamike
| true
|
[
"A few to-dos which I think can be left for future PRs (which I am happy to do/help with -- just this one is already huge 😄 ):\r\n- [Enum types](https://docs.h5py.org/en/stable/special.html#enumerated-types)\r\n- HDF5 [io](https://github.com/huggingface/datasets/tree/main/src/datasets/io)\r\n- [dataset-viewer](https://github.com/huggingface/dataset-viewer) support (not sure if changes are needed with the way it is written now)",
"@lhoestq any interest in merging this? Let me know if I can do anything to make reviewing it easier!",
"Sorry for the delay, I'll review your PR soon :)",
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_7690). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.",
"Thanks for the review @lhoestq! Rebased on main and incorporated most of your suggestions.\r\n\r\nI believe the only one left is the zero-dim handling with `table_cast`...",
"@lhoestq is 2c4bfba what you meant?",
"Awesome! Yes, I'm happy to help with the docs. Would appreciate any pointers, we can discuss in #7740.\r\n\r\nIt does look like there was a CI test failure, though it seems unrelated?\r\n```\r\nFAILED tests/test_dataset_dict.py::test_dummy_datasetdict_serialize_fs - ValueError: Protocol not known: mock\r\nFAILED tests/test_arrow_dataset.py::test_dummy_dataset_serialize_fs - ValueError: Protocol not known: mock\r\n```\r\nAlso, what do you think of the todos in https://github.com/huggingface/datasets/pull/7690#issuecomment-3105391677 ? In particular I think support in dataset-viewer would be nice.",
"Cool ! Yeah the failure is unrelated\r\n\r\nRegarding the Viewer, it should work out of the box when it's updated with the next version of `datasets` :)"
] |
3,242,580,301
| 7,689
|
BadRequestError for loading dataset?
|
closed
| 2025-07-18T09:30:04
| 2025-07-18T11:59:51
| 2025-07-18T11:52:29
|
https://github.com/huggingface/datasets/issues/7689
| null |
WPoelman
| false
|
[
"Same here, for `HuggingFaceFW/fineweb`. Code that worked with no issues for the last 2 months suddenly fails today. Tried updating `datasets`, `huggingface_hub`, `fsspec` to newest versions, but the same error occurs.",
"I'm also hitting this issue, with `mandarjoshi/trivia_qa`; My dataset loading was working successfully yesterday - I'm using `huggingface-hub==0.27.1`, `datasets==3.2.0`",
"Same, here with `datasets==3.6.0`",
"Same, with `datasets==4.0.0`.",
"Same here tried different versions of huggingface-hub and datasets but the error keeps occuring ",
"A temporary workaround is to first download your dataset with\n\nhuggingface-cli download HuggingFaceH4/ultrachat_200k --repo-type dataset\n\nThen find the local path of the dataset typically like ~/.cache/huggingface/hub/HuggingFaceH4-ultrachat_200k/snapshots/*id*\n\nAnd then load like \n\nfrom datasets import load_dataset\ndataset = load_dataset(\"~/.cache/huggingface/hub/HuggingFaceH4-ultrachat_200k/snapshots/*id*\")\n",
"I am also experiencing this issue. I was trying to load TinyStories\nds = datasets.load_dataset(\"roneneldan/TinyStories\", streaming=True, split=\"train\")\n\nresulting in the previously stated error:\nException has occurred: BadRequestError\n(Request ID: Root=1-687a1d09-66cceb496c9401b1084133d6;3550deed-c459-4799-bc74-97924742bd94)\n\nBad request:\n* Invalid input: expected array, received string * at paths * Invalid input: expected boolean, received string * at expand\n✖ Invalid input: expected array, received string\n → at paths\n✖ Invalid input: expected boolean, received string\n → at expand\nFileNotFoundError: Dataset roneneldan/TinyStories is not cached in None\n\nThis very code worked fine yesterday, so it's a very recent issue.\n\nEnvironment info:\nprint(\"datasets version:\", datasets.__version__)\nprint(\"huggingface_hub version:\", huggingface_hub.__version__)\nprint(\"pyarrow version:\", pyarrow.__version__)\nprint(\"pandas version:\", pandas.__version__)\nprint(\"fsspec version:\", fsspec.__version__)\nprint(\"Python version:\", sys.version)\nprint(\"Platform:\", platform.platform())\ndatasets version: 4.0.0\nhuggingface_hub version: 0.33.4\npyarrow version: 19.0.0\npandas version: 2.2.3\nfsspec version: 2024.9.0\nPython version: 3.12.11 (main, Jun 10 2025, 11:55:20) [GCC 15.1.1 20250425]\nPlatform: Linux-6.15.6-arch1-1-x86_64-with-glibc2.41",
"Same here with datasets==3.6.0\n```\nhuggingface_hub.errors.BadRequestError: (Request ID: Root=1-687a238d-27374f964534f79f702bc239;61f0669c-cb70-4aff-b57b-73a446f9c65e)\n\nBad request:\n* Invalid input: expected array, received string * at paths * Invalid input: expected boolean, received string * at expand\n✖ Invalid input: expected array, received string\n → at paths\n✖ Invalid input: expected boolean, received string\n → at expand\n```",
"Same here, works perfectly yesterday\n\n```\nError code: ConfigNamesError\nException: BadRequestError\nMessage: (Request ID: Root=1-687a23a5-314b45b36ce962cf0e431b9a;b979ddb2-a80b-483c-8b1e-403e24e83127)\n\nBad request:\n* Invalid input: expected array, received string * at paths * Invalid input: expected boolean, received string * at expand\n✖ Invalid input: expected array, received string\n → at paths\n✖ Invalid input: expected boolean, received string\n → at expand\n```",
"It was literally working for me and then suddenly it stopped working next time I run the command. Same issue but private repo so I can't share example. ",
"A bug from Hugging Face not us",
"Same here!",
"@LMSPaul thanks! The workaround seems to work (at least for the datasets I tested).\n\nOn the command line:\n```sh\nhuggingface-cli download <dataset-name> --repo-type dataset --local-dir <local-dir>\n```\n\nAnd then in Python:\n```python\nfrom datasets import load_dataset\n\n# The dataset-specific options seem to work with this as well, \n# except for a warning from \"trust_remote_code\"\nds = load_dataset(<local-dir>)\n```",
"Same for me.. I couldn't load ..\nIt was perfectly working yesterday..\n\n\nfrom datasets import load_dataset\nraw_datasets = load_dataset(\"glue\", \"mrpc\")\n\nThe error resulting is given below\n\n---------------------------------------------------------------------------\nBadRequestError Traceback (most recent call last)\n/tmp/ipykernel_60/772458687.py in <cell line: 0>()\n 1 from datasets import load_dataset\n----> 2 raw_datasets = load_dataset(\"glue\", \"mrpc\")\n\n/usr/local/lib/python3.11/dist-packages/datasets/load.py in load_dataset(path, name, data_dir, data_files, split, cache_dir, features, download_config, download_mode, verification_mode, keep_in_memory, save_infos, revision, token, streaming, num_proc, storage_options, trust_remote_code, **config_kwargs)\n 2060 \n 2061 # Create a dataset builder\n-> 2062 builder_instance = load_dataset_builder(\n 2063 path=path,\n 2064 name=name,\n\n/usr/local/lib/python3.11/dist-packages/datasets/load.py in load_dataset_builder(path, name, data_dir, data_files, cache_dir, features, download_config, download_mode, revision, token, storage_options, trust_remote_code, _require_default_config_name, **config_kwargs)\n 1780 download_config = download_config.copy() if download_config else DownloadConfig()\n 1781 download_config.storage_options.update(storage_options)\n-> 1782 dataset_module = dataset_module_factory(\n 1783 path,\n 1784 revision=revision,\n\n/usr/local/lib/python3.11/dist-packages/datasets/load.py in dataset_module_factory(path, revision, download_config, download_mode, dynamic_modules_path, data_dir, data_files, cache_dir, trust_remote_code, _require_default_config_name, _require_custom_configs, **download_kwargs)\n 1662 f\"Couldn't find '{path}' on the Hugging Face Hub either: {type(e1).__name__}: {e1}\"\n 1663 ) from None\n-> 1664 raise e1 from None\n 1665 elif trust_remote_code:\n 1666 raise FileNotFoundError(\n\n/usr/local/lib/python3.11/dist-packages/datasets/load.py in dataset_module_factory(path, revision, download_config, download_mode, dynamic_modules_path, data_dir, data_files, cache_dir, trust_remote_code, _require_default_config_name, _require_custom_configs, **download_kwargs)\n 1627 download_mode=download_mode,\n 1628 use_exported_dataset_infos=use_exported_dataset_infos,\n-> 1629 ).get_module()\n 1630 except GatedRepoError as e:\n 1631 message = f\"Dataset '{path}' is a gated dataset on the Hub.\"\n\n/usr/local/lib/python3.11/dist-packages/datasets/load.py in get_module(self)\n 1017 else:\n 1018 patterns = get_data_patterns(base_path, download_config=self.download_config)\n-> 1019 data_files = DataFilesDict.from_patterns(\n 1020 patterns,\n 1021 base_path=base_path,\n\n/usr/local/lib/python3.11/dist-packages/datasets/data_files.py in from_patterns(cls, patterns, base_path, allowed_extensions, download_config)\n 687 patterns_for_key\n 688 if isinstance(patterns_for_key, DataFilesList)\n--> 689 else DataFilesList.from_patterns(\n 690 patterns_for_key,\n 691 base_path=base_path,\n\n/usr/local/lib/python3.11/dist-packages/datasets/data_files.py in from_patterns(cls, patterns, base_path, allowed_extensions, download_config)\n 580 try:\n 581 data_files.extend(\n--> 582 resolve_pattern(\n 583 pattern,\n 584 base_path=base_path,\n\n/usr/local/lib/python3.11/dist-packages/datasets/data_files.py in resolve_pattern(pattern, base_path, allowed_extensions, download_config)\n 358 matched_paths = [\n 359 filepath if filepath.startswith(protocol_prefix) else protocol_prefix + filepath\n--> 360 for filepath, info in 
fs.glob(pattern, detail=True, **glob_kwargs).items()\n 361 if (info[\"type\"] == \"file\" or (info.get(\"islink\") and os.path.isfile(os.path.realpath(filepath))))\n 362 and (xbasename(filepath) not in files_to_ignore)\n\n/usr/local/lib/python3.11/dist-packages/huggingface_hub/hf_file_system.py in glob(self, path, **kwargs)\n 519 kwargs = {\"expand_info\": kwargs.get(\"detail\", False), **kwargs}\n 520 path = self.resolve_path(path, revision=kwargs.get(\"revision\")).unresolve()\n--> 521 return super().glob(path, **kwargs)\n 522 \n 523 def find(\n\n/usr/local/lib/python3.11/dist-packages/fsspec/spec.py in glob(self, path, maxdepth, **kwargs)\n 635 # any exception allowed bar FileNotFoundError?\n 636 return False\n--> 637 \n 638 def lexists(self, path, **kwargs):\n 639 \"\"\"If there is a file at the given path (including\n\n/usr/local/lib/python3.11/dist-packages/huggingface_hub/hf_file_system.py in find(self, path, maxdepth, withdirs, detail, refresh, revision, **kwargs)\n 554 \"\"\"\n 555 if maxdepth:\n--> 556 return super().find(\n 557 path, maxdepth=maxdepth, withdirs=withdirs, detail=detail, refresh=refresh, revision=revision, **kwargs\n 558 )\n\n/usr/local/lib/python3.11/dist-packages/fsspec/spec.py in find(self, path, maxdepth, withdirs, detail, **kwargs)\n 498 # This is needed for posix glob compliance\n 499 if withdirs and path != \"\" and self.isdir(path):\n--> 500 out[path] = self.info(path)\n 501 \n 502 for _, dirs, files in self.walk(path, maxdepth, detail=True, **kwargs):\n\n/usr/local/lib/python3.11/dist-packages/huggingface_hub/hf_file_system.py in info(self, path, refresh, revision, **kwargs)\n 717 out = out1[0]\n 718 if refresh or out is None or (expand_info and out and out[\"last_commit\"] is None):\n--> 719 paths_info = self._api.get_paths_info(\n 720 resolved_path.repo_id,\n 721 resolved_path.path_in_repo,\n\n/usr/local/lib/python3.11/dist-packages/huggingface_hub/utils/_validators.py in _inner_fn(*args, **kwargs)\n 112 kwargs = smoothly_deprecate_use_auth_token(fn_name=fn.__name__, has_token=has_token, kwargs=kwargs)\n 113 \n--> 114 return fn(*args, **kwargs)\n 115 \n 116 return _inner_fn # type: ignore\n\n/usr/local/lib/python3.11/dist-packages/huggingface_hub/hf_api.py in get_paths_info(self, repo_id, paths, expand, revision, repo_type, token)\n 3397 headers=headers,\n 3398 )\n-> 3399 hf_raise_for_status(response)\n 3400 paths_info = response.json()\n 3401 return [\n\n/usr/local/lib/python3.11/dist-packages/huggingface_hub/utils/_http.py in hf_raise_for_status(response, endpoint_name)\n 463 f\"\\n\\nBad request for {endpoint_name} endpoint:\" if endpoint_name is not None else \"\\n\\nBad request:\"\n 464 )\n--> 465 raise _format(BadRequestError, message, response) from e\n 466 \n 467 elif response.status_code == 403:\n\nBadRequestError: (Request ID: Root=1-687a3201-087954b9245ab59672e6068e;d5bb4dbe-03e1-4912-bcec-5964c017b920)\n\nBad request:\n* Invalid input: expected array, received string * at paths * Invalid input: expected boolean, received string * at expand\n✖ Invalid input: expected array, received string\n → at paths\n✖ Invalid input: expected boolean, re",
"Thanks for the report!\nThe issue has been fixed and should now work without any code changes 😄\nSorry for the inconvenience!\n\nClosing, please open again if needed.",
"Works for me. Thanks!\n",
"Yes Now it's works for me..Thanks\r\n\r\nOn Fri, 18 Jul 2025, 5:25 pm Karol Brejna, ***@***.***> wrote:\r\n\r\n> *karol-brejna-i* left a comment (huggingface/datasets#7689)\r\n> <https://github.com/huggingface/datasets/issues/7689#issuecomment-3089238320>\r\n>\r\n> Works for me. Thanks!\r\n>\r\n> —\r\n> Reply to this email directly, view it on GitHub\r\n> <https://github.com/huggingface/datasets/issues/7689#issuecomment-3089238320>,\r\n> or unsubscribe\r\n> <https://github.com/notifications/unsubscribe-auth/AJRBXNEWBJ5UYVC2IRJM5DD3JDODZAVCNFSM6AAAAACB2FDG4GVHI2DSMVQWIX3LMV43OSLTON2WKQ3PNVWWK3TUHMZTAOBZGIZTQMZSGA>\r\n> .\r\n> You are receiving this because you commented.Message ID:\r\n> ***@***.***>\r\n>\r\n"
] |
3,238,851,443
| 7,688
|
No module named "distributed"
|
open
| 2025-07-17T09:32:35
| 2025-07-25T15:14:19
| null |
https://github.com/huggingface/datasets/issues/7688
| null |
yingtongxiong
| false
|
[
"The error ModuleNotFoundError: No module named 'datasets.distributed' means your installed datasets library is too old or incompatible with the version of Library you are using(in my case it was BEIR). The datasets.distributed module was removed in recent versions of the datasets library.\n\nDowngrade datasets to version 2.14.6 : ! pip install datasets==2.14.6\n",
"this code does run in `datasets` 4.0:\n```python\nfrom datasets.distributed import split_dataset_by_node\n```\n\nmake sure you have a python version that is recent enough (>=3.9) to be able to install `datasets` 4.0",
"I do think the problem is caused by the python version, because I do have python version 3.12.5"
] |
3,238,760,301
| 7,687
|
Datasets keeps rebuilding the dataset every time i call the python script
|
open
| 2025-07-17T09:03:38
| 2025-07-25T15:21:31
| null |
https://github.com/huggingface/datasets/issues/7687
| null |
CALEB789
| false
|
[
"here is the code to load the dataset form the cache:\n\n```python\ns = load_dataset('databricks/databricks-dolly-15k')['train']\n```\n\nif you pass the location of a local directory it will create a new cache based on that directory content"
] |
3,237,201,090
| 7,686
|
load_dataset does not check .no_exist files in the hub cache
|
open
| 2025-07-16T20:04:00
| 2025-07-16T20:04:00
| null |
https://github.com/huggingface/datasets/issues/7686
| null |
jmaccarl
| false
|
[] |
3,236,979,340
| 7,685
|
Inconsistent range request behavior for parquet REST api
|
open
| 2025-07-16T18:39:44
| 2025-08-11T08:16:54
| null |
https://github.com/huggingface/datasets/issues/7685
| null |
universalmind303
| false
|
[
"This is a weird bug, is it a range that is supposed to be satisfiable ? I mean, is it on the boundraries ?\n\nLet me know if you'r e still having the issue, in case it was just a transient bug",
"@lhoestq yes the ranges are supposed to be satisfiable, and _sometimes_ they are. \n\nThe head requests show that it does in fact accept a byte range. \n\n```\n> curl -IL \"https://huggingface.co/api/datasets/HuggingFaceTB/smoltalk2/parquet/Mid/Llama_Nemotron_Post_Training_Dataset_reasoning_r1/0.parquet\" \n\n\nHTTP/2 200\ncontent-length: 218006142\ncontent-disposition: inline; filename*=UTF-8''0000.parquet; filename=\"0000.parquet\";\ncache-control: public, max-age=31536000\netag: \"cf8a3a5665cf8b2ff667fb5236a1e5cb13c7582955f9533c88e1387997ef3af9\"\naccess-control-allow-origin: *\naccess-control-allow-headers: Content-Range, Content-Type, Content-Disposition, ETag\naccess-control-expose-headers: Accept-Ranges, Content-Range, Content-Type, Content-Disposition, ETag, X-Cache\naccept-ranges: bytes\nx-request-id: 01K11493PRMCZKVSNCBF1EX1WJ\ndate: Fri, 25 Jul 2025 15:47:25 GMT\nx-cache: Hit from cloudfront\nvia: 1.1 ad637ff39738449b56ab4eac4b02cbf4.cloudfront.net (CloudFront)\nx-amz-cf-pop: MSP50-P2\nx-amz-cf-id: ti1Ze3e0knGMl0PkeZ_F_snZNZe4007D9uT502MkGjM4NWPYWy13wA==\nage: 15\ncontent-security-policy: default-src 'none'; sandbox\n```\n\nand as I mentioned, _sometimes_ it satisfies the request \n\n```\n* Request completely sent off\n< HTTP/2 206\n< content-length: 131072\n< content-disposition: inline; filename*=UTF-8''0000.parquet; filename=\"0000.parquet\";\n< cache-control: public, max-age=31536000\n< etag: \"cf8a3a5665cf8b2ff667fb5236a1e5cb13c7582955f9533c88e1387997ef3af9\"\n< access-control-allow-origin: *\n< access-control-allow-headers: Content-Range, Content-Type, Content-Disposition, ETag\n< access-control-expose-headers: Accept-Ranges, Content-Range, Content-Type, Content-Disposition, ETag, X-Cache\n< x-request-id: 01K1146P5PNC4D2XD348C78BTC\n< date: Fri, 25 Jul 2025 15:46:06 GMT\n< x-cache: Hit from cloudfront\n< via: 1.1 990606ab91bf6503d073ad5fee40784c.cloudfront.net (CloudFront)\n< x-amz-cf-pop: MSP50-P2\n< x-amz-cf-id: l58ghqEzNZn4eo4IRNl76fOFrHTk_TJKeLi0-g8YYHmq7Oh3s8sXnQ==\n< age: 248\n< content-security-policy: default-src 'none'; sandbox\n< content-range: bytes 217875070-218006141/218006142\n```\n\nbut more often than not, it returns a 416\n```\n* Request completely sent off\n< HTTP/2 416\n< content-type: text/html\n< content-length: 49\n< server: CloudFront\n< date: Fri, 25 Jul 2025 15:51:08 GMT\n< expires: Fri, 25 Jul 2025 15:51:08 GMT\n< content-range: bytes */177\n< x-cache: Error from cloudfront\n< via: 1.1 65ba38c8dc30018660c405d1f32ef3a0.cloudfront.net (CloudFront)\n< x-amz-cf-pop: MSP50-P1\n< x-amz-cf-id: 1t1Att_eqiO-LmlnnaO-cCPoh6G2AIQDaklhS08F_revXNqijMpseA==\n```\n\n\n",
"As a workaround, adding a unique parameter to the url avoids the CDN caching and returns the correct result. \n\n```\n❯ curl -v -L -H \"Range: bytes=217875070-218006142\" -o output.parquet \"https://huggingface.co/api/datasets/HuggingFaceTB/smoltalk2/parquet/Mid/Llama_Nemotron_Post_Training_Dataset_reasoning_r1/0.parquet?cachebust=<SOMEUNIQUESTRING>\" \n``` \n",
"@lhoestq Is there any update on this? We (daft) have been getting more reports of this when users are reading huggingface datasets. ",
"> [@lhoestq](https://github.com/lhoestq) Is there any update on this? We (daft) have been getting more reports of this when users are reading huggingface datasets.\n\nHello, \nWe have temporarily disabled the caching rule that could be the origin of this issue. Meanwhile, the problem is still being investigated by us"
] |
3,231,680,474
| 7,684
|
fix audio cast storage from array + sampling_rate
|
closed
| 2025-07-15T10:13:42
| 2025-07-15T10:24:08
| 2025-07-15T10:24:07
|
https://github.com/huggingface/datasets/pull/7684
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/7684",
"html_url": "https://github.com/huggingface/datasets/pull/7684",
"diff_url": "https://github.com/huggingface/datasets/pull/7684.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/7684.patch",
"merged_at": "2025-07-15T10:24:07"
}
|
lhoestq
| true
|
[
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_7684). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update."
] |
3,231,553,161
| 7,683
|
Convert to string when needed + faster .zstd
|
closed
| 2025-07-15T09:37:44
| 2025-07-15T10:13:58
| 2025-07-15T10:13:56
|
https://github.com/huggingface/datasets/pull/7683
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/7683",
"html_url": "https://github.com/huggingface/datasets/pull/7683",
"diff_url": "https://github.com/huggingface/datasets/pull/7683.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/7683.patch",
"merged_at": "2025-07-15T10:13:56"
}
|
lhoestq
| true
|
[
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_7683). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update."
] |
3,229,687,253
| 7,682
|
Fail to cast Audio feature for numpy arrays in datasets 4.0.0
|
closed
| 2025-07-14T18:41:02
| 2025-07-15T12:10:39
| 2025-07-15T10:24:08
|
https://github.com/huggingface/datasets/issues/7682
| null |
luatil-cloud
| false
|
[
"thanks for reporting, I opened a PR and I'll make a patch release soon ",
"> thanks for reporting, I opened a PR and I'll make a patch release soon\n\nThank you very much @lhoestq!"
] |
3,227,112,736
| 7,681
|
Probabilistic High Memory Usage and Freeze on Python 3.10
|
open
| 2025-07-14T01:57:16
| 2025-07-14T01:57:16
| null |
https://github.com/huggingface/datasets/issues/7681
| null |
ryan-minato
| false
|
[] |
3,224,824,151
| 7,680
|
Question about iterable dataset and streaming
|
open
| 2025-07-12T04:48:30
| 2025-08-01T13:01:48
| null |
https://github.com/huggingface/datasets/issues/7680
| null |
Tavish9
| false
|
[
"> If we have already loaded the dataset, why doing to_iterable_dataset? Does it go through the dataset faster than map-style dataset?\n\nyes, it makes a faster DataLoader for example (otherwise DataLoader uses `__getitem__` which is slower than iterating)\n\n> load_dataset(streaming=True) is useful for huge dataset, but the speed is slow. How to make it comparable to to_iterable_dataset without loading the whole dataset into RAM?\n\nYou can aim for saturating your bandwidth using a DataLoader with num_workers and prefetch_factor. The maximum speed will be your internet bandwidth (unless your CPU is a bottlenbeck for CPU operations like image decoding).",
"> > If we have already loaded the dataset, why doing to_iterable_dataset? Does it go through the dataset faster than map-style dataset?\n> \n> yes, it makes a faster DataLoader for example (otherwise DataLoader uses `__getitem__` which is slower than iterating)\n\nOkay, but `__getitem__` seems suitable for distributed settings. A distributed sampler would dispatch distinct indexes to each rank (rank0 got 0,1,2,3, rank1 got 4,5,6,7), however, if we make it `to_iterable_dataset`, then each rank needs to iterate all the samples, making it slower (i,e, rank1 got 0,1,2,3, rank2 got 0,1,2,3,(4,5,6,7))\n\nWhat's your opinion here?",
"> however, if we make it to_iterable_dataset, then each rank needs to iterate all the samples, making it slower (i,e, rank1 got 0,1,2,3, rank2 got 0,1,2,3,(4,5,6,7))\n\nActually if you specify `to_iterable_dataset(num_shards=world_size)` (or a factor of world_size) and use a `torch.utils.data.DataLoader` then each rank will get a subset of the data thanks to the sharding. E.g. rank0 gets 0,1,2,3 and rank1 gets 4,5,6,7.\n\nThis is because `datasets.IterableDataset` subclasses `torch.utils.data.IterableDataset` and is aware of the current rank.",
"Got it, very nice features `num_shards` 👍🏻 \n\nI would benchmark `to_iterable_dataset(num_shards=world_size)` against traditional map-style one in distributed settings in the near future.",
"Hi @lhoestq , I run a test for the speed in single node. Things are not expected as you mentioned before.\n\n```python\nimport time\n\nimport datasets\nfrom torch.utils.data import DataLoader\n\n\ndef time_decorator(func):\n def wrapper(*args, **kwargs):\n start_time = time.time()\n result = func(*args, **kwargs)\n end_time = time.time()\n print(f\"Time taken: {end_time - start_time} seconds\")\n return result\n\n return wrapper\n\n\ndataset = datasets.load_dataset(\n \"parquet\", data_dir=\"my_dir\", split=\"train\"\n)\n\n\n@time_decorator\ndef load_dataset1():\n for _ in dataset:\n pass\n\n\n@time_decorator\ndef load_dataloader1():\n for _ in DataLoader(dataset, batch_size=100, num_workers=5):\n pass\n\n\n@time_decorator\ndef load_dataset2():\n for _ in dataset.to_iterable_dataset():\n pass\n\n\n@time_decorator\ndef load_dataloader2():\n for _ in DataLoader(dataset.to_iterable_dataset(num_shards=5), batch_size=100, num_workers=5):\n pass\n\n\nload_dataset1()\nload_dataloader1()\nload_dataset2()\nload_dataloader2()\n```\n```bash\nResolving data files: 100%|██████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 53192/53192 [00:00<00:00, 227103.16it/s]\nTime taken: 100.36162948608398 seconds\nTime taken: 70.09702134132385 seconds\nTime taken: 343.09229612350464 seconds\nTime taken: 132.8996012210846 seconds\n```\n\n1. Why `for _ in dataset.to_iterable_dataset()` is much slower than `for _ in dataset`\n2. The `70 < 132`, the dataloader is slower when `to_iterable_dataset`",
"Loading in batches is faster than one example at a time. In your test the dataset is loaded in batches while the iterable_dataset is loaded one example at a time and the dataloader has a buffer to turn the examples to batches.\n\ncan you try this ?\n\n```\nbatched_dataset = dataset.batch(100, num_proc=5)\n\n@time_decorator\ndef load_dataloader3():\n for _ in DataLoader(batched_dataset.to_iterable_dataset(num_shards=5), batch_size=None, num_workers=5):\n pass\n```",
"To be fair, I test the time including batching:\n```python\n@time_decorator\ndef load_dataloader3():\n for _ in DataLoader(dataset.batch(100, num_proc=5).to_iterable_dataset(num_shards=5), batch_size=None, num_workers=5):\n pass\n```\n\n```bash\nTime taken: 49.722447633743286 seconds\n```",
"I run another test about shuffling.\n\n```python\n@time_decorator\ndef load_map_dataloader1():\n for _ in DataLoader(dataset, batch_size=100, num_workers=5, shuffle=True):\n pass\n\n@time_decorator\ndef load_map_dataloader2():\n for _ in DataLoader(dataset.batch(100, num_proc=5), batch_size=None, num_workers=5, shuffle=True):\n pass\n\n\n@time_decorator\ndef load_iter_dataloader1():\n for _ in DataLoader(dataset.batch(100, num_proc=5).to_iterable_dataset(num_shards=5).shuffle(buffer_size=1000), batch_size=None, num_workers=5):\n pass\n\nload_map_dataloader1()\nload_map_dataloader2()\nload_iter_dataloader1()\n```\n\n```bash\nTime taken: 43.8506863117218 seconds\nTime taken: 38.02591300010681 seconds\nTime taken: 53.38815689086914 seconds\n```\n\n\n- What if I have custom collate_fn when batching?\n\n- And if I want to shuffle the dataset, what's the correct order for `to_iterable_dataset(num_shards=x)`, `batch()` and `shuffle()`. Is `dataset.batch().to_iterable_dataset().shuffle()`? This is not faster than map-style dataset"
] |
3,220,787,371
| 7,679
|
metric glue breaks with 4.0.0
|
closed
| 2025-07-10T21:39:50
| 2025-07-11T17:42:01
| 2025-07-11T17:42:01
|
https://github.com/huggingface/datasets/issues/7679
| null |
stas00
| false
|
[
"I released `evaluate` 0.4.5 yesterday to fix the issue - sorry for the inconvenience:\n\n```\npip install -U evaluate\n```",
"Thanks so much, @lhoestq!"
] |
3,218,625,544
| 7,678
|
To support decoding audio data, please install 'torchcodec'.
|
closed
| 2025-07-10T09:43:13
| 2025-07-22T03:46:52
| 2025-07-11T05:05:42
|
https://github.com/huggingface/datasets/issues/7678
| null |
alpcansoydas
| false
|
[
"Hi ! yes you should `!pip install -U datasets[audio]` to have the required dependencies.\n\n`datasets` 4.0 now relies on `torchcodec` for audio decoding. The `torchcodec` AudioDecoder enables streaming from HF and also allows to decode ranges of audio",
"Same issues on Colab.\n\n> !pip install -U datasets[audio] \n\nThis works for me. Thanks."
] |
3,218,044,656
| 7,677
|
Toxicity fails with datasets 4.0.0
|
closed
| 2025-07-10T06:15:22
| 2025-07-11T04:40:59
| 2025-07-11T04:40:59
|
https://github.com/huggingface/datasets/issues/7677
| null |
serena-ruan
| false
|
[
"Hi ! You can fix this by upgrading `evaluate`:\n\n```\npip install -U evaluate\n```",
"Thanks, verified evaluate 0.4.5 works!"
] |
3,216,857,559
| 7,676
|
Many things broken since the new 4.0.0 release
|
open
| 2025-07-09T18:59:50
| 2025-07-21T10:38:01
| null |
https://github.com/huggingface/datasets/issues/7676
| null |
mobicham
| false
|
[
"Happy to take a look, do you have a list of impacted datasets ?",
"Thanks @lhoestq , related to lm-eval, at least `winogrande`, `mmlu` and `hellaswag`, based on my tests yesterday. But many others like <a href=\"https://huggingface.co/datasets/lukaemon/bbh\">bbh</a>, most probably others too. ",
"Hi @mobicham ,\n\nI was having the same issue `ValueError: Feature type 'List' not found` yesterday, when I tried to load my dataset using the `load_dataset()` function.\nBy updating to `4.0.0`, I don't see this error anymore.\n\np.s. I used `Sequence` in replace of list when building my dataset (see below)\n```\nfeatures = Features({\n ...\n \"objects\": Sequence({\n \"id\": Value(\"int64\"),\n \"bbox\": Sequence(Value(\"float32\"), length=4),\n \"category\": Value(\"string\")\n }),\n ...\n})\ndataset = Dataset.from_dict(data_dict)\ndataset = dataset.cast(features)\n\n``` \n",
"The issue comes from [hails/mmlu_no_train](https://huggingface.co/datasets/hails/mmlu_no_train), [allenai/winogrande](https://huggingface.co/datasets/allenai/winogrande), [lukaemon/bbh](https://huggingface.co/datasets/lukaemon/bbh) and [Rowan/hellaswag](https://huggingface.co/datasets/Rowan/hellaswag) which are all unsupported in `datasets` 4.0 since they are based on python scripts. Fortunately there are PRs to fix those datasets (I did some of them a year ago but dataset authors haven't merged yet... will have to ping people again about it and update here):\n\n- https://huggingface.co/datasets/hails/mmlu_no_train/discussions/2 merged ! ✅ \n- https://huggingface.co/datasets/allenai/winogrande/discussions/6 merged ! ✅ \n- https://huggingface.co/datasets/Rowan/hellaswag/discussions/7 merged ! ✅ \n- https://huggingface.co/datasets/lukaemon/bbh/discussions/2 merged ! ✅ ",
"Thank you very much @lhoestq , I will try next week 👍 ",
"I get this error when using datasets 3.5.1 to load a dataset saved with datasets 4.0.0. If you are hitting this issue, make sure that both dataset saving code and the loading code are <4.0.0 or >=4.0.0.",
"This broke several lm-eval-harness workflows for me and reverting to older versions of datasets is not fixing the issue, does anyone have a workaround?",
"> I get this error when using datasets 3.5.1 to load a dataset saved with datasets 4.0.0. If you are hitting this issue, make sure that both dataset saving code and the loading code are <4.0.0 or >=4.0.0.\n\n`datasets` 4.0 can load datasets saved using any older version. But the other way around is not always true: if you save a dataset with `datasets` 4.0 it may use the new `List` type that requires 4.0 and raise `ValueError: Feature type 'List' not found.`\n\nHowever issues with lm eval harness seem to come from another issue: unsupported dataset scripts (see https://github.com/huggingface/datasets/issues/7676#issuecomment-3057550659)\n\n> This broke several lm-eval-harness workflows for me and reverting to older versions of datasets is not fixing the issue, does anyone have a workaround?\n\nwhen reverting to an old `datasets` version I'd encourage you to clear your cache (by default it is located at `~/.cache/huggingface/datasets`) otherwise it might try to load a `List` type that didn't exist in old versions",
"All the impacted datasets in lm eval harness have been fixed thanks to the reactivity of dataset authors ! let me know if you encounter issues with other datasets :)",
"Hello folks, I have found `patrickvonplaten/librispeech_asr_dummy` to be another dataset that is currently broken since the 4.0.0 release. Is there a PR on this as well?",
"https://huggingface.co/datasets/microsoft/prototypical-hai-collaborations seems to be impacted as well.\n\n```\n_temp = load_dataset(\"microsoft/prototypical-hai-collaborations\", \"wildchat1m_en3u-task_anns\")\n``` \nleads to \n`ValueError: Feature type 'List' not found. Available feature types: ['Value', 'ClassLabel', 'Translation', 'TranslationVariableLanguages', 'LargeList', 'Sequence', 'Array2D', 'Array3D', 'Array4D', 'Array5D', 'Audio', 'Image', 'Video', 'Pdf']`",
"`microsoft/prototypical-hai-collaborations` is not impacted, you can load it using both `datasets` 3.6 and 4.0. I also tried on colab to confirm.\n\nOne thing that could explain `ValueError: Feature type 'List' not found.` is maybe if you have loaded and cached this dataset with `datasets` 4.0 and then tried to reload it from cache using 3.6.0.\n\nEDIT: actually I tried and 3.6 can reload datasets cached with 4.0 so I'm not sure why you have this error. Which version of `datasets` are you using ?",
"> Hello folks, I have found patrickvonplaten/librispeech_asr_dummy to be another dataset that is currently broken since the 4.0.0 release. Is there a PR on this as well?\n\nI guess you can use [hf-internal-testing/librispeech_asr_dummy](https://huggingface.co/datasets/hf-internal-testing/librispeech_asr_dummy) instead of `patrickvonplaten/librispeech_asr_dummy`, or ask the dataset author to convert their dataset to Parquet"
] |
3,216,699,094
| 7,675
|
common_voice_11_0.py failure in dataset library
|
open
| 2025-07-09T17:47:59
| 2025-07-22T09:35:42
| null |
https://github.com/huggingface/datasets/issues/7675
| null |
egegurel
| false
|
[
"Hi ! This dataset is not in a supported format and `datasets` 4 doesn't support datasets that based on python scripts which are often source of errors. Feel free to ask the dataset authors to convert the dataset to a supported format at https://huggingface.co/datasets/mozilla-foundation/common_voice_11_0/discussions, e.g. parquet.\n\nIn the meantime you can pin old versions of `datasets` like `datasets==3.6.0`",
"Thanks @lhoestq! I encountered the same issue and switching to an older version of `datasets` worked.",
">which version of datasets worked for you, I tried switching to 4.6.0 and also moved back for fsspec, but still facing issues for this.\n\n",
"Try datasets<=3.6.0",
"same issue "
] |
3,216,251,069
| 7,674
|
set dev version
|
closed
| 2025-07-09T15:01:25
| 2025-07-09T15:04:01
| 2025-07-09T15:01:33
|
https://github.com/huggingface/datasets/pull/7674
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/7674",
"html_url": "https://github.com/huggingface/datasets/pull/7674",
"diff_url": "https://github.com/huggingface/datasets/pull/7674.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/7674.patch",
"merged_at": "2025-07-09T15:01:33"
}
|
lhoestq
| true
|
[
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_7674). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update."
] |
3,216,075,633
| 7,673
|
Release: 4.0.0
|
closed
| 2025-07-09T14:03:16
| 2025-07-09T14:36:19
| 2025-07-09T14:36:18
|
https://github.com/huggingface/datasets/pull/7673
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/7673",
"html_url": "https://github.com/huggingface/datasets/pull/7673",
"diff_url": "https://github.com/huggingface/datasets/pull/7673.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/7673.patch",
"merged_at": "2025-07-09T14:36:18"
}
|
lhoestq
| true
|
[
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_7673). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update."
] |
3,215,287,164
| 7,672
|
Fix double sequence
|
closed
| 2025-07-09T09:53:39
| 2025-07-09T09:56:29
| 2025-07-09T09:56:28
|
https://github.com/huggingface/datasets/pull/7672
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/7672",
"html_url": "https://github.com/huggingface/datasets/pull/7672",
"diff_url": "https://github.com/huggingface/datasets/pull/7672.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/7672.patch",
"merged_at": "2025-07-09T09:56:27"
}
|
lhoestq
| true
|
[
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_7672). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update."
] |
3,213,223,886
| 7,671
|
Mapping function not working if the first example is returned as None
|
closed
| 2025-07-08T17:07:47
| 2025-07-09T12:30:32
| 2025-07-09T12:30:32
|
https://github.com/huggingface/datasets/issues/7671
| null |
dnaihao
| false
|
[
"Hi, map() always expect an output.\n\nIf you wish to filter examples, you should use filter(), in your case it could be something like this:\n\n```python\nds = ds.map(my_processing_function).filter(ignore_long_prompts)\n```",
"Realized this! Thanks a lot, I will close this issue then."
] |
3,208,962,372
| 7,670
|
Fix audio bytes
|
closed
| 2025-07-07T13:05:15
| 2025-07-07T13:07:47
| 2025-07-07T13:05:33
|
https://github.com/huggingface/datasets/pull/7670
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/7670",
"html_url": "https://github.com/huggingface/datasets/pull/7670",
"diff_url": "https://github.com/huggingface/datasets/pull/7670.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/7670.patch",
"merged_at": "2025-07-07T13:05:33"
}
|
lhoestq
| true
|
[
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_7670). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update."
] |
3,203,541,091
| 7,669
|
How can I add my custom data to huggingface datasets
|
open
| 2025-07-04T19:19:54
| 2025-07-05T18:19:37
| null |
https://github.com/huggingface/datasets/issues/7669
| null |
xiagod
| false
|
[
"Hey @xiagod \n\nThe easiest way to add your custom data to Hugging Face Datasets is to use the built-in load_dataset function with your local files. Some examples include:\n\nCSV files:\nfrom datasets import load_dataset\ndataset = load_dataset(\"csv\", data_files=\"my_file.csv\")\n\nJSON or JSONL files:\nfrom datasets import load_dataset\ndataset = load_dataset(\"json\", data_files=\"my_file.json\")\n\n\nImages stored in folders (e.g. data/train/cat/, data/train/dog/):\nfrom datasets import load_dataset\ndataset = load_dataset(\"imagefolder\", data_dir=\"/path/to/pokemon\")\n\n\nThese methods let you quickly create a custom dataset without needing to write a full script.\n\nMore information can be found in Hugging Face's tutorial \"Create a dataset\" or \"Load\" documentation here: \n\nhttps://huggingface.co/docs/datasets/create_dataset \n\nhttps://huggingface.co/docs/datasets/loading#local-and-remote-files\n\n\n\nIf you want to submit your dataset to the Hugging Face Datasets GitHub repo so others can load it follow this guide: \n\nhttps://huggingface.co/docs/datasets/upload_dataset \n\n\n"
] |
3,199,039,322
| 7,668
|
Broken EXIF crash the whole program
|
open
| 2025-07-03T11:24:15
| 2025-07-03T12:27:16
| null |
https://github.com/huggingface/datasets/issues/7668
| null |
Seas0
| false
|
[
"There are other discussions about error handling for images decoding here : https://github.com/huggingface/datasets/issues/7632 https://github.com/huggingface/datasets/issues/7612\n\nand a PR here: https://github.com/huggingface/datasets/pull/7638 (would love your input on the proposed solution !)"
] |
3,196,251,707
| 7,667
|
Fix infer list of images
|
closed
| 2025-07-02T15:07:58
| 2025-07-02T15:10:28
| 2025-07-02T15:08:03
|
https://github.com/huggingface/datasets/pull/7667
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/7667",
"html_url": "https://github.com/huggingface/datasets/pull/7667",
"diff_url": "https://github.com/huggingface/datasets/pull/7667.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/7667.patch",
"merged_at": "2025-07-02T15:08:03"
}
|
lhoestq
| true
|
[
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_7667). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update."
] |
3,196,220,722
| 7,666
|
Backward compat list feature
|
closed
| 2025-07-02T14:58:00
| 2025-07-02T15:00:37
| 2025-07-02T14:59:40
|
https://github.com/huggingface/datasets/pull/7666
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/7666",
"html_url": "https://github.com/huggingface/datasets/pull/7666",
"diff_url": "https://github.com/huggingface/datasets/pull/7666.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/7666.patch",
"merged_at": "2025-07-02T14:59:40"
}
|
lhoestq
| true
|
[
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_7666). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update."
] |
3,193,239,955
| 7,665
|
Function load_dataset() misinterprets string field content as part of dataset schema when dealing with `.jsonl` files
|
closed
| 2025-07-01T17:14:53
| 2025-07-01T17:17:48
| 2025-07-01T17:17:48
|
https://github.com/huggingface/datasets/issues/7665
| null |
zdzichukowalski
| false
|
[
"Somehow I created the issue twice🙈 This one is an exact duplicate of #7664."
] |
3,193,239,035
| 7,664
|
Function load_dataset() misinterprets string field content as part of dataset schema when dealing with `.jsonl` files
|
open
| 2025-07-01T17:14:32
| 2025-07-09T13:14:11
| null |
https://github.com/huggingface/datasets/issues/7664
| null |
zdzichukowalski
| false
|
[
"Hey @zdzichukowalski, I was not able to reproduce this on python 3.11.9 and datasets 3.6.0. The contents of \"body\" are correctly parsed as a string and no other fields like timestamps are created. Could you try reproducing this in a fresh environment, or posting the complete code where you encountered that stacktrace? (I noticed in the stacktrace you had a bigger program, perhaps there are some side effects)",
"Hi @zdzichukowalski, thanks for reporting this!\n\nTo help investigate this further, could you please share the following:\n\nExact contents of the data.jsonl file you're using — especially the first few lines that trigger the error.\n\nThe full code snippet you used to run load_dataset(), along with any environment setup (if not already shared).\n\nCan you confirm whether the issue persists when running in a clean virtual environment (e.g., with only datasets, pyarrow, and their dependencies)?\n\nIf possible, could you try running the same with an explicit features schema, like:\n\n```\nfrom datasets import load_dataset, Features, Value\nfeatures = Features({\"body\": Value(\"string\")})\nds = load_dataset(\"json\", data_files=\"data.jsonl\", split=\"train\", features=features)\n```\nAlso, just to clarify — does the \"body\" field contain plain string content, or is it sometimes being parsed from multi-line or structured inputs (like embedded JSON or CSV-like text)?\n\nOnce we have this info, we can check whether this is a schema inference issue, a PyArrow type coercion bug, or something else.",
"Ok I can confirm that I also cannot reproduce the error in a clean environment with the minimized version of the dataset that I provided. Same story for the old environment. Nonetheless the bug still happens in the new environment with the full version of the dataset, which I am providing now. Please let me know if now you can reproduce the problem.\n\nAdditionally I'm attaching result of the `pip freeze` command.\n\n[datasets-issues.jsonl.zip](https://github.com/user-attachments/files/21081755/datasets-issues.jsonl.zip)\n[requirements.txt](https://github.com/user-attachments/files/21081776/requirements.txt)\n\n@ArjunJagdale running with explicit script gives the following stack:\n[stack_features_version.txt](https://github.com/user-attachments/files/21082056/stack_features_version.txt)\n\nThe problematic `body` field seems to be e.g. content of [this comment](https://github.com/huggingface/datasets/issues/5596#issue-1604919993) from Github in which someone provided a stack trace containing json structure ;) I would say that it is intended to be a plain string. \n\nTo find a part that triggers an error, simply search for the \"timestamp[s]\" in the dataset. There are few such entries.\n\nI think I provided all the information you asked. \n\nOh, and workaround I suggested, that is convert `.jsonl` to `.json` worked for me.\n\nP.S\n1. @itsmejul the stack trace I provided is coming from running the two-liner script that I attached. There is no bigger program, although there were some jupiter files alongside the script, which were run in the same env. I am not sure what part of the stack trace suggests that there is something more ;) \n\n2. Is it possible that on some layer in the python/env/jupiter there is some caching mechanism for files that would give false results for my minimized version of the dataset file? There is of course possibility that I made a mistake and run the script with the wrong file, but I double and triple checked things before creating an issue. Earlier I wrote that \"(...) changing the file extension to `.json` or `.txt` avoids the problem\". But with the full version this is not true(when I change to `txt`), and minimized version always works. So it looks like that when I changed the extension to e.g. `txt` then a minimized file loaded from the disk and it was parsed correctly, but every time when I changed back to `jsonl` my script must have used an original content of the file - the one before I made a minimization. But this is still all strange because I even removed the fields before and after the body from my minimized `jsonl` and there were some different errors(I mention it in my original post), so I do not get why today I cannot reproduce it in the original env... \n\n",
"Hi @zdzichukowalski, thanks again for the detailed info and files!\n\nI’ve reviewed the `datasets-issues.jsonl` you shared, and I can now confirm the issue with full clarity:\n\nSome entries in the `\"body\"` field contain string content that resembles schema definitions — for example:\n\n```\nstruct<type: string, action: string, datetime: timestamp[s], ...>\n```\n\nThese strings appear to be copied from GitHub comments or stack traces (e.g., from #5596)\n\nWhen using the `.jsonl` format, `load_dataset()` relies on row-wise schema inference via PyArrow. If some rows contain real structured fields like `pull_request.merged_at` (a valid timestamp), and others contain schema-like text inside string fields, PyArrow can get confused while unifying the schema — leading to cast errors.\n\nThat’s why:\n\n* Using a reduced schema like `features={\"body\": Value(\"string\")}` fails — because the full table has many more fields.\n* Converting the file to `.json` (a list of objects) works — because global schema inference kicks in.\n* Filtering the dataset to only the `body` field avoids the issue entirely.\n\n### Suggested Workarounds\n\n* Convert the `.jsonl` file to `.json` to enable global schema inference.\n* Or, preprocess the `.jsonl` file to extract only the `\"body\"` field if that’s all you need.",
"So in summary should we treat it as a low severity bug in `PyArrow`, in `Datasets` library, or as a proper behavior and do nothing with it?",
"You are right actually! I’d also categorize this as a low-severity schema inference edge case, mainly stemming from PyArrow, but exposed by how datasets handles .jsonl inputs.\n\nIt's not a bug in datasets per se, but confusing when string fields (like body) contain text that resembles schema — e.g., \"timestamp[s]\".\n\nMaybe @lhoestq — could this be considered as a small feature/improvement?"
] |
3,192,582,371
| 7,663
|
Custom metadata filenames
|
closed
| 2025-07-01T13:50:36
| 2025-07-01T13:58:41
| 2025-07-01T13:58:39
|
https://github.com/huggingface/datasets/pull/7663
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/7663",
"html_url": "https://github.com/huggingface/datasets/pull/7663",
"diff_url": "https://github.com/huggingface/datasets/pull/7663.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/7663.patch",
"merged_at": "2025-07-01T13:58:39"
}
|
lhoestq
| true
|
[
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_7663). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update."
] |
3,190,805,531
| 7,662
|
Applying map after transform with multiprocessing will cause OOM
|
open
| 2025-07-01T05:45:57
| 2025-07-10T06:17:40
| null |
https://github.com/huggingface/datasets/issues/7662
| null |
JunjieLl
| false
|
[
"Hi ! `add_column` loads the full column data in memory:\n\nhttps://github.com/huggingface/datasets/blob/bfa497b1666f4c58bd231c440d8b92f9859f3a58/src/datasets/arrow_dataset.py#L6021-L6021\n\na workaround to add the new column is to include the new data in the map() function instead, which only loads one batch at a time",
"> Hi ! `add_column` loads the full column data in memory:\n> \n> [datasets/src/datasets/arrow_dataset.py](https://github.com/huggingface/datasets/blob/bfa497b1666f4c58bd231c440d8b92f9859f3a58/src/datasets/arrow_dataset.py#L6021-L6021)\n> \n> Line 6021 in [bfa497b](/huggingface/datasets/commit/bfa497b1666f4c58bd231c440d8b92f9859f3a58)\n> \n> column_table = InMemoryTable.from_pydict({name: column}, schema=pyarrow_schema) \n> a workaround to add the new column is to include the new data in the map() function instead, which only loads one batch at a time\n\n\nHow about cast_column,since map cannot apply type transformation, e.g. Audio(16000) to Audio(24000)",
"cast_column calls `pyarrow.Table.cast` on the full dataset which I believe the memory usage depends on the source and target types but should be low in general\n\ncasting from Audio(16000) to Audio(24000) is cheap since the source and target arrow types are the same",
"> cast_column calls `pyarrow.Table.cast` on the full dataset which I believe the memory usage depends on the source and target types but should be low in general\n> \n> casting from Audio(16000) to Audio(24000) is cheap since the source and target arrow types are the same\n\nThanks for replying. So the OOM is caused by add_column operation. When I skip the operation, low memory will be achieved. Right?",
"> Hi ! `add_column` loads the full column data in memory:\n> \n> [datasets/src/datasets/arrow_dataset.py](https://github.com/huggingface/datasets/blob/bfa497b1666f4c58bd231c440d8b92f9859f3a58/src/datasets/arrow_dataset.py#L6021-L6021)\n> \n> Line 6021 in [bfa497b](/huggingface/datasets/commit/bfa497b1666f4c58bd231c440d8b92f9859f3a58)\n> \n> column_table = InMemoryTable.from_pydict({name: column}, schema=pyarrow_schema) \n> a workaround to add the new column is to include the new data in the map() function instead, which only loads one batch at a time\n\n\nNote num_process=1 would not cause OOM. I'm confused.\n\n"
] |
3,190,408,237
| 7,661
|
fix del tqdm lock error
|
open
| 2025-07-01T02:04:02
| 2025-08-13T13:16:44
| null |
https://github.com/huggingface/datasets/pull/7661
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/7661",
"html_url": "https://github.com/huggingface/datasets/pull/7661",
"diff_url": "https://github.com/huggingface/datasets/pull/7661.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/7661.patch",
"merged_at": null
}
|
Hypothesis-Z
| true
|
[
"let's see which solution is found at https://github.com/huggingface/huggingface_hub/pull/3286 and do the same maybe ?"
] |
3,189,028,251
| 7,660
|
AttributeError: type object 'tqdm' has no attribute '_lock'
|
open
| 2025-06-30T15:57:16
| 2025-07-03T15:14:27
| null |
https://github.com/huggingface/datasets/issues/7660
| null |
Hypothesis-Z
| false
|
[
"Deleting a class (**not instance**) attribute might be invalid in this case, which is `tqdm` doing in `ensure_lock`.\n\n```python\nfrom tqdm import tqdm as old_tqdm\n\nclass tqdm1(old_tqdm):\n def __delattr__(self, attr):\n try:\n super().__delattr__(attr)\n except AttributeError:\n if attr != '_lock':\n print(attr)\n raise\n\nclass Meta(type):\n def __delattr__(cls, name):\n if name == \"_lock\":\n return \n return super().__delattr__(name)\n \nclass tqdm2(old_tqdm, metaclass=Meta):\n pass\n\ndel tqdm2._lock\ndel tqdm1._lock # error\n```\n\nhttps://github.com/huggingface/datasets/blob/e71b0b19d79c7531f9b9bea7c09916b5f6157f42/src/datasets/utils/tqdm.py#L104-L122",
"A cheaper option (seems to work in my case): \n```python\nfrom datasets import tqdm as hf_tqdm\nhf_tqdm.set_lock(hf_tqdm.get_lock())\n```"
] |
3,187,882,217
| 7,659
|
Update the beans dataset link in Preprocess
|
closed
| 2025-06-30T09:58:44
| 2025-07-07T08:38:19
| 2025-07-01T14:01:42
|
https://github.com/huggingface/datasets/pull/7659
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/7659",
"html_url": "https://github.com/huggingface/datasets/pull/7659",
"diff_url": "https://github.com/huggingface/datasets/pull/7659.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/7659.patch",
"merged_at": "2025-07-01T14:01:42"
}
|
HJassar
| true
|
[] |
3,187,800,504
| 7,658
|
Fix: Prevent loss of info.features and column_names in IterableDatasetDict.map when features is None
|
closed
| 2025-06-30T09:31:12
| 2025-07-01T16:26:30
| 2025-07-01T16:26:12
|
https://github.com/huggingface/datasets/pull/7658
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/7658",
"html_url": "https://github.com/huggingface/datasets/pull/7658",
"diff_url": "https://github.com/huggingface/datasets/pull/7658.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/7658.patch",
"merged_at": null
}
|
ArjunJagdale
| true
|
[
"Hi!\r\nI haven’t included a test for this change, as the fix is quite small and targeted.\r\nPlease let me know if you’d like a test for this case or if you’d prefer to handle it during review.\r\nThanks!",
"we can't know in advance the `features` after map() (it transforms the data !), so you can reuse the `features` from `info.features`",
"I'll the patch as suggested — `info.features = features` or `self.info.features` — to ensure schema preservation while keeping the logic simple and explicit. WDYT?\r\n",
"info.features should be None in the general case, and replaced by the user's `features` if it's passed explicitly with `map(..., features=...)`\r\n\r\nhttps://github.com/huggingface/datasets/issues/7568 is not an issue we can fix",
"> info.features should be None in the general case, and replaced by the user's `features` if it's passed explicitly with `map(..., features=...)`\r\n> \r\n> #7568 is not an issue we can fix\r\n\r\nThanks for the clarification! Totally makes sense now — I understand that features=None is the expected behavior post-map() unless explicitly passed, and that preserving old schema by default could lead to incorrect assumptions.\r\nClosing this one — appreciate the feedback as always"
] |
3,186,036,016
| 7,657
|
feat: add subset_name as alias for name in load_dataset
|
open
| 2025-06-29T10:39:00
| 2025-07-18T17:45:41
| null |
https://github.com/huggingface/datasets/pull/7657
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/7657",
"html_url": "https://github.com/huggingface/datasets/pull/7657",
"diff_url": "https://github.com/huggingface/datasets/pull/7657.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/7657.patch",
"merged_at": null
}
|
ArjunJagdale
| true
|
[] |
3,185,865,686
| 7,656
|
fix(iterable): ensure MappedExamplesIterable supports state_dict for resume
|
open
| 2025-06-29T07:50:13
| 2025-06-29T07:50:13
| null |
https://github.com/huggingface/datasets/pull/7656
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/7656",
"html_url": "https://github.com/huggingface/datasets/pull/7656",
"diff_url": "https://github.com/huggingface/datasets/pull/7656.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/7656.patch",
"merged_at": null
}
|
ArjunJagdale
| true
|
[] |
3,185,382,105
| 7,655
|
Added specific use cases in Improve Performace
|
open
| 2025-06-28T19:00:32
| 2025-06-28T19:00:32
| null |
https://github.com/huggingface/datasets/pull/7655
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/7655",
"html_url": "https://github.com/huggingface/datasets/pull/7655",
"diff_url": "https://github.com/huggingface/datasets/pull/7655.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/7655.patch",
"merged_at": null
}
|
ArjunJagdale
| true
|
[] |
3,184,770,992
| 7,654
|
fix(load): strip deprecated use_auth_token from config_kwargs
|
open
| 2025-06-28T09:20:21
| 2025-06-28T09:20:21
| null |
https://github.com/huggingface/datasets/pull/7654
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/7654",
"html_url": "https://github.com/huggingface/datasets/pull/7654",
"diff_url": "https://github.com/huggingface/datasets/pull/7654.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/7654.patch",
"merged_at": null
}
|
ArjunJagdale
| true
|
[] |
3,184,746,093
| 7,653
|
feat(load): fallback to `load_from_disk()` when loading a saved dataset directory
|
open
| 2025-06-28T08:47:36
| 2025-06-28T08:47:36
| null |
https://github.com/huggingface/datasets/pull/7653
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/7653",
"html_url": "https://github.com/huggingface/datasets/pull/7653",
"diff_url": "https://github.com/huggingface/datasets/pull/7653.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/7653.patch",
"merged_at": null
}
|
ArjunJagdale
| true
|
[] |
3,183,372,055
| 7,652
|
Add columns support to JSON loader for selective key filtering
|
open
| 2025-06-27T16:18:42
| 2025-08-18T15:38:36
| null |
https://github.com/huggingface/datasets/pull/7652
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/7652",
"html_url": "https://github.com/huggingface/datasets/pull/7652",
"diff_url": "https://github.com/huggingface/datasets/pull/7652.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/7652.patch",
"merged_at": null
}
|
ArjunJagdale
| true
|
[
"I need this feature right now. It would be great if it could automatically fill in None for non-existent keys instead of reporting an error.",
"> I need this feature right now. It would be great if it could automatically fill in None for non-existent keys instead of reporting an error.\r\n\r\nHi @aihao2000, Just to confirm — I have done the changes you asked for!\r\nIf you pass columns=[\"key1\", \"key2\", \"optional_key\"] to load_dataset(..., columns=...), and any of those keys are missing from the input JSON objects, the loader will automatically fill those columns with None values, instead of raising an error.",
"Hi! any update on this PR?"
] |
3,182,792,775
| 7,651
|
fix: Extended metadata file names for folder_based_builder
|
open
| 2025-06-27T13:12:11
| 2025-06-30T08:19:37
| null |
https://github.com/huggingface/datasets/pull/7651
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/7651",
"html_url": "https://github.com/huggingface/datasets/pull/7651",
"diff_url": "https://github.com/huggingface/datasets/pull/7651.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/7651.patch",
"merged_at": null
}
|
iPieter
| true
|
[] |
3,182,745,315
| 7,650
|
`load_dataset` defaults to json file format for datasets with 1 shard
|
open
| 2025-06-27T12:54:25
| 2025-06-27T12:54:25
| null |
https://github.com/huggingface/datasets/issues/7650
| null |
iPieter
| false
|
[] |
3,181,481,444
| 7,649
|
Enable parallel shard upload in push_to_hub() using num_proc
|
closed
| 2025-06-27T05:59:03
| 2025-07-07T18:13:53
| 2025-07-07T18:13:52
|
https://github.com/huggingface/datasets/pull/7649
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/7649",
"html_url": "https://github.com/huggingface/datasets/pull/7649",
"diff_url": "https://github.com/huggingface/datasets/pull/7649.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/7649.patch",
"merged_at": null
}
|
ArjunJagdale
| true
|
[
"it was already added in https://github.com/huggingface/datasets/pull/7606 actually ^^'",
"Oh sure sure, Closing this one as redundant."
] |
3,181,409,736
| 7,648
|
Fix misleading add_column() usage example in docstring
|
closed
| 2025-06-27T05:27:04
| 2025-07-28T19:42:34
| 2025-07-17T13:14:17
|
https://github.com/huggingface/datasets/pull/7648
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/7648",
"html_url": "https://github.com/huggingface/datasets/pull/7648",
"diff_url": "https://github.com/huggingface/datasets/pull/7648.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/7648.patch",
"merged_at": "2025-07-17T13:14:17"
}
|
ArjunJagdale
| true
|
[
"I believe there are other occurences of cases like this, like select_columns, select, filter, shard and flatten, could you also fix the docstring for them as well before we merge ?",
"Done! @lhoestq! I've updated the docstring examples for the following methods to clarify that they return new datasets instead of modifying in-place:\r\n\r\n- `select_columns`\r\n- `select`\r\n- `filter`\r\n- `shard`\r\n- `flatten`\r\n",
"Also, any suggestions on what kind of issues I should work on next? I tried looking on my own, but I’d be happy if you could assign me something — I’ll do my best!\r\n",
"Hi! any update on this PR?",
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_7648). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.",
"> Also, any suggestions on what kind of issues I should work on next? I tried looking on my own, but I’d be happy if you could assign me something — I’ll do my best!\r\n\r\nHmm. One long lasting issue is the one about being able to download only one split of a dataset (currently `load_dataset()` downloads all the splits, even when only one of train/test/validation is passed with `load_dataset(..., split=split)`)\r\n\r\nThis makes some downloads pretty long, I remember Mario started to work on this in this PR but couldn't finish it: https://github.com/huggingface/datasets/pull/6832\r\n\r\nI think it would be a challenging but pretty impactful addition, and feel free to ping me if you have questions or if I can help. You can also take a look at Mario's first PR which was already in an advanced state. \r\n\r\nLet me know if it sounds like the kind of contribution you're looking for :)",
"Hi @lhoestq, thanks for the thoughtful suggestion!\r\n\r\nThe issue you mentioned sounds like a meaningful problem to tackle, and I’d love to take a closer look at it. I’ll start by reviewing Mario’s PR (#6832), understand what was implemented so far, and what remains to be done.\r\n\r\nIf I have any questions or run into anything unclear, I’ll be sure to reach out. \r\n\r\nI plan to give this a solid try. Thanks again — contributing to Hugging Face is something I truly hope to grow into.\r\n\r\n---\r\nOnce again the the main Issue is to - \r\n\r\n>Allow users to download only the requested split(s) in load_dataset(...), avoiding unnecessary processing/downloading of the full dataset (especially important for large datasets like svhn, squad, glue).\r\n\r\nright?\r\n\r\nAlso I have gone through some related / mentioned issues and PRs - \r\n\r\n- PR #6832 | Mario's main implementation for per-split download logic. Introduces splits param, _available_splits, and conditional logic in download_and_prepare()\r\n\r\n- PR #6639 | Your earlier PR to trigger download_and_prepare() only when splits are missing from disk\r\n\r\n- Issue #4101 / #2538 / #6529 | Real-world user complaints about load_dataset(..., split=...) still downloading everything. Confirm the need for this fix\r\n\r\n- #2249 | Referenced by albertvillanova — old idea of caching only specific splits\r\n\r\n---\r\nIF I am not wrong, #2249 had some limitations - \r\n- Only worked for some dataset scripts where the download dict had split names as keys (like natural_questions).\r\n\r\n- Would fail or cause confusing behavior on datasets with: \r\n1] Custom download keys (TRAIN_DOWNLOAD_URL, val_nyt, metadata)\r\n2] Files passed one by one to dl_manager.download(), not as a dict\r\n\r\n- Reused DownloadConfig, which led to blurry separation between cached_path, DownloadManager, and dataset logic.\r\n\r\n- Needed to modify each dataset's _split_generators() to fully support split filtering.\r\n\r\n- Risked partial or inconsistent caching if logic wasn’t tight.\r\n",
"> Hi @lhoestq, thanks for the thoughtful suggestion!\r\n> \r\n> The issue you mentioned sounds like a meaningful problem to tackle, and I’d love to take a closer look at it. I’ll start by reviewing Mario’s PR (#6832), understand what was implemented so far, and what remains to be done.\r\n> \r\n> If I have any questions or run into anything unclear, I’ll be sure to reach out.\r\n> \r\n> I plan to give this a solid try. Thanks again — contributing to Hugging Face is something I truly hope to grow into.\r\n> \r\n> Once again the the main Issue is to -\r\n> \r\n> > Allow users to download only the requested split(s) in load_dataset(...), avoiding unnecessary processing/downloading of the full dataset (especially important for large datasets like svhn, squad, glue).\r\n> \r\n> right?\r\n> \r\n> Also I have gone through some related / mentioned issues and PRs -\r\n> \r\n> * PR [Support downloading specific splits in `load_dataset` #6832](https://github.com/huggingface/datasets/pull/6832) | Mario's main implementation for per-split download logic. Introduces splits param, _available_splits, and conditional logic in download_and_prepare()\r\n> * PR [Run download_and_prepare if missing splits #6639](https://github.com/huggingface/datasets/pull/6639) | Your earlier PR to trigger download_and_prepare() only when splits are missing from disk\r\n> * Issue [How can I download only the train and test split for full numbers using load_dataset()? #4101](https://github.com/huggingface/datasets/issues/4101) / [Loading partial dataset when debugging #2538](https://github.com/huggingface/datasets/issues/2538) / [Impossible to only download a test split #6529](https://github.com/huggingface/datasets/issues/6529) | Real-world user complaints about load_dataset(..., split=...) still downloading everything. Confirm the need for this fix\r\n> * [Allow downloading/processing/caching only specific splits #2249](https://github.com/huggingface/datasets/pull/2249) | Referenced by albertvillanova — old idea of caching only specific splits\r\n> \r\n> IF I am not wrong, #2249 had some limitations -\r\n> \r\n> * Only worked for some dataset scripts where the download dict had split names as keys (like natural_questions).\r\n> * Would fail or cause confusing behavior on datasets with:\r\n> 1] Custom download keys (TRAIN_DOWNLOAD_URL, val_nyt, metadata)\r\n> 2] Files passed one by one to dl_manager.download(), not as a dict\r\n> * Reused DownloadConfig, which led to blurry separation between cached_path, DownloadManager, and dataset logic.\r\n> * Needed to modify each dataset's _split_generators() to fully support split filtering.\r\n> * Risked partial or inconsistent caching if logic wasn’t tight.\r\n\r\nAlso This one is in the charge now - https://github.com/huggingface/datasets/pull/7706#issue-3271129240"
] |
Hugging Face GitHub Issues
This dataset contains 5000 GitHub issues collected from the huggingface/datasets repository.
It includes issue metadata, titles, user information, timestamps, and full comment threads.
The dataset is suitable for text classification, multi-label classification, and document retrieval tasks.
Dataset Structure
Columns
- `id`: Internal ID of the issue (int64)
- `number`: GitHub issue number (int64)
- `title`: Title of the issue (string)
- `state`: Issue state: open/closed (string)
- `created_at`: Timestamp when the issue was created (timestamp[s])
- `updated_at`: Timestamp when the issue was last updated (timestamp[s])
- `closed_at`: Timestamp when the issue was closed (timestamp[s])
- `html_url`: URL to the GitHub issue (string)
- `pull_request`: Struct containing PR info if the issue is a PR, otherwise null (see the sketch below):
  - `url`: API URL of the PR
  - `html_url`: HTML URL of the PR
  - `diff_url`: Diff URL
  - `patch_url`: Patch URL
  - `merged_at`: Merge timestamp (timestamp[s])
- `user_login`: Login of the issue creator (string)
- `is_pull_request`: Whether the issue is a pull request (bool)
- `comments`: List of comments on the issue (list[string])
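A minimal sketch of reading the nested `pull_request` struct (field access follows the schema above; which pull-request row comes back first is arbitrary):

```python
from datasets import load_dataset

dataset = load_dataset("cicboy/github-issues", split="train")

# `pull_request` is populated only when the row is a PR; plain issues have None.
row = next(ex for ex in dataset if ex["is_pull_request"])
print(row["pull_request"]["html_url"])
print("merged at:", row["pull_request"]["merged_at"])
```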
Splits
- `train`: 5000 examples
Supported Tasks
- Text Classification: Predict labels or categories of issues
- Multi-label Classification: Issues may have multiple labels
- Document Retrieval: Retrieve relevant issues based on a query (see the sketch below)
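As a minimal illustration of the retrieval task, the sketch below does a plain keyword match over issue titles. The query string is an arbitrary example, and a real retrieval setup would more likely use BM25 or embeddings; this only shows how to scan the dataset with `filter`.

```python
from datasets import load_dataset

dataset = load_dataset("cicboy/github-issues", split="train")

query = "load_dataset"  # example query, not part of the dataset

# Keep issues (not PRs) whose title mentions the query.
hits = dataset.filter(
    lambda ex: not ex["is_pull_request"] and query.lower() in ex["title"].lower()
)
print(hits.num_rows, "matching issues")
for title in hits["title"][:5]:
    print("-", title)
```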
Languages
- English
Dataset Creation
The dataset was collected using the GitHub API, including all issue metadata and comments.
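For reference, here is a minimal sketch of that kind of collection against the GitHub REST API. The repository name, two-page limit, and unauthenticated requests are illustrative assumptions, not the exact script used to build this dataset (unauthenticated calls are rate-limited to 60 requests per hour):

```python
import requests

REPO = "huggingface/datasets"  # assumed source repository
API = f"https://api.github.com/repos/{REPO}/issues"

issues, page = [], 1
while page <= 2:  # fetch two pages as a sample; extend for a full crawl
    batch = requests.get(
        API,
        params={"state": "all", "per_page": 100, "page": page},
        headers={"Accept": "application/vnd.github+json"},
    ).json()
    if not batch:
        break
    issues.extend(batch)
    page += 1

# Comments are fetched per issue via its `comments_url` field.
first = issues[0]
comments = requests.get(first["comments_url"]).json()
print(first["number"], first["title"], "->", len(comments), "comments")
```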
Usage Example
```python
from datasets import load_dataset

dataset = load_dataset("cicboy/github-issues", split="train")

# Preview the first 5 examples. Note that slicing (dataset[:5]) returns a
# dict of columns rather than rows, so select rows and iterate instead.
for example in dataset.select(range(5)):
    print(f"Issue #{example['number']}: {example['title']}")
    print(f"Created at: {example['created_at']}, Closed at: {example['closed_at']}")
    print(f"User: {example['user_login']}, PR: {example['is_pull_request']}")
    print(f"Comments: {example['comments'][:3]}")  # first 3 comments
    print()
```
Citation

```bibtex
@misc{cicboy_github_issues,
  author       = {Cicboy},
  title        = {Hugging Face GitHub Issues Dataset},
  year         = {2025},
  howpublished = {\url{https://huggingface.co/datasets/cicboy/github-issues}}
}
```