| id (int64, 953M–3.35B) | number (int64, 2.72k–7.75k) | title (string, 1–290 chars) | state (string, 2 classes) | created_at (timestamp[s], 2021-07-26 12:21:17 – 2025-08-23 00:18:43) | updated_at (timestamp[s], 2021-07-26 13:27:59 – 2025-08-23 12:34:39) | closed_at (timestamp[s], 2021-07-26 13:27:59 – 2025-08-20 16:35:55, nullable) | html_url (string, 49–51 chars) | pull_request (dict) | user_login (string, 3–26 chars) | is_pull_request (bool, 2 classes) | comments (list, 0–30 items) |
|---|---|---|---|---|---|---|---|---|---|---|---|
3,027,024,285
| 7,544
|
Add try_original_type to DatasetDict.map
|
closed
| 2025-04-29T04:39:44
| 2025-05-05T14:42:49
| 2025-05-05T14:42:49
|
https://github.com/huggingface/datasets/pull/7544
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/7544",
"html_url": "https://github.com/huggingface/datasets/pull/7544",
"diff_url": "https://github.com/huggingface/datasets/pull/7544.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/7544.patch",
"merged_at": "2025-05-05T14:42:49"
}
|
yoshitomo-matsubara
| true
|
[
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_7544). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.",
"Sure! I just committed the changes",
"@lhoestq \r\nLet me know if there are other things to do before merge or other places to add `try_original_type` argument "
] |
3,026,867,706
| 7,543
|
The memory-disk mapping failure issue of the map function(resolved, but there are some suggestions.)
|
closed
| 2025-04-29T03:04:59
| 2025-04-30T02:22:17
| 2025-04-30T02:22:17
|
https://github.com/huggingface/datasets/issues/7543
| null |
jxma20
| false
|
[] |
3,025,054,630
| 7,542
|
set dev version
|
closed
| 2025-04-28T14:03:48
| 2025-04-28T14:08:37
| 2025-04-28T14:04:00
|
https://github.com/huggingface/datasets/pull/7542
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/7542",
"html_url": "https://github.com/huggingface/datasets/pull/7542",
"diff_url": "https://github.com/huggingface/datasets/pull/7542.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/7542.patch",
"merged_at": "2025-04-28T14:04:00"
}
|
lhoestq
| true
|
[
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_7542). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update."
] |
3,025,045,919
| 7,541
|
release: 3.5.1
|
closed
| 2025-04-28T14:00:59
| 2025-04-28T14:03:38
| 2025-04-28T14:01:54
|
https://github.com/huggingface/datasets/pull/7541
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/7541",
"html_url": "https://github.com/huggingface/datasets/pull/7541",
"diff_url": "https://github.com/huggingface/datasets/pull/7541.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/7541.patch",
"merged_at": "2025-04-28T14:01:54"
}
|
lhoestq
| true
|
[
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_7541). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update."
] |
3,024,862,966
| 7,540
|
support pyarrow 20
|
closed
| 2025-04-28T13:01:11
| 2025-04-28T13:23:53
| 2025-04-28T13:23:52
|
https://github.com/huggingface/datasets/pull/7540
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/7540",
"html_url": "https://github.com/huggingface/datasets/pull/7540",
"diff_url": "https://github.com/huggingface/datasets/pull/7540.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/7540.patch",
"merged_at": "2025-04-28T13:23:52"
}
|
lhoestq
| true
|
[
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_7540). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update."
] |
3,023,311,163
| 7,539
|
Fix IterableDataset state_dict shard_example_idx counting
|
closed
| 2025-04-27T20:41:18
| 2025-05-06T14:24:25
| 2025-05-06T14:24:24
|
https://github.com/huggingface/datasets/pull/7539
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/7539",
"html_url": "https://github.com/huggingface/datasets/pull/7539",
"diff_url": "https://github.com/huggingface/datasets/pull/7539.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/7539.patch",
"merged_at": null
}
|
Harry-Yang0518
| true
|
[
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_7539). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.",
"Hi ! FYI I made a PR to fix https://github.com/huggingface/datasets/issues/7538 and it also fixed https://github.com/huggingface/datasets/issues/7475, so if I'm not mistaken this PR is not needed anymore"
] |
3,023,280,056
| 7,538
|
`IterableDataset` drops samples when resuming from a checkpoint
|
closed
| 2025-04-27T19:34:49
| 2025-05-06T14:04:05
| 2025-05-06T14:03:42
|
https://github.com/huggingface/datasets/issues/7538
| null |
mariosasko
| false
|
[
"Thanks for reporting ! I fixed the issue using RebatchedArrowExamplesIterable before the formatted iterable"
] |
3,018,792,966
| 7,537
|
`datasets.map(..., num_proc=4)` multi-processing fails
|
open
| 2025-04-25T01:53:47
| 2025-05-06T13:12:08
| null |
https://github.com/huggingface/datasets/issues/7537
| null |
faaany
| false
|
[
"related: https://github.com/huggingface/datasets/issues/7510\n\nwe need to do more tests to see if latest `dill` is deterministic"
] |
3,018,425,549
| 7,536
|
[Errno 13] Permission denied: on `.incomplete` file
|
closed
| 2025-04-24T20:52:45
| 2025-05-06T13:05:01
| 2025-05-06T13:05:01
|
https://github.com/huggingface/datasets/issues/7536
| null |
ryan-clancy
| false
|
[
"It must be an issue with umask being used by multiple threads indeed. Maybe we can try to make a thread safe function to apply the umask (using filelock for example)",
"> It must be an issue with umask being used by multiple threads indeed. Maybe we can try to make a thread safe function to apply the umask (using filelock for example)\n\n@lhoestq is this something which can go in a 3.5.1 release?",
"Yes for sure",
"@lhoestq - can you take a look at https://github.com/huggingface/datasets/pull/7547/?"
] |
3,018,289,872
| 7,535
|
Change dill version in requirements
|
open
| 2025-04-24T19:44:28
| 2025-05-19T14:51:29
| null |
https://github.com/huggingface/datasets/pull/7535
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/7535",
"html_url": "https://github.com/huggingface/datasets/pull/7535",
"diff_url": "https://github.com/huggingface/datasets/pull/7535.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/7535.patch",
"merged_at": null
}
|
JGrel
| true
|
[
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_7535). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update."
] |
3,017,259,407
| 7,534
|
TensorFlow RaggedTensor Support (batch-level)
|
open
| 2025-04-24T13:14:52
| 2025-06-30T17:03:39
| null |
https://github.com/huggingface/datasets/issues/7534
| null |
Lundez
| false
|
[
"Keras doesn't support other inputs other than tf.data.Dataset objects ? it's a bit painful to have to support and maintain this kind of integration\n\nIs there a way to use a `datasets.Dataset` with outputs formatted as tensors / ragged tensors instead ? like in https://huggingface.co/docs/datasets/use_with_tensorflow#dataset-format",
"I'll give it a try when I get the time. But quite sure I already tested the `with_format` approach.\n\nKeras when using TF as backend converts the datasets into `tf.data.Dataset`, much like you do.",
"Hi @Lundez! Thanks for raising this — very valid point, especially for Object Detection use-cases.\n\nYou're right that np_get_batch currently enforces numpy batching, which breaks RaggedTensor support due to its inability to handle nested structures. This likely needs a redesign to allow TensorFlow-native batching in specific formats.\n\nBefore diving into a code change though, could you confirm:\n\nDoes `.with_format(\"tensorflow\")` (without batching) return a `tf.data.Dataset` that works if batching is deferred to `model.fit()`?\n\nHave you tried something like:\n\n```python\ntf_dataset = dataset.with_format(\"tensorflow\").to_tf_dataset(\n columns=[\"image\", \"labels\"],\n label_cols=None,\n batch_size=None # No batching here\n)\nmodel.fit(tf_dataset.batch(BATCH_SIZE)) # Use RaggedTensor batching here\n```\n\nIf this works, it might be worth updating the documentation rather than changing batching logic inside datasets itself.\n\nThat said, happy to explore changes if batching needs to be supported natively for RaggedTensor. Just flagging that it’d require some careful design due to existing numpy assumptions.",
"Hi, we've had to move on for now. \n\nWe have actually also moved to dense tensors to make it possible to xla complie the training. \n\nBut I'll check when I'm back from vacation which is far into the future. \n\nThanks"
] |
3,015,075,086
| 7,533
|
Add custom fingerprint support to `from_generator`
|
open
| 2025-04-23T19:31:35
| 2025-08-14T19:41:25
| null |
https://github.com/huggingface/datasets/pull/7533
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/7533",
"html_url": "https://github.com/huggingface/datasets/pull/7533",
"diff_url": "https://github.com/huggingface/datasets/pull/7533.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/7533.patch",
"merged_at": null
}
|
simonreise
| true
|
[
"This is great !\r\n\r\nWhat do you think of passing `config_id=` directly to the builder instead of just the suffix ? This would be a power user argument though, or for internal use. And in from_generator the new argument can be `fingerprint=` as in `Dataset.__init__()`\r\n\r\nThe `config_id` can be defined using something like `config_id = \"default-fingerprint=\" + fingerprint`\r\n\r\nI feel ike this could make the Dataset API more coherent if we avoid introducing a new argument while we can juste use `fingerprint=`",
"@lhoestq could you please re-review the changes I made?",
"@lhoestq ping\r\nI also added a simple test for the `fingerprint` parameter"
] |
3,009,546,204
| 7,532
|
Document the HF_DATASETS_CACHE environment variable in the datasets cache documentation
|
closed
| 2025-04-22T00:23:13
| 2025-05-06T15:54:38
| 2025-05-06T15:54:38
|
https://github.com/huggingface/datasets/pull/7532
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/7532",
"html_url": "https://github.com/huggingface/datasets/pull/7532",
"diff_url": "https://github.com/huggingface/datasets/pull/7532.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/7532.patch",
"merged_at": "2025-05-06T15:54:38"
}
|
Harry-Yang0518
| true
|
[
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_7532). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.",
"Your clarification in your comment at https://github.com/huggingface/datasets/issues/7480#issuecomment-2833640084 sounds great, would you like to update this PR to include it ?",
"Hi @lhoestq, I’ve updated the documentation to reflect the clarifications discussed in #7480. Let me know if anything else is needed!\r\n"
] |
3,008,914,887
| 7,531
|
Deepspeed reward training hangs at end of training with Dataset.from_list
|
open
| 2025-04-21T17:29:20
| 2025-06-29T06:20:45
| null |
https://github.com/huggingface/datasets/issues/7531
| null |
Matt00n
| false
|
[
"Hi ! How big is the dataset ? if you load it using `from_list`, the dataset lives in memory and has to be copied to every gpu process, which can be slow.\n\nIt's fasted if you load it from JSON files from disk, because in that case the dataset in converted to Arrow and loaded from disk using memory mapping. Memory mapping allows to quickly reload the dataset in other processes.\n\nMaybe we can change `from_list` and other methods to always use the disk though, instead of loading in memory, WDYT ?",
"Thanks for raising this! As lhoestq mentioned, the root cause seems to be that `Dataset.from_list()` creates an in-memory dataset, which causes issues with DeepSpeed across multiple GPUs due to the cost of copying that memory to all processes.\n\nUsing `load_dataset(\"json\", ...)` works because Hugging Face datasets then convert the data to Apache Arrow and use **memory mapping**, which avoids this copying overhead.\n\nPossible improvement could be to add an option like `use_disk=True` to `Dataset.from_list()` to allow users to write to Arrow + memory-map the dataset, enabling compatibility with multi-process settings like DeepSpeed, while keeping the current fast behavior by default.\n\nWould love to hear if this direction sounds acceptable before attempting a PR.\n"
] |
3,007,452,499
| 7,530
|
How to solve "Spaces stuck in Building" problems
|
closed
| 2025-04-21T03:08:38
| 2025-04-22T07:49:52
| 2025-04-22T07:49:52
|
https://github.com/huggingface/datasets/issues/7530
| null |
ghost
| false
|
[
"I'm facing the same issue—Space stuck in \"Building\" even after restart and Factory rebuild. Any fix?\n",
"> I'm facing the same issue—Space stuck in \"Building\" even after restart and Factory rebuild. Any fix?\n\nAlso see https://github.com/huggingface/huggingface_hub/issues/3019",
"I'm facing the same issue. The build fails with the same error, and restarting won't help. Is there a fix or ETA? "
] |
3,007,118,969
| 7,529
|
audio folder builder cannot detect custom split name
|
open
| 2025-04-20T16:53:21
| 2025-04-20T16:53:21
| null |
https://github.com/huggingface/datasets/issues/7529
| null |
phineas-pta
| false
|
[] |
3,006,433,485
| 7,528
|
Data Studio Error: Convert JSONL incorrectly
|
open
| 2025-04-19T13:21:44
| 2025-05-06T13:18:38
| null |
https://github.com/huggingface/datasets/issues/7528
| null |
zxccade
| false
|
[
"Hi ! Your JSONL file is incompatible with Arrow / Parquet. Indeed in Arrow / Parquet every dict should have the same keys, while in your dataset the bboxes have varying keys.\n\nThis causes the Data Studio to treat the bboxes as if each row was missing the keys from other rows.\n\nFeel free to take a look at the docs on object segmentation to see how to format a dataset with bboxes: https://huggingface.co/docs/datasets/object_detection"
] |
3,005,242,422
| 7,527
|
Auto-merge option for `convert-to-parquet`
|
closed
| 2025-04-18T16:03:22
| 2025-07-18T19:09:03
| 2025-07-18T19:09:03
|
https://github.com/huggingface/datasets/issues/7527
| null |
klamike
| false
|
[
"Alternatively, there could be an option to switch from submitting PRs to just committing changes directly to `main`.",
"Why not, I'd be in favor of `--merge-pull-request` to call `HfApi().merge_pull_request()` at the end of the conversion :) feel free to open a PR if you'd like",
"#self-assign",
"Closing since convert to parquet has been removed... https://github.com/huggingface/datasets/pull/7592#issuecomment-3073053138"
] |
3,005,107,536
| 7,526
|
Faster downloads/uploads with Xet storage
|
open
| 2025-04-18T14:46:42
| 2025-05-12T12:09:09
| null |
https://github.com/huggingface/datasets/issues/7526
| null |
lhoestq
| false
|
[] |
3,003,032,248
| 7,525
|
Fix indexing in split commit messages
|
closed
| 2025-04-17T17:06:26
| 2025-04-28T14:26:27
| 2025-04-28T14:26:27
|
https://github.com/huggingface/datasets/pull/7525
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/7525",
"html_url": "https://github.com/huggingface/datasets/pull/7525",
"diff_url": "https://github.com/huggingface/datasets/pull/7525.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/7525.patch",
"merged_at": null
}
|
klamike
| true
|
[
"Hi ! this is expected and is coherent with other naming conventions in `datasets` such as parquet shards naming"
] |
3,002,067,826
| 7,524
|
correct use with polars example
|
closed
| 2025-04-17T10:19:19
| 2025-04-28T13:48:34
| 2025-04-28T13:48:33
|
https://github.com/huggingface/datasets/pull/7524
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/7524",
"html_url": "https://github.com/huggingface/datasets/pull/7524",
"diff_url": "https://github.com/huggingface/datasets/pull/7524.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/7524.patch",
"merged_at": "2025-04-28T13:48:33"
}
|
SiQube
| true
|
[] |
2,999,616,692
| 7,523
|
mention av in video docs
|
closed
| 2025-04-16T13:11:12
| 2025-04-16T13:13:45
| 2025-04-16T13:11:42
|
https://github.com/huggingface/datasets/pull/7523
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/7523",
"html_url": "https://github.com/huggingface/datasets/pull/7523",
"diff_url": "https://github.com/huggingface/datasets/pull/7523.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/7523.patch",
"merged_at": "2025-04-16T13:11:42"
}
|
lhoestq
| true
|
[
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_7523). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update."
] |
2,998,169,017
| 7,522
|
Preserve formatting in concatenated IterableDataset
|
closed
| 2025-04-16T02:37:33
| 2025-05-19T15:07:38
| 2025-05-19T15:07:37
|
https://github.com/huggingface/datasets/pull/7522
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/7522",
"html_url": "https://github.com/huggingface/datasets/pull/7522",
"diff_url": "https://github.com/huggingface/datasets/pull/7522.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/7522.patch",
"merged_at": "2025-05-19T15:07:37"
}
|
francescorubbo
| true
|
[
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_7522). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update."
] |
2,997,666,366
| 7,521
|
fix: Image Feature in Datasets Library Fails to Handle bytearray Objects from Spark DataFrames (#7517)
|
closed
| 2025-04-15T21:23:58
| 2025-05-07T14:17:29
| 2025-05-07T14:17:29
|
https://github.com/huggingface/datasets/pull/7521
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/7521",
"html_url": "https://github.com/huggingface/datasets/pull/7521",
"diff_url": "https://github.com/huggingface/datasets/pull/7521.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/7521.patch",
"merged_at": "2025-05-07T14:17:29"
}
|
giraffacarp
| true
|
[
"@lhoestq let me know if you prefer to change the spark iterator so it outputs `bytes`",
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_7521). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update."
] |
2,997,422,044
| 7,520
|
Update items in the dataset without `map`
|
open
| 2025-04-15T19:39:01
| 2025-04-19T18:47:46
| null |
https://github.com/huggingface/datasets/issues/7520
| null |
mashdragon
| false
|
[
"Hello!\n\nHave you looked at `Dataset.shard`? [Docs](https://huggingface.co/docs/datasets/en/process#shard)\n\nUsing this method you could break your dataset in N shards. Apply `map` on each shard and concatenate them back."
] |
2,996,458,961
| 7,519
|
pdf docs fixes
|
closed
| 2025-04-15T13:35:56
| 2025-04-15T13:38:31
| 2025-04-15T13:36:03
|
https://github.com/huggingface/datasets/pull/7519
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/7519",
"html_url": "https://github.com/huggingface/datasets/pull/7519",
"diff_url": "https://github.com/huggingface/datasets/pull/7519.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/7519.patch",
"merged_at": "2025-04-15T13:36:03"
}
|
lhoestq
| true
|
[
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_7519). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update."
] |
2,996,141,825
| 7,518
|
num_proc parallelization works only for first ~10s.
|
open
| 2025-04-15T11:44:03
| 2025-04-15T13:12:13
| null |
https://github.com/huggingface/datasets/issues/7518
| null |
pshishodiaa
| false
|
[
"Hi, can you check if the processes are still alive ? It's a bit weird because `datasets` does check if processes crash and return an error in that case",
"Thank you for reverting quickly. I digged a bit, and realized my disk's IOPS is also limited - which is causing this. will check further and report if it's an issue of hf datasets' side or mine. "
] |
2,996,106,077
| 7,517
|
Image Feature in Datasets Library Fails to Handle bytearray Objects from Spark DataFrames
|
closed
| 2025-04-15T11:29:17
| 2025-05-07T14:17:30
| 2025-05-07T14:17:30
|
https://github.com/huggingface/datasets/issues/7517
| null |
giraffacarp
| false
|
[
"Hi ! The `Image()` type accepts either\n- a `bytes` object containing the image bytes\n- a `str` object containing the image path\n- a `PIL.Image` object\n\nbut it doesn't support `bytearray`, maybe you can convert to `bytes` beforehand ?",
"Hi @lhoestq, \nconverting to bytes is certainly possible and would work around the error. However, the core issue is that `Dataset` and `IterableDataset` behave differently with the features.\n\nI’d be happy to work on a fix for this issue.",
"I see, that's an issue indeed. Feel free to ping me if I can help with reviews or any guidance\n\nIf it can help, the code that takes a Spark DataFrame and iterates on the rows for `IterableDataset` is here: \n\nhttps://github.com/huggingface/datasets/blob/6a96bf313085d7538a999b929a550e14e1d406c9/src/datasets/packaged_modules/spark/spark.py#L49-L53",
"#self-assign"
] |
2,995,780,283
| 7,516
|
unsloth/DeepSeek-R1-Distill-Qwen-32B server error
|
closed
| 2025-04-15T09:26:53
| 2025-04-15T09:57:26
| 2025-04-15T09:57:26
|
https://github.com/huggingface/datasets/issues/7516
| null |
Editor-1
| false
|
[] |
2,995,082,418
| 7,515
|
`concatenate_datasets` does not preserve Pytorch format for IterableDataset
|
closed
| 2025-04-15T04:36:34
| 2025-05-19T15:07:38
| 2025-05-19T15:07:38
|
https://github.com/huggingface/datasets/issues/7515
| null |
francescorubbo
| false
|
[
"Hi ! Oh indeed it would be cool to return the same format in that case. Would you like to submit a PR ? The function that does the concatenation is here:\n\nhttps://github.com/huggingface/datasets/blob/90e5bf8a8599b625d6103ee5ac83b98269991141/src/datasets/iterable_dataset.py#L3375-L3380",
"Thank you for the pointer, @lhoestq ! See #7522 "
] |
2,994,714,923
| 7,514
|
Do not hash `generator` in `BuilderConfig.create_config_id`
|
closed
| 2025-04-15T01:26:43
| 2025-04-23T11:55:55
| 2025-04-15T16:27:51
|
https://github.com/huggingface/datasets/pull/7514
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/7514",
"html_url": "https://github.com/huggingface/datasets/pull/7514",
"diff_url": "https://github.com/huggingface/datasets/pull/7514.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/7514.patch",
"merged_at": null
}
|
simonreise
| true
|
[] |
2,994,678,437
| 7,513
|
MemoryError while creating dataset from generator
|
open
| 2025-04-15T01:02:02
| 2025-04-23T19:37:08
| null |
https://github.com/huggingface/datasets/issues/7513
| null |
simonreise
| false
|
[
"Upd: created a PR that can probably solve the problem: #7514",
"Hi ! We need to take the generator into account for the cache. The generator is hashed to make the dataset fingerprint used by the cache. This way you can reload the Dataset from the cache without regenerating in subsequent `from_generator` calls.\n\nMaybe instead of removing generator from the hasher input, we can let users pass their own Dataset fingerprint to `from_generator`, and if it's specified we don't need to hash anything",
"Upd: I successfully generated a dataset from my large geospatial data with `generator` excluded from hashing and saved it to disk without running into memory errors. So, it looks like there are no other bottlenecks in dataset generation in my case\n\nMaybe letting users pass their own fingerprint to skip hashing can be a great solution to that issue!",
"@lhoestq I tried to implement user-defined dataset fingerprint in #7533 . Am I doing it right?"
] |
2,994,043,544
| 7,512
|
.map() fails if function uses pyvista
|
open
| 2025-04-14T19:43:02
| 2025-04-14T20:01:53
| null |
https://github.com/huggingface/datasets/issues/7512
| null |
el-hult
| false
|
[
"I found a similar (?) issue in https://github.com/huggingface/datasets/issues/6435, where someone had issues with forks and CUDA. According to https://huggingface.co/docs/datasets/main/en/process#multiprocessing we should do \n\n```\nfrom multiprocess import set_start_method\nset_start_method(\"spawn\")\n```\n\nto avoid the fork. The updated code\n\n```python\nimport numpy as np\nimport pyvista as pv\nimport datasets\nimport multiprocess\n\ndata = [{\"coords\": np.random.rand(5, 3)} for _ in range(3)]\n\ndef render_point(example):\n plotter = pv.Plotter(off_screen=True)\n cloud = pv.PolyData(example[\"coords\"])\n plotter.add_mesh(cloud)\n img = plotter.screenshot(return_img=True)\n return {\"image\": img}\n\n\n# breaks if num_proc>1\nmultiprocess.set_start_method(\"spawn\")\nds = datasets.Dataset.from_list(data).map(render_point, num_proc=2)\n```\n\ninstead fails with `TypeError: fork_exec() takes exactly 23 arguments (21 given)` which also seems like a bug to me."
] |
2,992,131,117
| 7,510
|
Incompatibile dill version (0.3.9) in datasets 2.18.0 - 3.5.0
|
open
| 2025-04-14T07:22:44
| 2025-05-19T14:54:04
| null |
https://github.com/huggingface/datasets/issues/7510
| null |
JGrel
| false
|
[
"Hi ! We can bump `dill` to 0.3.9 if we make sure it's deterministic and doesn't break the caching mechanism in `datasets`.\n\nWould you be interested in opening a PR ? Then we can run the CI to see if it works",
"Hi!. Yeah I can do it. Should I make any changes besides dill versions?",
"There are probably some usage of internal functions from `dill` that we'll need to update in `datasets`\n\nIf you run `pytest tests/test_fingerprint.py` you should already have a good idea of what works and what doesn't.\nBut feel free to open a PR anyway, this way we can run the full CI and see the results\n",
"Hi, sorry for no response from my side. I will try to do it today.",
"Created pull request: [LINK](https://github.com/huggingface/datasets/pull/7535)\nTried to run tests by using command you have send and got few errors:\n\n",
"Thanks for running the test ! So it appears we have two issues to fix:\n1. 'log' is not defined: it seems an internal `dill` function has disappeared, so we should adapt the `datasets` code that was using it\n2. there are some hashes mismatches, which means `dill` doesn't seem to output the same dump when passed the same ipython function twice, or the same function but located at a different line in a python file"
] |
2,991,484,542
| 7,509
|
Dataset uses excessive memory when loading files
|
open
| 2025-04-13T21:09:49
| 2025-04-28T15:18:55
| null |
https://github.com/huggingface/datasets/issues/7509
| null |
avishaiElmakies
| false
|
[
"small update: I converted the jsons to parquet and it now works well with 32 proc and the same node. \nI still think this needs to be understood, since json is a very popular and easy-to-use format. ",
"Hi ! The JSON loader loads full files in memory, unless they are JSON Lines. In this case it iterates on the JSON Lines in a memory efficient manner.\n\nI know there is an `ijson` package that works similarly but for general JSON files, maybe it can help and remove the need to load full JSON files in memory",
"Hi, i understand that json files are probably loaded into memory to read them but aren't they released when we write all the file content into arrow or something? ",
"Yes correct, the JSON data is only in memory during the conversion to Arrow. Then, the data is memory mapped from you disk",
"so the json files are all loaded into memory before converting to arrow? or do they convert 1 json at a time and then they are realeased?\nI don't understand how 200GB worth of jsons fill a 378GB node's memory.",
"Each process converts one JSON file at at time, So the total memory usage is num_proc * json_file_size * overhead, where overhead can be around 2 or 3 for the conversion.\n\nSo it's indeed surprising that you run out of memory. Is the dataset available somewhere ? or a subset maybe ?",
"This is a tokenized dataset I created for training a speech-language model with a few features (so it is not private but not easily available). I can send/upload a shard or two and you can copy them however many times you want so you can debug. this should give you something comparable to what I have, but will be easier than creating it yourself. so if you want that, let me know :)",
"Maybe you can measure the memory usage when loading 1 file with num_proc=1 ? This should already be helpful.\n\nMemory usage for tokenized data can be bigger than just text, for example the tokens type can be inferred as int64 and the lists offsets are int32",
"OK, I will try to do this in the near future. I am a little swamped at the moment. do you have a preferred tool?\n\nalso My data is just list of ints, there is no offsets",
"> so the json files are all loaded into memory before converting to arrow? or do they convert 1 json at a time and then they are realeased? I don't understand how 200GB worth of jsons fill a 378GB node's memory.\n\nHello! Is your query solved? I have the same confusion and would like to ask you for advice",
"no, the issue is still present. I converted the json files to parquet, but json seems to have a problem.\n\nUnfortunately i didn't have the time to try and profile the memory usage for 1 file. So if you want to do that, it will be great! ",
"My dataset is about image descriptions, stored as a 20MB JSON file on disk. However, I need to use the map function to preprocess the images, and after computation, the preprocessed dataset amounts to 70GB. My server has 122GB of RAM, but it still runs out of memory (OOM). This issue is very similar to yours.\n\nAfter some research during this period, I found that the map function does not perform disk mapping in memory while working. Using the command find /DataB/mjx -type f -mmin -10, I discovered that no temporary cache files were modified or created during program execution, meaning the data was continuously loaded into memory. After several attempts, I found that adding the parameter cache_file_name=\"your/path\" to the map function can enable memory-disk mapping. This is a strange setting, but after adding this parameter, the memory usage dropped to only 7GB, indicating that once the writer_batch_size worth of data is read into the disk cache, the corresponding data in memory is released. However, I don't think this is the intended behavior by the author, as memory-disk caching should have been enabled without needing this additional parameter.\n\nFinally, here is my map function call. I hope it helps you.\ntrain_data = train_data.map(process_fun, cache_file_name='./cache_file', remove_columns=['image_name', 'question_type', 'concern', 'question', 'candidate_answers', 'answer'])"
] |
2,986,612,934
| 7,508
|
Iterating over Image feature columns is extremely slow
|
open
| 2025-04-10T19:00:54
| 2025-04-15T17:57:08
| null |
https://github.com/huggingface/datasets/issues/7508
| null |
sohamparikh
| false
|
[
"Hi ! Could it be because the `Image()` type in dataset does `image = Image.open(image_path)` and also `image.load()` which actually loads the image data in memory ? This is needed to avoid too many open files issues, see https://github.com/huggingface/datasets/issues/3985",
"Yes, that seems to be it. For my purposes, I've cast the column to `Image(decode=False)`, and only load the images when necessary, which is much much faster"
] |
2,984,309,806
| 7,507
|
Front-end statistical data quantity deviation
|
open
| 2025-04-10T02:51:38
| 2025-04-15T12:54:51
| null |
https://github.com/huggingface/datasets/issues/7507
| null |
rangehow
| false
|
[
"Hi ! the format of this dataset is not supported by the Dataset Viewer. It looks like this dataset was saved using `save_to_disk()` which is meant for local storage / easy reload without compression, not for sharing online."
] |
2,981,687,450
| 7,506
|
HfHubHTTPError: 429 Client Error: Too Many Requests for URL when trying to access Fineweb-10BT on 4A100 GPUs using SLURM
|
open
| 2025-04-09T06:32:04
| 2025-06-29T06:04:59
| null |
https://github.com/huggingface/datasets/issues/7506
| null |
calvintanama
| false
|
[
"Hi ! make sure to be logged in with your HF account (e.g. using `huggingface-cli login` or passing `token=` to `load_dataset()`), otherwise you'll get rate limited at one point",
"Hey @calvintanama! Just building on what @lhoestq mentioned above — I ran into similar issues in multi-GPU SLURM setups and here’s what worked for me...\n\nThis 429 Client Error: Too Many Requests comes from the Hugging Face Hub’s rate limiting, which restricts unauthenticated or high-volume access (especially in multi-GPU/distributed setups like SLURM).\n\nAs @lhoestq mentioned, the solution is to make sure you are authenticated with the Hugging Face Hub in every process (especially on each GPU/worker node). You can do this by:\n\nRunning huggingface-cli login (interactive)\n\nOr passing your token explicitly:\n\n```python\nload_dataset(\"HuggingFaceFW/fineweb\", token=\"hf_your_token_here\")\n# If you’re using a SLURM cluster, ensure every node/process receives access to the token via env var:\n```\n\n```bash\nexport HF_TOKEN=hf_your_token_here\n```\n\nand then in Python:\n```python\nfrom datasets import load_dataset\nload_dataset(\"HuggingFaceFW/fineweb\", token=os.environ[\"HF_TOKEN\"])\n```\nAlso consider downloading the dataset beforehand with load_dataset(..., streaming=False) and storing it locally if you're repeatedly training with it."
] |
2,979,926,156
| 7,505
|
HfHubHTTPError: 403 Forbidden: None. Cannot access content at: https://hf.co/api/s3proxy
|
open
| 2025-04-08T14:08:40
| 2025-04-08T14:08:40
| null |
https://github.com/huggingface/datasets/issues/7505
| null |
hissain
| false
|
[] |
2,979,410,641
| 7,504
|
BuilderConfig ParquetConfig(...) doesn't have a 'use_auth_token' key.
|
open
| 2025-04-08T10:55:03
| 2025-06-28T09:18:09
| null |
https://github.com/huggingface/datasets/issues/7504
| null |
tteguayco
| false
|
[
"I encountered the same error, have you resolved it?",
"Hi ! `use_auth_token` has been deprecated and removed some time ago. You should use `token` instead in `load_dataset()`",
"Hi @lhoestq, I'd like to take this up.\n\nAs discussed in #7504, the issue arises when `use_auth_token` is passed to `load_dataset`, which forwards it to the config's `__init__`, where it's no longer a valid key.\n\nTo address this, I’ll intercept and strip `use_auth_token` inside `load_dataset()` (similar to how we handle `trust_remote_code`). A warning will be logged, and users will be encouraged to use `token` instead.\n\nThis avoids breaking older scripts that still use `use_auth_token`."
] |
2,978,512,625
| 7,503
|
Inconsistency between load_dataset and load_from_disk functionality
|
open
| 2025-04-08T03:46:22
| 2025-06-28T08:51:16
| null |
https://github.com/huggingface/datasets/issues/7503
| null |
zzzzzec
| false
|
[
"Hi ! you can find more info here: https://github.com/huggingface/datasets/issues/5044#issuecomment-1263714347\n\n> What's the recommended approach for this use case? Should I manually process my gsm8k-new dataset to make it compatible with load_dataset? Is there a standard way to convert between these formats?\n\nYou can use push_to_hub() or to_parquet() for example",
"Hi @zzzzzec & @lhoestq 👋\n\nThanks for raising and discussing this — I've submitted a patch that improves this exact scenario."
] |
2,977,453,814
| 7,502
|
`load_dataset` of size 40GB creates a cache of >720GB
|
closed
| 2025-04-07T16:52:34
| 2025-04-15T15:22:12
| 2025-04-15T15:22:11
|
https://github.com/huggingface/datasets/issues/7502
| null |
pietrolesci
| false
|
[
"Hi ! Parquet is a compressed format. When you load a dataset, it uncompresses the Parquet data into Arrow data on your disk. That's why you can indeed end up with 720GB of uncompressed data on disk. The uncompression is needed to enable performant dataset objects (especially for random access).\n\nTo save some storage you can instead load the dataset with `streaming=True`. This way you get an `IterableDataset` that reads the Parquet data iteratively without ever writing to disk.\n\nPS: `ReadInstruction` might not be implemented for `streaming=True`, if it's the case you can use `ds.take()` and `ds.skip()` instead",
"Hi @lhoestq, thanks a lot for your answer. This makes perfect sense. I will try using the streaming mode. Closing the issue."
] |
2,976,721,014
| 7,501
|
Nested Feature raises ArrowNotImplementedError: Unsupported cast using function cast_struct
|
closed
| 2025-04-07T12:35:39
| 2025-04-07T12:43:04
| 2025-04-07T12:43:03
|
https://github.com/huggingface/datasets/issues/7501
| null |
yaner-here
| false
|
[
"Solved by the default `load_dataset(features)` parameters. Do not use `Sequence` for the `list` in `list[any]` json schema, just simply use `[]`. For example, `\"b\": Sequence({...})` fails but `\"b\": [{...}]` works fine."
] |
2,974,841,921
| 7,500
|
Make `with_format` correctly indicate that a `Dataset` is compatible with PyTorch's `Dataset` class
|
open
| 2025-04-06T09:56:09
| 2025-04-15T12:57:39
| null |
https://github.com/huggingface/datasets/issues/7500
| null |
benglewis
| false
|
[
"Does the torch `DataLoader` really require the dataset to be a subclass of `torch.utils.data.Dataset` ? Or is there a simpler type we could use ?\n\nPS: also note that a dataset without `with_format()` can also be used in a torch `DataLoader` . Calling `with_format(\"torch\")` simply makes the output of the dataset torch Tensors in an efficient way."
] |
2,973,489,126
| 7,499
|
Added cache dirs to load and file_utils
|
closed
| 2025-04-04T22:36:04
| 2025-05-07T14:07:34
| 2025-05-07T14:07:34
|
https://github.com/huggingface/datasets/pull/7499
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/7499",
"html_url": "https://github.com/huggingface/datasets/pull/7499",
"diff_url": "https://github.com/huggingface/datasets/pull/7499.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/7499.patch",
"merged_at": null
}
|
gmongaras
| true
|
[
"hi ! the `hf_hub_download` cache_dir is a different cache directory than the one for `datasets`.\r\n\r\n`hf_hub_download` uses the `huggingface_hub` cache which is located in by default in `~/.cache/huggingface/hub`, while `datasets` uses a different cache for Arrow files and map() results `~/.cache/huggingface/datasets`",
"Is there a way to change the default cache directory for both of these on calling load_dataset? Currently, cache_dir makes dealing with where I want files to go a bit confusing as the documentation doesn't mention it only relocates.../datasets and not .../hub.",
"You can set `HF_HOME` which is the common parent directory for those two caches. Or individually `HF_DATASETS_CACHE` and `HF_HUB_CACHE`",
"Got it. Can this be added to the documentation for load_dataset and related functions to avoid confusion with cache_dir?",
"done in https://github.com/huggingface/datasets/pull/7532 :)"
] |
2,969,218,273
| 7,498
|
Extreme memory bandwidth.
|
open
| 2025-04-03T11:09:08
| 2025-04-03T11:11:22
| null |
https://github.com/huggingface/datasets/issues/7498
| null |
J0SZ
| false
|
[] |
2,968,553,693
| 7,497
|
How to convert videos to images?
|
open
| 2025-04-03T07:08:39
| 2025-04-15T12:35:15
| null |
https://github.com/huggingface/datasets/issues/7497
| null |
Loki-Lu
| false
|
[
"Hi ! there is some documentation here on how to read video frames: https://huggingface.co/docs/datasets/video_load"
] |
2,967,345,522
| 7,496
|
Json builder: Allow features to override problematic Arrow types
|
open
| 2025-04-02T19:27:16
| 2025-04-15T13:06:09
| null |
https://github.com/huggingface/datasets/issues/7496
| null |
edmcman
| false
|
[
"Hi ! It would be cool indeed, currently the JSON data are generally loaded here: \n\nhttps://github.com/huggingface/datasets/blob/90e5bf8a8599b625d6103ee5ac83b98269991141/src/datasets/packaged_modules/json/json.py#L137-L140\n\nMaybe we can pass a Arrow `schema` to avoid errors ?"
] |
2,967,034,060
| 7,495
|
Columns in the dataset obtained though load_dataset do not correspond to the one in the dataset viewer since 3.4.0
|
closed
| 2025-04-02T17:01:11
| 2025-07-02T23:24:57
| 2025-07-02T23:24:57
|
https://github.com/huggingface/datasets/issues/7495
| null |
bruno-hays
| false
|
[
"Hi, the dataset viewer shows all the possible columns and their types, but `load_dataset()` iterates through all the columns that you defined. It seems that you only have one column (‘audio’) defined in your dataset because when I ran `print(ds.column_names)`, the only name I got was “audio”. You need to clearly define all the other features of the dataset as columns to enable your original code to work. Furthermore, you can run this code to print out all the features of your dataset: \n```python\nfrom datasets import load_dataset_builder\nds_builder = load_dataset_builder(\"BrunoHays/Accueil_UBS\")\nprint(ds_builder.info.features)\n```\n",
"@phoebecd \nGood catch, even in datasets<3.4.0, the only feature is \"audio\".\nThis datasets follows the [audio folder](https://huggingface.co/docs/datasets/en/audio_dataset#audiofolder) structure with metadata.csv.\nMaybe I missed something or there is a bug when having and audio_folder with a metadata file\n\nWhat do you think @lhoestq ?",
"I opened a PR to fix the issue :) https://huggingface.co/datasets/BrunoHays/Accueil_UBS/discussions/2\n\nWe expect the metadata file to be in the <split>/ folder now to allow one CSV metadata file per split. But in the PR I just added a manual configuration instead of moving the file and updating all the relative paths it contains."
] |
2,965,347,685
| 7,494
|
Broken links in pdf loading documentation
|
closed
| 2025-04-02T06:45:22
| 2025-04-15T13:36:25
| 2025-04-15T13:36:04
|
https://github.com/huggingface/datasets/issues/7494
| null |
VyoJ
| false
|
[
"thanks for reporting ! I fixed the links, the docs will be updated in the next release"
] |
2,964,025,179
| 7,493
|
push_to_hub does not upload videos
|
open
| 2025-04-01T17:00:20
| 2025-08-01T18:24:24
| null |
https://github.com/huggingface/datasets/issues/7493
| null |
DominikVincent
| false
|
[
"Hi ! the `Video` type is still experimental, and in particular `push_to_hub` doesn't upload videos at the moment (only the paths).\n\nThere is an open question to either upload the videos inside the Parquet files, or rather have them as separate files (which is great to enable remote seeking/streaming)",
"im having the same issue (btw i mistook this to be xet error https://huggingface.co/spaces/xet-team/README/discussions/4 )\n\n@jsulz suggested me to use `upload_folder` but it exceeds hf limits (>10k files per folder and >100k files in total)\n\nfrom my reading of the docs, in my case i have to save as either parquet or webdataset and then use `upload_folder`\n\ni tried `ds.to_parquet(\"...\")` but the parquet file also doesnt contain video, as of `datasets` v4.0\n\nso i think the only workaround for my case is webdataset"
] |
2,959,088,568
| 7,492
|
Closes #7457
|
closed
| 2025-03-30T20:41:20
| 2025-04-13T22:05:07
| 2025-04-13T22:05:07
|
https://github.com/huggingface/datasets/pull/7492
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/7492",
"html_url": "https://github.com/huggingface/datasets/pull/7492",
"diff_url": "https://github.com/huggingface/datasets/pull/7492.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/7492.patch",
"merged_at": null
}
|
Harry-Yang0518
| true
|
[
"This PR fixes issue #7457"
] |
2,959,085,647
| 7,491
|
docs: update cache.mdx to include HF_DATASETS_CACHE documentation
|
closed
| 2025-03-30T20:35:03
| 2025-03-30T20:36:40
| 2025-03-30T20:36:40
|
https://github.com/huggingface/datasets/pull/7491
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/7491",
"html_url": "https://github.com/huggingface/datasets/pull/7491",
"diff_url": "https://github.com/huggingface/datasets/pull/7491.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/7491.patch",
"merged_at": null
}
|
Harry-Yang0518
| true
|
[
"Already included HF_DATASETS_CACHE"
] |
2,958,826,222
| 7,490
|
(refactor) remove redundant logic in _check_valid_index_key
|
open
| 2025-03-30T11:45:42
| 2025-03-30T11:50:22
| null |
https://github.com/huggingface/datasets/pull/7490
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/7490",
"html_url": "https://github.com/huggingface/datasets/pull/7490",
"diff_url": "https://github.com/huggingface/datasets/pull/7490.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/7490.patch",
"merged_at": null
}
|
suzyahyah
| true
|
[] |
2,958,204,763
| 7,489
|
fix: loading of datasets from Disk(#7373)
|
open
| 2025-03-29T16:22:58
| 2025-04-24T16:36:36
| null |
https://github.com/huggingface/datasets/pull/7489
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/7489",
"html_url": "https://github.com/huggingface/datasets/pull/7489",
"diff_url": "https://github.com/huggingface/datasets/pull/7489.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/7489.patch",
"merged_at": null
}
|
sam-hey
| true
|
[
"@nepfaff Could you confirm if this fixes the issue for you? I checked Memray, and everything looked good on my end.\r\n\r\nInstall: `pip install git+https://github.com/sam-hey/datasets.git@fix/concatenate_datasets`\r\n",
"Will aim to get to this soon. I don't have a rapid testing pipeline setup but need to wait for some AWS nodes to become free",
"I now set up a small experiment:\r\n\r\n```python\r\n# Log initial RAM usage\r\n process = psutil.Process(os.getpid())\r\n initial_ram = process.memory_info().rss / (1024 * 1024) # Convert to MB\r\n logging.info(f\"Initial RAM usage: {initial_ram:.2f} MB\")\r\n\r\n chunk_datasets = [\r\n Dataset.load_from_disk(dataset_path, keep_in_memory=False) for _ in range(N)\r\n ]\r\n combined_dataset = concatenate_datasets(chunk_datasets)\r\n\r\n # Log final RAM usage\r\n final_ram = process.memory_info().rss / (1024 * 1024) # Convert to MB\r\n ram_diff = final_ram - initial_ram\r\n logging.info(f\"Final RAM usage: {final_ram:.2f} MB\")\r\n logging.info(f\"RAM usage increase: {ram_diff:.2f} MB\")\r\n```\r\n\r\nThe RAM usage is linearly correlated with `N` on datasets master!\r\n\r\nFor my test dataset:\r\n- N=5 => RAM usage increase: 26302.91 MB\r\n- N=10 => RAM usage increase: 52315.18 MB\r\n- N=20 => RAM usage increase: 104510.65 MB\r\n- N=40 => RAM usage increase: 209166.30 MB\r\n\r\nUnfortunately, your patch doesn't seem to change this:\r\n```bash\r\npip install git+https://github.com/sam-hey/datasets.git@fix/concatenate_datasets\r\npip list | grep datasets\r\ndatasets 3.5.1.dev0\r\n```\r\nGives exactly the same RAM statistics.\r\n\r\n**Edit:** The results are a bit flawed as the memory increase all seems to come from `Dataset.load_from_disk(dataset_path, keep_in_memory=False)` here (which I don't think should happen either?) and not from `concatenate_datasets`. This seems different from my large-scale setup that runs out of memory during `concatenate_datasets` but I don't seem to be able to replicate this here...",
"Thanks a lot, @nepfaff, for taking a look at this! It seems that `concatenate_datasets()` is fixed with this PR. I can also confirm that loading a large number of files requires significant memory. However, as I understand it, this is expected/a bug since the memory consumption stems from `pa.memory_map()`, which returns a memory-mapped file.\r\n\r\nThis behavior might be related to this bug: https://github.com/apache/arrow/issues/34423 \r\n\r\n<img width=\"1728\" alt=\"Screenshot 2025-04-03 at 16 01 11\" src=\"https://github.com/user-attachments/assets/475691d8-3aba-4d7e-b8ef-5e7552c70b14\" />\r\n",
"Great ! have you tested that it also fixes the memory issue in your case @iamollas ?\r\n\r\nHappy to know that it works for you @sam-hey ! Looking forward to merging this",
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_7489). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update."
] |
2,956,559,358
| 7,488
|
Support underscore int read instruction
|
closed
| 2025-03-28T16:01:15
| 2025-03-28T16:20:44
| 2025-03-28T16:20:43
|
https://github.com/huggingface/datasets/pull/7488
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/7488",
"html_url": "https://github.com/huggingface/datasets/pull/7488",
"diff_url": "https://github.com/huggingface/datasets/pull/7488.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/7488.patch",
"merged_at": "2025-03-28T16:20:43"
}
|
lhoestq
| true
|
[
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_7488). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.",
"you rock, Quentin - thank you!"
] |
2,956,533,448
| 7,487
|
Write pdf in map
|
closed
| 2025-03-28T15:49:25
| 2025-03-28T17:09:53
| 2025-03-28T17:09:51
|
https://github.com/huggingface/datasets/pull/7487
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/7487",
"html_url": "https://github.com/huggingface/datasets/pull/7487",
"diff_url": "https://github.com/huggingface/datasets/pull/7487.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/7487.patch",
"merged_at": "2025-03-28T17:09:51"
}
|
lhoestq
| true
|
[
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_7487). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update."
] |
2,954,042,179
| 7,486
|
`shared_datadir` fixture is missing
|
closed
| 2025-03-27T18:17:12
| 2025-03-27T19:49:11
| 2025-03-27T19:49:10
|
https://github.com/huggingface/datasets/issues/7486
| null |
lahwaacz
| false
|
[
"OK I was missing the `pytest-datadir` package. Sorry for the noise!"
] |
2,953,696,519
| 7,485
|
set dev version
|
closed
| 2025-03-27T16:39:34
| 2025-03-27T16:41:59
| 2025-03-27T16:39:42
|
https://github.com/huggingface/datasets/pull/7485
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/7485",
"html_url": "https://github.com/huggingface/datasets/pull/7485",
"diff_url": "https://github.com/huggingface/datasets/pull/7485.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/7485.patch",
"merged_at": "2025-03-27T16:39:42"
}
|
lhoestq
| true
|
[
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_7485). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update."
] |
2,953,677,168
| 7,484
|
release: 3.5.0
|
closed
| 2025-03-27T16:33:27
| 2025-03-27T16:35:44
| 2025-03-27T16:34:22
|
https://github.com/huggingface/datasets/pull/7484
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/7484",
"html_url": "https://github.com/huggingface/datasets/pull/7484",
"diff_url": "https://github.com/huggingface/datasets/pull/7484.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/7484.patch",
"merged_at": "2025-03-27T16:34:22"
}
|
lhoestq
| true
|
[
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_7484). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update."
] |
2,951,856,468
| 7,483
|
Support skip_trying_type
|
closed
| 2025-03-27T07:07:20
| 2025-04-29T04:14:57
| 2025-04-09T09:53:10
|
https://github.com/huggingface/datasets/pull/7483
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/7483",
"html_url": "https://github.com/huggingface/datasets/pull/7483",
"diff_url": "https://github.com/huggingface/datasets/pull/7483.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/7483.patch",
"merged_at": "2025-04-09T09:53:10"
}
|
yoshitomo-matsubara
| true
|
[
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_7483). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.",
"Cool ! Can you run `make style` to fix code formatting ?\r\n\r\nI was also thinking of naming the argument `try_original_type` and have it `True` by default",
"@lhoestq \r\n\r\nThank you for the suggestion! I renamed the argument with `True` by default and ran `make style`\r\nDoes it look good?",
"Thanks @lhoestq !\r\n\r\nLet me know if there are anything that I can do for this PR. Otherwise, looking forward to seeing this update in the package soon!",
"CI failures are unrelated, merging :)",
"Great, thanks for your support!\r\nI can't wait for the next release :)"
] |
2,950,890,368
| 7,482
|
Implement capability to restore non-nullability in Features
|
closed
| 2025-03-26T22:16:09
| 2025-05-15T15:00:59
| 2025-05-15T15:00:59
|
https://github.com/huggingface/datasets/pull/7482
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/7482",
"html_url": "https://github.com/huggingface/datasets/pull/7482",
"diff_url": "https://github.com/huggingface/datasets/pull/7482.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/7482.patch",
"merged_at": null
}
|
BramVanroy
| true
|
[
"Interestingly, this does not close #7479. The Features are not correctly maintained when calling `from_dict` with the custom Features.",
"Unfortunately this PR does not fix the reported issue. After more digging:\r\n\r\n- when the dataset is created, nullability information is lost in Features;\r\n- even with this PR, it will get lost eventually because of internal copying/recreation of the Features object without accounting for the nullable fields;\r\n- even if that is also fixed, and Features.arrow_schema correctly holds the nullability info, [casting the arrow Table](https://github.com/huggingface/datasets/blob/5f8d2ad9a1b0bccfd962d998987228addfd5be9f/src/datasets/arrow_dataset.py#L677) with a less strict schema to a more strict one (with nullability) will fail (only on deeper structs, not on flat fields). \r\n\r\nInterestingly, passing custom Features does not immediately load the underlying data with the right arrow_schema. Instead, the workflow is like this:\r\n\r\n- load pyarrow table with any of the methods (from_dict, from_pandas, etc.), which will always AUTO INFER rather than use a provided schema\r\n- the loaded table with auto-schema will be used to initialize the `Dataset` class, and only during construction will [CAST](https://github.com/huggingface/datasets/blob/5f8d2ad9a1b0bccfd962d998987228addfd5be9f/src/datasets/arrow_dataset.py#L677) the table to the user-provided schema if needed, if it differs from the auto-inferred one.\r\n\r\nSo I figured, since many/all of the pyarrow [`Table.from_*`](https://arrow.apache.org/docs/python/generated/pyarrow.Table.html) methods have a `schema=` argument, we should already load the Table with the correct schema to begin with. As an example, I tried changing this line:\r\n\r\nhttps://github.com/huggingface/datasets/blob/5f8d2ad9a1b0bccfd962d998987228addfd5be9f/src/datasets/arrow_dataset.py#L940\r\n\r\nto include the arrow_schema, if provided:\r\n\r\n```python\r\npa_table = InMemoryTable.from_pydict(mapping=mapping, schema=features.arrow_schema if features is not None else None)\r\n```\r\n\r\nBut that leads to:\r\n\r\n```\r\nTraceback (most recent call last):\r\n File \"/home/ampere/vanroy/datasets/scratch.py\", line 33, in <module>\r\n ds = Dataset.from_dict(\r\n ^^^^^^^^^^^^^^^^^^\r\n File \"/home/local/vanroy/datasets/src/datasets/arrow_dataset.py\", line 957, in from_dict\r\n pa_table = InMemoryTable.from_pydict(mapping=mapping, schema=features.arrow_schema if features is not None else None)\r\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\r\n File \"/home/local/vanroy/datasets/src/datasets/table.py\", line 758, in from_pydict\r\n return cls(pa.Table.from_pydict(*args, **kwargs))\r\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\r\n File \"pyarrow/table.pxi\", line 1968, in pyarrow.lib._Tabular.from_pydict\r\n File \"pyarrow/table.pxi\", line 6354, in pyarrow.lib._from_pydict\r\n File \"pyarrow/array.pxi\", line 402, in pyarrow.lib.asarray\r\n File \"pyarrow/array.pxi\", line 252, in pyarrow.lib.array\r\n File \"pyarrow/array.pxi\", line 114, in pyarrow.lib._handle_arrow_array_protocol\r\n File \"/home/local/vanroy/datasets/src/datasets/arrow_writer.py\", line 201, in __arrow_array__\r\n raise ValueError(\"TypedSequence is supposed to be used with pa.array(typed_sequence, type=None)\")\r\nValueError: TypedSequence is supposed to be used with pa.array(typed_sequence, type=None)\r\n```\r\n\r\nand I am not too familiar with pyarrow to solve this.\r\n\r\nSo ultimately I'm a bit at a loss here. 
I *think*, if we'd want to do this right, the automatic casting in init should be removed in favor of handling the logic inside `Dataset.from_*`, by passing the schema explicitly to `pa.Table.from_*(..., schema=schema)`. But I lack the knowledge of pyarrow to go further than what I've written about above.\r\n",
"It's indeed a bit more work to support nullable since in addition to your comments, there are unclear behavior when it comes to concatenating nullable with non-nullable, and maybe how to handle non-nullable lists and nested data.\r\n\r\nBut yup I agree having the `Dataset.from_*` function pass the `schema` to the `pa.Table.from*` would be the way.\r\n\r\nJust one comment about this error: \r\n\r\n```\r\nValueError: TypedSequence is supposed to be used with pa.array(typed_sequence, type=None)\r\n```\r\n\r\nThis happens because `Dataset.from_dict` uses `OptimizedTypedSequence` by default, which should only be used if the user doesn't specify a schema"
] |
2,950,692,971
| 7,481
|
deal with python `10_000` legal number in slice syntax
|
closed
| 2025-03-26T20:10:54
| 2025-03-28T16:20:44
| 2025-03-28T16:20:44
|
https://github.com/huggingface/datasets/issues/7481
| null |
sfc-gh-sbekman
| false
|
[
"should be an easy fix, I opened a PR"
] |
2,950,315,214
| 7,480
|
HF_DATASETS_CACHE ignored?
|
open
| 2025-03-26T17:19:34
| 2025-04-28T10:16:16
| null |
https://github.com/huggingface/datasets/issues/7480
| null |
stephenroller
| false
|
[
"FWIW, it does eventually write to /tmp/roller/datasets when generating the final version.",
"Hey, I’d love to work on this issue but I am a beginner, can I work it with you?",
"Hi @lhoestq,\nI'd like to look into this issue but I'm still learning. Could you share any quick pointers on the HF_DATASETS_CACHE behavior here? Thanks!",
"Hi ! `HF_DATASETS_CACHE` is only for the cache files of the `datasets` library, not for the `huggingface_hub` cache for files downloaded from the Hugging Face Hub.\n\nYou should either specify `HF_HOME` (parent cache path for everything HF) or both `HF_DATASETS_CACHE` and `HF_HUB_CACHE`",
"\n\nThanks for clarifying, @lhoestq! To make sure I’ve got it right:\n\n1. **HF_DATASETS_CACHE** only controls where the **datasets** library writes its own cache files (e.g. processed shards, Arrow files, etc.).\n2. Anything downloaded via **huggingface_hub** (models, tokenizers, raw files) still goes into the Hub cache (by default `~/.cache/huggingface/hub`), unless you set **HF_HUB_CACHE** or the parent **HF_HOME**.\n\nSo if you want everything off NFS and onto local disk you have two options:\n\n- **Set both** \n ```bash\n export HF_DATASETS_CACHE=/tmp/roller/datasets \n export HF_HUB_CACHE=/tmp/roller/hub\n ```\n- **Or set** \n ```bash\n export HF_HOME=/tmp/roller\n ```\n which will apply to both subdirectories.\n\nIs that correct? And would it make sense to add a note to the docs clarifying the distinction (or even support S3 for the Hub cache in the future)? I’m happy to draft a small docs PR if that would help.",
"Hi, yes that's correct, thanks for the clarification ! A note in the docs would be welcome, thanks"
] |
2,950,235,396
| 7,479
|
Features.from_arrow_schema is destructive
|
open
| 2025-03-26T16:46:43
| 2025-03-26T16:46:58
| null |
https://github.com/huggingface/datasets/issues/7479
| null |
BramVanroy
| false
|
[] |
2,948,993,461
| 7,478
|
update fsspec 2025.3.0
|
closed
| 2025-03-26T09:53:05
| 2025-03-28T19:15:54
| 2025-03-28T15:51:55
|
https://github.com/huggingface/datasets/pull/7478
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/7478",
"html_url": "https://github.com/huggingface/datasets/pull/7478",
"diff_url": "https://github.com/huggingface/datasets/pull/7478.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/7478.patch",
"merged_at": "2025-03-28T15:51:54"
}
|
peteski22
| true
|
[
"Sorry for tagging you @lhoestq but since you merged the linked PR, I wondered if you might be able to help me get this triaged so it can be reviewed/rejected etc. 🙏🏼 ",
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_7478). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update."
] |
2,947,169,460
| 7,477
|
What is the canonical way to compress a Dataset?
|
open
| 2025-03-25T16:47:51
| 2025-04-03T09:13:11
| null |
https://github.com/huggingface/datasets/issues/7477
| null |
eric-czech
| false
|
[
"I saw this post by @lhoestq: https://discuss.huggingface.co/t/increased-arrow-table-size-by-factor-of-2/26561/4 suggesting that there is at least some internal code for writing sharded parquet datasets non-concurrently. This appears to be that code: https://github.com/huggingface/datasets/blob/94ccd1b4fada8a92cea96dc8df4e915041d695b6/src/datasets/arrow_dataset.py#L5380-L5397\n\nIs there any fundamental reason (e.g. race conditions) that this kind of operation couldn't exist as a utility or method on a `Dataset` with a `num_proc` argument? I am not seeing any other issues explicitly for that ask. \n",
"We simply haven't implemented a method to save as sharded parquet locally yet ^^'\n\nRight now the only sharded parquet export method is `push_to_hub()` which writes to HF. But we can have a local one as well. \n\nIn the meantime the easiest way to export as sharded parquet locally is to `.shard()` and `.to_parquet()` (see code from my comment [here](https://github.com/huggingface/datasets/issues/7047#issuecomment-2233163406))",
"> In the meantime the easiest way to export as sharded parquet locally is to .shard() and .to_parquet()\n\nMakes sense, BUT how can it be done concurrently? I could of course use multiprocessing myself or a dozen other libraries for parallelizing single-node/local operations like that.\n\nWhat I'm asking though is, what is the way to do this that is most canonical for `datasets` specifically? I.e. what is least likely to causing pickling or other issues because it is used frequently internally by `datasets` and already likely tests for a lot of library-native edge-cases?",
"Everything in `datasets` is picklable :) and even better: since the data are memory mapped from disk, pickling in one process and unpickling in another doesn't do any copy - it instantaneously reloads the memory map.\n\nSo feel free to use the library you prefer to parallelize your operations.\n\n(it's another story in distributed setups though, because in that case you either need to copy and send the data or setup a distributed filesystem)"
] |
2,946,997,924
| 7,476
|
Priotitize json
|
closed
| 2025-03-25T15:44:31
| 2025-03-25T15:47:00
| 2025-03-25T15:45:00
|
https://github.com/huggingface/datasets/pull/7476
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/7476",
"html_url": "https://github.com/huggingface/datasets/pull/7476",
"diff_url": "https://github.com/huggingface/datasets/pull/7476.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/7476.patch",
"merged_at": "2025-03-25T15:45:00"
}
|
lhoestq
| true
|
[
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_7476). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update."
] |
2,946,640,570
| 7,475
|
IterableDataset's state_dict shard_example_idx is always equal to the number of samples in a shard
|
closed
| 2025-03-25T13:58:07
| 2025-05-06T14:22:19
| 2025-05-06T14:05:07
|
https://github.com/huggingface/datasets/issues/7475
| null |
bruno-hays
| false
|
[
"Hey, I’d love to work on this issue but I am a beginner, can I work it with you?",
"Hello. I'm sorry but I don't have much time to get in the details for now.\nHave you managed to reproduce the issue with the code provided ?\nIf you want to work on it, you can self-assign and ask @lhoestq for directions",
"Hi Bruno, I am trying to reproduce it this later in this week and let you know what I found.",
"#self-assign",
"Good catch, I tried and if the dataset is bigger (e.g. `range(9999)`) it returns `\"shard_example_idx\": 1000` with is the `config.DEFAULT_MAX_BATCH_SIZE`\n\nhttps://github.com/huggingface/datasets/blob/94ccd1b4fada8a92cea96dc8df4e915041d695b6/src/datasets/arrow_dataset.py#L5313-L5317\n\nIt looks like the state_dict is incorrect in that case, it should account for this and use the `RebatchedArrowExamplesIterable` which buffers the batch of 1000 rows and counts the iteration within the batch in the state_dict",
"\nHello @lhoestq,\n\nI’ve been debugging the `IterableDataset.state_dict()` behavior and applied a patch to `ArrowExamplesIterable._iter_arrow()` in an attempt to fix the issue described in #7475—specifically, that `shard_example_idx` always equals the number of samples in the shard, even if only a few examples have been consumed.\n\n### What I Tried\n\nI updated `_iter_arrow` to slice off already-consumed rows and increment the state only by the number of actual examples yielded, like this:\n\n```python\nclass ArrowExamplesIterable(_BaseExamplesIterable):\n # ... __init__ and _init_state_dict as before ...\n\n def _iter_arrow(self):\n shard_idx_start = self._state_dict[\"shard_idx\"] if self._state_dict else 0\n\n for gen_kwargs in islice(\n _split_gen_kwargs(self.kwargs, max_num_jobs=self.num_shards),\n shard_idx_start, None\n ):\n shard_example_idx_start = self._state_dict[\"shard_example_idx\"] if self._state_dict else 0\n shard_example_idx = 0\n\n for key, pa_table in self.generate_tables_fn(**gen_kwargs):\n num_rows = len(pa_table)\n next_idx = shard_example_idx + num_rows\n\n if next_idx <= shard_example_idx_start:\n shard_example_idx = next_idx\n continue\n\n offset = max(0, shard_example_idx_start - shard_example_idx)\n sliced_table = pa_table.slice(offset)\n\n if self._state_dict:\n self._state_dict[\"shard_example_idx\"] += len(sliced_table)\n\n yield key, sliced_table\n shard_example_idx = next_idx\n\n if self._state_dict:\n self._state_dict[\"shard_idx\"] += 1\n self._state_dict[\"shard_example_idx\"] = 0\n```\n\nI verified that the updated code was being used, and I added debug prints to confirm the table slicing and counter updates.\n\n### The Issue Still Exists\n\nDespite the changes, the behavior remains the same. Running this minimal repro:\n\n```python\nds = Dataset.from_dict({\"a\": range(6)}).to_iterable_dataset(num_shards=1)\nfor idx, example in enumerate(ds):\n print(example)\n if idx == 2:\n print(\"checkpoint\")\n print(ds.state_dict())\n break\n```\n\nStill outputs:\n\n```bash\n{'a': 0}\n{'a': 1}\n{'a': 2}\ncheckpoint\n{'examples_iterable': {'shard_idx': 0, 'shard_example_idx': 6, 'type': 'ArrowExamplesIterable'}, 'epoch': 0}\n```\n\nEven though only 3 examples were consumed, `shard_example_idx` jumps to 6.\n\n### Questions\n\n- Could there be another place (e.g., in `__iter__`, `RebatchedArrowExamplesIterable`, or the `IterableDataset` wrapper) that's still using the old logic and overriding the state?\n- Is there a better location to intercept and count yielded examples?\n- Would you recommend tracking a new `true_example_idx` to avoid modifying existing behavior?\n\nLet me know your thoughts—happy to iterate further and submit a PR once we align on the right approach. Thanks again for your help and feedback!",
"I found a fix using RebatchedArrowExamplesIterable, let me know if it's all good for you now",
"Hi @lhoestq, thanks for the quick fix and for referencing RebatchedArrowExamplesIterable! 🙌\n\nI just tested your patch locally and can confirm that shard_example_idx is now tracking correctly when only a subset of examples is consumed. This resolves the issue I was seeing in #7475.\n\nReally appreciate the guidance earlier on where to look—it was a great learning opportunity. If there are other parts of the IterableDataset internals that could use cleanup or testing, I’d be happy to help."
] |
2,945,066,258
| 7,474
|
Remove conditions for Python < 3.9
|
closed
| 2025-03-25T03:08:04
| 2025-04-16T00:11:06
| 2025-04-15T16:07:55
|
https://github.com/huggingface/datasets/pull/7474
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/7474",
"html_url": "https://github.com/huggingface/datasets/pull/7474",
"diff_url": "https://github.com/huggingface/datasets/pull/7474.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/7474.patch",
"merged_at": "2025-04-15T16:07:54"
}
|
cyyever
| true
|
[
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_7474). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.",
"Thanks ! can you run `make style` to fix code formatting ? then we can merge",
"@lhoestq Done"
] |
2,939,034,643
| 7,473
|
Webdataset data format problem
|
closed
| 2025-03-21T17:23:52
| 2025-03-21T19:19:58
| 2025-03-21T19:19:58
|
https://github.com/huggingface/datasets/issues/7473
| null |
edmcman
| false
|
[
"I was able to work around it"
] |
2,937,607,272
| 7,472
|
Label casting during `map` process is canceled after the `map` process
|
closed
| 2025-03-21T07:56:22
| 2025-04-10T05:11:15
| 2025-04-10T05:11:14
|
https://github.com/huggingface/datasets/issues/7472
| null |
yoshitomo-matsubara
| false
|
[
"Hi ! By default `map()` tries to keep the types of each column of the dataset, so here it reuses the int type since all your float values can be converted to integers. But I agree it would be nice to store float values as float values and don't try to reuse the same type in this case.\n\nIn the meantime, you can either store the float values in a new column, or pass the output `features=` manually to `map()`",
"Hi @lhoestq \n\nThank you for the answer & suggestion!\n\nCan we add some flag to `map()` function like `reuses_original_type=True` and skip reusing the original type when it's False?\n\nLet me know if it sounds like a reasonable solution. I am happy to submit a PR for this.",
"In general we try to avoid adding new parameters when it's already possible to achieve the same results with existing parameters (here `features=`). But since it's not always convenient to know in advance the `features=` I'm open to contributions to adding this parameter yes",
"Thank you for sharing the context. Good to know that. \n\nI submitted a PR #7483. Could you review the PR?",
"Hi @lhoestq \n\nLet me know if there is something that I should add to [the PR](https://github.com/huggingface/datasets/pull/7483)!",
"Closing this issue as the PR #7483 was merged"
] |
2,937,530,069
| 7,471
|
Adding argument to `_get_data_files_patterns`
|
closed
| 2025-03-21T07:17:53
| 2025-03-27T12:30:52
| 2025-03-26T07:26:27
|
https://github.com/huggingface/datasets/issues/7471
| null |
SangbumChoi
| false
|
[
"Hi ! The pattern can be specified in advance in YAML in the README.md of the dataset :)\n\nFor example\n\n```\n---\nconfigs:\n- config_name: default\n data_files:\n - split: train\n path: \"train/*\"\n - split: test\n path: \"test/*\"\n---\n```\n\nSee the docs at https://huggingface.co/docs/hub/en/datasets-manual-configuration",
"@lhoestq How can we choose in this case ? https://huggingface.co/datasets/datasets-examples/doc-image-5\n",
"choose what ? sorry I didn't get it ^^'"
] |
2,937,236,323
| 7,470
|
Is it possible to shard a single-sharded IterableDataset?
|
closed
| 2025-03-21T04:33:37
| 2025-05-09T22:51:46
| 2025-03-26T06:49:28
|
https://github.com/huggingface/datasets/issues/7470
| null |
jonathanasdf
| false
|
[
"Hi ! Maybe you can look for an option in your dataset to partition your data based on a deterministic filter ? For example each worker could stream the data based on `row.id % num_shards` or something like that ?",
"So the recommendation is to start out with multiple shards initially and re-sharding after is not expected to work? :(\n\nWould something like the following work? Some DiskCachingIterableDataset, where worker 0 streams from the datasource, but also writes to disk, and all of the other workers read from what worker 0 wrote? Then that would produce a stream with a deterministic order and we can subsample.",
"To be honest it would be cool to support native multiprocessing in `IterableDataset.map` so you can parallelize any specific processing step without having to rely on a torch Dataloader. What do you think ?\n\nrelated: https://github.com/huggingface/datasets/issues/7193 https://github.com/huggingface/datasets/issues/3444 \noriginal issue: https://github.com/huggingface/datasets/issues/2642\n\nAlternatively the DiskCachingIterableDataset idea works, just note that to make it work with a torch Dataloader with num_workers>0 you'll need:\n1. to make your own `torch.utils.data.IterableDataset` and have rank=0 stream the data and share them with the other workers (either via disk as suggested or IPC)\n2. take into account that`datasets.IterableDataset` will yield 0 examples for ranks with id>0 if there is only one shard, but in your case it's ok since you'd only stream from rank=0",
"Ohh that would be pretty cool!\n\nThanks for the suggestions, as there's no actionable items for this repo I'm going to close this issue now.",
"Another usecase for this resharding:\n\nIf we have a bunch of jsonl files, and we load it as an IterableDataset with multiple dataloader workers, each file gets naively assigned to a worker.\n\nIf the files were not carefully produced to be equally sized, eg if the very last file is significantly shorter, containing just a few examples, and it gets assigned onto a dataloader worker by itself, then the examples in that file will be significantly oversampled.\n\nIt would be nice if datasets had an internal way to rebalance this without requiring offline reprocessing of the data files"
] |
2,936,606,080
| 7,469
|
Custom split name with the web interface
|
closed
| 2025-03-20T20:45:59
| 2025-03-21T07:20:37
| 2025-03-21T07:20:37
|
https://github.com/huggingface/datasets/issues/7469
| null |
vince62s
| false
|
[] |
2,934,094,103
| 7,468
|
function `load_dataset` can't solve folder path with regex characters like "[]"
|
open
| 2025-03-20T05:21:59
| 2025-03-25T10:18:12
| null |
https://github.com/huggingface/datasets/issues/7468
| null |
Hpeox
| false
|
[
"Hi ! Have you tried escaping the glob special characters `[` and `]` ?\n\nbtw note that`AbstractFileSystem.glob` doesn't support regex, instead it supports glob patterns as in the python library [glob](https://docs.python.org/3/library/glob.html)\n"
] |
2,930,067,107
| 7,467
|
load_dataset with streaming hangs on parquet datasets
|
open
| 2025-03-18T23:33:54
| 2025-03-25T10:28:04
| null |
https://github.com/huggingface/datasets/issues/7467
| null |
The0nix
| false
|
[
"Hi ! The issue comes from `pyarrow`, I reported it here: https://github.com/apache/arrow/issues/45214 (feel free to comment / thumb up).\n\nAlternatively we can try to find something else than `ParquetFileFragment.to_batches()` to iterate on Parquet data and keep the option the pass `filters=`..."
] |
2,928,661,327
| 7,466
|
Fix local pdf loading
|
closed
| 2025-03-18T14:09:06
| 2025-03-18T14:11:52
| 2025-03-18T14:09:21
|
https://github.com/huggingface/datasets/pull/7466
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/7466",
"html_url": "https://github.com/huggingface/datasets/pull/7466",
"diff_url": "https://github.com/huggingface/datasets/pull/7466.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/7466.patch",
"merged_at": "2025-03-18T14:09:21"
}
|
lhoestq
| true
|
[
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_7466). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update."
] |
2,926,478,838
| 7,464
|
Minor fix for metadata files in extension counter
|
closed
| 2025-03-17T21:57:11
| 2025-03-18T15:21:43
| 2025-03-18T15:21:41
|
https://github.com/huggingface/datasets/pull/7464
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/7464",
"html_url": "https://github.com/huggingface/datasets/pull/7464",
"diff_url": "https://github.com/huggingface/datasets/pull/7464.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/7464.patch",
"merged_at": "2025-03-18T15:21:41"
}
|
lhoestq
| true
|
[
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_7464). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update."
] |
2,925,924,452
| 7,463
|
Adds EXR format to store depth images in float32
|
open
| 2025-03-17T17:42:40
| 2025-04-02T12:33:39
| null |
https://github.com/huggingface/datasets/pull/7463
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/7463",
"html_url": "https://github.com/huggingface/datasets/pull/7463",
"diff_url": "https://github.com/huggingface/datasets/pull/7463.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/7463.patch",
"merged_at": null
}
|
ducha-aiki
| true
|
[
"Hi ! I'mn wondering if this shouldn't this be an `Image()` type and decoded as a `PIL.Image` ?\r\n\r\nThis would make it easier to integrate with the rest of the HF ecosystem, and you could still get a numpy array using `ds = ds.with_format(\"numpy\")` which sets all the images to be formatted as numpy arrays",
"@lhoestq do you mean to add the decoder, and exr extension to the image format? Yes, that probably would be better ",
"yes exactly"
] |
2,925,612,945
| 7,462
|
set dev version
|
closed
| 2025-03-17T16:00:53
| 2025-03-17T16:03:31
| 2025-03-17T16:01:08
|
https://github.com/huggingface/datasets/pull/7462
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/7462",
"html_url": "https://github.com/huggingface/datasets/pull/7462",
"diff_url": "https://github.com/huggingface/datasets/pull/7462.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/7462.patch",
"merged_at": "2025-03-17T16:01:08"
}
|
lhoestq
| true
|
[
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_7462). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update."
] |
2,925,608,123
| 7,461
|
List of images behave differently on IterableDataset and Dataset
|
closed
| 2025-03-17T15:59:23
| 2025-03-18T08:57:17
| 2025-03-18T08:57:16
|
https://github.com/huggingface/datasets/issues/7461
| null |
FredrikNoren
| false
|
[
"Hi ! Can you try with `datasets` ^3.4 released recently ? on my side it works with IterableDataset on the recent version :)\n\n```python\nIn [20]: def train_iterable_gen():\n ...: images = np.array(load_image(\"https://cdn.britannica.com/61/93061-050-99147DCE/Statue-of-Liberty-Island-New-York-Bay.jpg\").resize((128, 128)))\n ...: yield {\n ...: \"images\": np.expand_dims(images, axis=0),\n ...: \"messages\": [\n ...: {\n ...: \"role\": \"user\",\n ...: \"content\": [{\"type\": \"image\", \"url\": \"https://cdn.britannica.com/61/93061-050-99147DCE/Statue-of-Liberty-Island-New-York-Bay.jpg\" }]\n ...: },\n ...: {\n ...: \"role\": \"assistant\",\n ...: \"content\": [{\"type\": \"text\", \"text\": \"duck\" }]\n ...: }\n ...: ]\n ...: }\n ...: \n ...: train_ds = IterableDataset.from_generator(train_iterable_gen,\n ...: features=Features({\n ...: 'images': [datasets.Image(mode=None, decode=True, id=None)],\n ...: 'messages': [{'content': [{'text': datasets.Value(dtype='string', id=None), 'type': datasets.Value(dtype='string', id=None) }],\n ...: 'role': datasets.Value(dtype='string', id=None)}]\n ...: } )\n ...: )\n\n\nIn [21]: \n\nIn [21]: next(iter(train_ds))\n/Users/quentinlhoest/hf/datasets/src/datasets/features/image.py:338: UserWarning: Downcasting array dtype int64 to uint8 to be compatible with 'Pillow'\n warnings.warn(f\"Downcasting array dtype {dtype} to {dest_dtype} to be compatible with 'Pillow'\")\nOut[21]: \n{'images': [<PIL.PngImagePlugin.PngImageFile image mode=RGB size=128x128>],\n 'messages': [{'content': [{'text': None, 'type': 'image'}], 'role': 'user'},\n {'content': [{'type': 'text', 'text': 'duck'}], 'role': 'assistant'}]}\n```",
"Hm I tried it here and it works as expected, even on datasets 3.3.2. I guess maybe something in the SFTTrainer is doing additional processing on the dataset, I'll have a look there.\n\nThanks @lhoestq!"
] |
2,925,605,865
| 7,460
|
release: 3.4.1
|
closed
| 2025-03-17T15:58:31
| 2025-03-17T16:01:14
| 2025-03-17T15:59:19
|
https://github.com/huggingface/datasets/pull/7460
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/7460",
"html_url": "https://github.com/huggingface/datasets/pull/7460",
"diff_url": "https://github.com/huggingface/datasets/pull/7460.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/7460.patch",
"merged_at": "2025-03-17T15:59:19"
}
|
lhoestq
| true
|
[
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_7460). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update."
] |
2,925,491,766
| 7,459
|
Fix data_files filtering
|
closed
| 2025-03-17T15:20:21
| 2025-03-17T15:25:56
| 2025-03-17T15:25:54
|
https://github.com/huggingface/datasets/pull/7459
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/7459",
"html_url": "https://github.com/huggingface/datasets/pull/7459",
"diff_url": "https://github.com/huggingface/datasets/pull/7459.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/7459.patch",
"merged_at": "2025-03-17T15:25:53"
}
|
lhoestq
| true
|
[
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_7459). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update."
] |
2,925,403,528
| 7,458
|
Loading the `laion/filtered-wit` dataset in streaming mode fails on v3.4.0
|
closed
| 2025-03-17T14:54:02
| 2025-03-17T16:02:04
| 2025-03-17T15:25:55
|
https://github.com/huggingface/datasets/issues/7458
| null |
nikita-savelyevv
| false
|
[
"thanks for reporting, I released 3.4.1 with a fix"
] |
2,924,886,467
| 7,457
|
Document the HF_DATASETS_CACHE env variable
|
closed
| 2025-03-17T12:24:50
| 2025-05-06T15:54:39
| 2025-05-06T15:54:39
|
https://github.com/huggingface/datasets/issues/7457
| null |
LSerranoPEReN
| false
|
[
"Strongly agree to this, in addition, I am also suffering to change the cache location similar to other issues (since I changed the environmental variables).\nhttps://github.com/huggingface/datasets/issues/6886",
"`HF_DATASETS_CACHE` should be documented there indeed, feel free to open a PR :) ",
"Hey, I’d love to work on this issue! Could you assign it to me?",
"sure ! you can also comment #self-assign in an issue and a bot assigns you automatically :)"
] |
2,922,676,278
| 7,456
|
.add_faiss_index and .add_elasticsearch_index returns ImportError at Google Colab
|
open
| 2025-03-16T00:51:49
| 2025-03-17T15:57:19
| null |
https://github.com/huggingface/datasets/issues/7456
| null |
MapleBloom
| false
|
[
"I can fix this.\nIt's mainly because faiss-gpu requires python<=3.10 but the default python version in colab is 3.11. We just have to downgrade the CPython version down to 3.10 and it should work fine.\n",
"I think I just had no chance to meet with faiss-cpu.\nIt could be import problem? \n_has_faiss gets its value at the beginning of datasets/search.\nI tried to call object before import faiss, so _has_faiss took False. And never updated later. ",
"Yes you can't meet the requirements because faiss-cpu runs only on\r\npython3.10 and lower but the default version for colab is python3.11 which\r\nresults in pip not being able to find wheels for faiss-cpu with python3.11.\r\n\r\nOn Mon, 17 Mar, 2025, 3:56 pm MapleBloom, ***@***.***> wrote:\r\n\r\n> I think I just had no chance to meet with faiss-cpu.\r\n> It could be import problem?\r\n> _has_faiss gets its value at the beginning of datasets/search.\r\n> I tried to call object before import faiss, so _has_faiss took False. And\r\n> never updated later.\r\n>\r\n> —\r\n> Reply to this email directly, view it on GitHub\r\n> <https://github.com/huggingface/datasets/issues/7456#issuecomment-2728975672>,\r\n> or unsubscribe\r\n> <https://github.com/notifications/unsubscribe-auth/AVUSZMBVD7LEDDUGALOTVN32U2PMBAVCNFSM6AAAAABZDBA426VHI2DSMVQWIX3LMV43OSLTON2WKQ3PNVWWK3TUHMZDOMRYHE3TKNRXGI>\r\n> .\r\n> You are receiving this because you commented.Message ID:\r\n> ***@***.***>\r\n> [image: MapleBloom]*MapleBloom* left a comment (huggingface/datasets#7456)\r\n> <https://github.com/huggingface/datasets/issues/7456#issuecomment-2728975672>\r\n>\r\n> I think I just had no chance to meet with faiss-cpu.\r\n> It could be import problem?\r\n> _has_faiss gets its value at the beginning of datasets/search.\r\n> I tried to call object before import faiss, so _has_faiss took False. And\r\n> never updated later.\r\n>\r\n> —\r\n> Reply to this email directly, view it on GitHub\r\n> <https://github.com/huggingface/datasets/issues/7456#issuecomment-2728975672>,\r\n> or unsubscribe\r\n> <https://github.com/notifications/unsubscribe-auth/AVUSZMBVD7LEDDUGALOTVN32U2PMBAVCNFSM6AAAAABZDBA426VHI2DSMVQWIX3LMV43OSLTON2WKQ3PNVWWK3TUHMZDOMRYHE3TKNRXGI>\r\n> .\r\n> You are receiving this because you commented.Message ID:\r\n> ***@***.***>\r\n>\r\n",
"> you can't meet the requirements\n\nIt is not the case (or I didn't reach this point) because the same code in notebook\n```importlib.util.find_spec(\"faiss\")```\nfinds faiss. I've mention it.\nI think the problem is in the very moment when _has_faiss takes its value and never try again. \n(or it couldn't find the path that was easily found when started from my code)",
"When you run the first cell containing pip install faiss-cpu does it\r\ninstall it?\r\n\r\nOn Mon, 17 Mar, 2025, 8:01 pm MapleBloom, ***@***.***> wrote:\r\n\r\n> you can't meet the requirements\r\n>\r\n> It is not the case (or I didn't reach this point) because the same code in\r\n> notebook\r\n> importlib.util.find_spec(\"faiss\")\r\n> finds faiss. I've mention it.\r\n> I think the problem is in the very moment when _has_faiss takes its value\r\n> and never try again.\r\n> (or it couldn't find the path that was easily found when started from my\r\n> code)\r\n>\r\n> —\r\n> Reply to this email directly, view it on GitHub\r\n> <https://github.com/huggingface/datasets/issues/7456#issuecomment-2729737414>,\r\n> or unsubscribe\r\n> <https://github.com/notifications/unsubscribe-auth/AVUSZMCCE6BPZCOVAWXKIY32U3MFVAVCNFSM6AAAAABZDBA426VHI2DSMVQWIX3LMV43OSLTON2WKQ3PNVWWK3TUHMZDOMRZG4ZTONBRGQ>\r\n> .\r\n> You are receiving this because you commented.Message ID:\r\n> ***@***.***>\r\n> [image: MapleBloom]*MapleBloom* left a comment (huggingface/datasets#7456)\r\n> <https://github.com/huggingface/datasets/issues/7456#issuecomment-2729737414>\r\n>\r\n> you can't meet the requirements\r\n>\r\n> It is not the case (or I didn't reach this point) because the same code in\r\n> notebook\r\n> importlib.util.find_spec(\"faiss\")\r\n> finds faiss. I've mention it.\r\n> I think the problem is in the very moment when _has_faiss takes its value\r\n> and never try again.\r\n> (or it couldn't find the path that was easily found when started from my\r\n> code)\r\n>\r\n> —\r\n> Reply to this email directly, view it on GitHub\r\n> <https://github.com/huggingface/datasets/issues/7456#issuecomment-2729737414>,\r\n> or unsubscribe\r\n> <https://github.com/notifications/unsubscribe-auth/AVUSZMCCE6BPZCOVAWXKIY32U3MFVAVCNFSM6AAAAABZDBA426VHI2DSMVQWIX3LMV43OSLTON2WKQ3PNVWWK3TUHMZDOMRZG4ZTONBRGQ>\r\n> .\r\n> You are receiving this because you commented.Message ID:\r\n> ***@***.***>\r\n>\r\n",
"> When you run the first cell containing pip install faiss-cpu does it\n> install it?\n> […](#)\n\nYes. It was installed succesfully. \nMethods of datasets library that depends on _has_faiss constant didn't start to work."
] |
2,921,933,250
| 7,455
|
Problems with local dataset after upgrade from 3.3.2 to 3.4.0
|
open
| 2025-03-15T09:22:50
| 2025-03-17T16:20:43
| null |
https://github.com/huggingface/datasets/issues/7455
| null |
andjoer
| false
|
[
"Hi ! I just released 3.4.1 with a fix, let me know if it's working now !"
] |
2,920,760,793
| 7,454
|
set dev version
|
closed
| 2025-03-14T16:48:19
| 2025-03-14T16:50:31
| 2025-03-14T16:48:28
|
https://github.com/huggingface/datasets/pull/7454
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/7454",
"html_url": "https://github.com/huggingface/datasets/pull/7454",
"diff_url": "https://github.com/huggingface/datasets/pull/7454.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/7454.patch",
"merged_at": "2025-03-14T16:48:28"
}
|
lhoestq
| true
|
[
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_7454). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update."
] |
2,920,719,503
| 7,453
|
release: 3.4.0
|
closed
| 2025-03-14T16:30:45
| 2025-03-14T16:38:10
| 2025-03-14T16:38:08
|
https://github.com/huggingface/datasets/pull/7453
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/7453",
"html_url": "https://github.com/huggingface/datasets/pull/7453",
"diff_url": "https://github.com/huggingface/datasets/pull/7453.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/7453.patch",
"merged_at": "2025-03-14T16:38:08"
}
|
lhoestq
| true
|
[
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_7453). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update."
] |
2,920,354,783
| 7,452
|
minor docs changes
|
closed
| 2025-03-14T14:14:04
| 2025-03-14T14:16:38
| 2025-03-14T14:14:20
|
https://github.com/huggingface/datasets/pull/7452
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/7452",
"html_url": "https://github.com/huggingface/datasets/pull/7452",
"diff_url": "https://github.com/huggingface/datasets/pull/7452.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/7452.patch",
"merged_at": "2025-03-14T14:14:20"
}
|
lhoestq
| true
|
[
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_7452). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update."
] |
2,919,835,663
| 7,451
|
Fix resuming after `ds.set_epoch(new_epoch)`
|
closed
| 2025-03-14T10:31:25
| 2025-03-14T10:50:11
| 2025-03-14T10:50:09
|
https://github.com/huggingface/datasets/pull/7451
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/7451",
"html_url": "https://github.com/huggingface/datasets/pull/7451",
"diff_url": "https://github.com/huggingface/datasets/pull/7451.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/7451.patch",
"merged_at": "2025-03-14T10:50:09"
}
|
lhoestq
| true
|
[
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_7451). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update."
] |
2,916,681,414
| 7,450
|
Add IterableDataset.decode with multithreading
|
closed
| 2025-03-13T10:41:35
| 2025-03-14T10:35:37
| 2025-03-14T10:35:35
|
https://github.com/huggingface/datasets/pull/7450
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/7450",
"html_url": "https://github.com/huggingface/datasets/pull/7450",
"diff_url": "https://github.com/huggingface/datasets/pull/7450.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/7450.patch",
"merged_at": "2025-03-14T10:35:35"
}
|
lhoestq
| true
|
[
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_7450). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update."
] |
2,916,235,092
| 7,449
|
Cannot load data with different schemas from different parquet files
|
closed
| 2025-03-13T08:14:49
| 2025-03-17T07:27:48
| 2025-03-17T07:27:46
|
https://github.com/huggingface/datasets/issues/7449
| null |
li-plus
| false
|
[
"Hi ! `load_dataset` expects all the data_files to have the same schema.\n\nMaybe you can try enforcing certain `features` using:\n\n```python\nfeatures = Features({\"conversations\": {'content': Value('string'), 'role': Value('string',)}})\nds = load_dataset(..., features=features)\n```",
"Thanks! It works if I explicitly specify all nested fields of the data."
] |
2,916,025,762
| 7,448
|
`datasets.disable_caching` doesn't work
|
open
| 2025-03-13T06:40:12
| 2025-03-22T04:37:07
| null |
https://github.com/huggingface/datasets/issues/7448
| null |
UCC-team
| false
|
[
"cc",
"Yes I have the same issue. It's a confusingly named function. See [here](https://github.com/huggingface/datasets/blob/main/src/datasets/fingerprint.py#L115-L130)\n\n```\n...\nIf disabled, the library will no longer reload cached datasets files when applying transforms to the datasets.\n More precisely, if the caching is disabled:\n - cache files are always recreated\n - cache files are written to a temporary directory that is deleted when session closes\n - cache files are named using a random hash instead of the dataset fingerprint\n```\n\nAlso, unfortunately the member variable `ds.cache_files` is not populated either.\n\nI'll let you know if I find a solution."
] |
2,915,233,248
| 7,447
|
Epochs shortened after resuming mid-epoch with Iterable dataset+StatefulDataloader(persistent_workers=True)
|
closed
| 2025-03-12T21:41:05
| 2025-07-09T23:04:57
| 2025-03-14T10:50:10
|
https://github.com/huggingface/datasets/issues/7447
| null |
dhruvdcoder
| false
|
[
"Thanks for reporting ! Maybe we should store the epoch in the state_dict, and then when the dataset is iterated on again after setting a new epoch it should restart from scratch instead of resuming ? wdyt ?",
"But why does this only happen when `persistent_workers=True`? I would expect it to work correctly even without storing the epoch number in the state_dict of the iterable dataset. ",
"I think persistent_workers=False simply ignores the dataset state_dict when it starts a new epoch, that's why the issue doesn't appear in that case",
"I opened https://github.com/huggingface/datasets/pull/7451 to fix the issue, let me know if it works for you",
"I just released `datasets` 3.4 that includes the fix :)\n\nPS: in your script you probably want to set the epoch like this, otherwise it's still set to 0 after the first epoch:\n\n```diff\n if state_dict is None:\n- ds.set_epoch(epoch)\n epoch += 1\n+ ds.set_epoch(epoch)\n```",
"@lhoestq \nIf I understand correctly, the issue was:\nwhen training saves a checkpoint of dataloader in epoch 1, the resumed training only consumes partial data in epoch 2, 3, etc.\n\nHowever, with the fix we are facing the issue that:\nwhen training saves a checkpoint of dataloader in epoch 2, the resumed training starts from scratch instead of consuming remaining partial data in epoch 2.\n\nThis makes training inconsistent between resuming from a checkpoint vs. original training if continued without a checkpoint."
] |
2,913,050,552
| 7,446
|
pyarrow.lib.ArrowTypeError: Expected dict key of type str or bytes, got 'int'
|
closed
| 2025-03-12T07:48:37
| 2025-07-04T05:14:45
| 2025-07-04T05:14:45
|
https://github.com/huggingface/datasets/issues/7446
| null |
rangehow
| false
|
[
"I think the Counter object you used in 'labels' may be the issue, since the {2:1} inside is the dict and 2 is the int",
"> I think the Counter object you used in 'labels' may be the issue, since the {2:1} inside is the dict and 2 is the int我认为您在 'labels' 中使用的 Counter 对象可能是问题所在,因为里面的 {2:1} 是 dict,而 2 是 int\n\nYes, that's the point."
] |
2,911,507,923
| 7,445
|
Fix small bugs with async map
|
closed
| 2025-03-11T18:30:57
| 2025-03-13T10:38:03
| 2025-03-13T10:37:58
|
https://github.com/huggingface/datasets/pull/7445
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/7445",
"html_url": "https://github.com/huggingface/datasets/pull/7445",
"diff_url": "https://github.com/huggingface/datasets/pull/7445.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/7445.patch",
"merged_at": "2025-03-13T10:37:58"
}
|
lhoestq
| true
|
[
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_7445). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update."
] |
2,911,202,445
| 7,444
|
Excessive warnings when resuming an IterableDataset+buffered shuffle+DDP.
|
open
| 2025-03-11T16:34:39
| 2025-05-13T09:41:03
| null |
https://github.com/huggingface/datasets/issues/7444
| null |
dhruvdcoder
| false
|
[
"I had a similar issue when loading the saved iterable dataset state to fast-forward to the mid-train location before resuming. This happened when I shuffled a concatenated dataset. A `iterable_data_state_dict.json` file was saved during checkpointing in the Trainer with:\n```\ndef _save_rng_state(self, output_dir):\n super()._save_rng_state(output_dir)\n if self.args.should_save:\n with open(os.path.join(output_dir, f'iterable_data_state_dict.json'), 'w', encoding='utf-8') as fo:\n json.dump(self.train_dataset.state_dict(), fo, ensure_ascii=False)\n```\nThen when resuming training, I use `load_state_dict` to get the dataset state:\n```\nif training_args.resume_from_checkpoint:\n if isinstance(training_args.resume_from_checkpoint, bool):\n resume_from_checkpoint = get_last_checkpoint(training_args.output_dir)\n else:\n resume_from_checkpoint = training_args.resume_from_checkpoint\n last_ckpt_iterable_data_state_dict_file_path = os.path.join(resume_from_checkpoint, f'iterable_data_state_dict.json')\n if not training_args.ignore_data_skip:\n raise ValueError(f'Please set `ignore_data_skip`=True to skip tokenization.')\n with open(last_ckpt_iterable_data_state_dict_file_path, 'r', encoding='utf-8') as f:\n train_dataset_state_dict = json.load(f)\n train_dataset.load_state_dict(train_dataset_state_dict)\n print(f'Loaded train_dataset state from {last_ckpt_iterable_data_state_dict_file_path}')\n```\n\nThen code works fine before I shuffled a subset of the training data to:\n```\nmath_dataset = concatenate_datasets([A, B]).to_iterable_dataset()\nshuffled_math_dataset = math_dataset.shuffle(seed=42, buffer_size=1000000)\n```\n\nOther than the warning, a real problem is that the loss bumped after loading a ckpt:\n\n<img width=\"400\" alt=\"Image\" src=\"https://github.com/user-attachments/assets/c8944e81-9df9-4857-82de-6ab9ebc1b066\" />"
] |
2,908,585,656
| 7,443
|
index error when num_shards > len(dataset)
|
open
| 2025-03-10T22:40:59
| 2025-03-10T23:43:08
| null |
https://github.com/huggingface/datasets/issues/7443
| null |
eminorhan
| false
|
[
"Actually, looking at the code a bit more carefully, maybe an even better solution is to explicitly set `num_shards=len(self)` somewhere inside both `push_to_hub()` and `save_to_disk()` when these functions are invoked with `num_shards > len(dataset)`."
] |