# Dataset Viewer

Auto-converted to Parquet.

## Columns
| Column | Type | Range / values |
| --- | --- | --- |
| id | int64 | 599M to 3.29B |
| url | string | 58 to 61 chars |
| html_url | string | 46 to 51 chars |
| number | int64 | 1 to 7.72k |
| title | string | 1 to 290 chars |
| state | string | 2 classes |
| comments | int64 | 0 to 70 |
| created_at | timestamp[s] | 2020-04-14 10:18:02 to 2025-08-05 09:28:51 |
| updated_at | timestamp[s] | 2020-04-27 16:04:17 to 2025-08-05 11:39:56 |
| closed_at | timestamp[s] | 2020-04-14 12:01:40 to 2025-08-01 05:15:45 |
| user_login | string | 3 to 26 chars |
| labels | list | 0 to 4 items |
| body | string | 0 to 228k chars |
| is_pull_request | bool | 2 classes |
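
Because the viewer auto-converts the dataset to Parquet, the rows previewed below can be consumed directly with the `datasets` library. A minimal sketch of loading and filtering on the columns above; the repo id `your-user/datasets-github-issues` is a hypothetical stand-in for this dataset's actual id:

```python
from datasets import load_dataset

# Hypothetical repo id; substitute the dataset's actual id on the Hub.
ds = load_dataset("your-user/datasets-github-issues", split="train")

# One row per GitHub issue or pull request; keep open issues, drop PRs.
open_issues = ds.filter(lambda row: row["state"] == "open" and not row["is_pull_request"])
print(len(open_issues), open_issues[0]["title"])
```

For large dumps, passing `streaming=True` to `load_dataset` iterates the Parquet shards without materializing the full table locally.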
## Preview rows

- **#7724 Can not stepinto load_dataset.py?** (issue, open, 0 comments) · micklexqg · created 2025-08-05T09:28:51 · updated 2025-08-05T09:28:51 · id 3,292,315,241
  <https://github.com/huggingface/datasets/issues/7724>
  I set a breakpoint in "load_dataset.py" and try to debug my data load codes, but it does not stop at any breakpoints, so "load_dataset.py" can not be stepped into ? <!-- Failed to upload "截图 2025-08-05 17-25-18.png" -->
- **#7723 Don't remove `trust_remote_code` arg!!!** (issue, open, 0 comments) · autosquid · labels: enhancement · created 2025-08-04T15:42:07 · updated 2025-08-04T15:42:07 · id 3,289,943,261
  <https://github.com/huggingface/datasets/issues/7723>
  ### Feature request defaulting it to False is nice balance. we need manully setting it to True in certain scenarios! Add `trust_remote_code` arg back please! ### Motivation defaulting it to False is nice balance. we need manully setting it to True in certain scenarios! ### Your contribution defaulting it to Fals...
- **#7722 Out of memory even though using load_dataset(..., streaming=True)** (issue, open, 0 comments) · padmalcom · created 2025-08-04T14:41:55 · updated 2025-08-04T14:41:55 · id 3,289,741,064
  <https://github.com/huggingface/datasets/issues/7722>
  ### Describe the bug I am iterating over a large dataset that I load using streaming=True to avoid running out of memory. Unfortunately, I am observing that memory usage increases over time and I'm finally running in an oom. ### Steps to reproduce the bug ``` ds = load_dataset("openslr/librispeech_asr", split="tra...
- **#7721 Bad split error message when using percentages** (issue, open, 0 comments) · padmalcom · created 2025-08-04T13:20:25 · updated 2025-08-04T14:48:09 · id 3,289,426,104
  <https://github.com/huggingface/datasets/issues/7721>
  ### Describe the bug Hi, I'm trying to download a dataset. To not load the entire dataset in memory, I split it as described [here](https://huggingface.co/docs/datasets/v4.0.0/loading#slice-splits) in 10% steps. When doing so, the library returns this error: raise ValueError(f"Bad split: {split}. Available splits...
- **#7720 Datasets 4.0 map function causing column not found** (issue, open, 0 comments) · Darejkal · created 2025-08-03T12:52:34 · updated 2025-08-03T12:52:34 · id 3,287,150,513
  <https://github.com/huggingface/datasets/issues/7720>
  ### Describe the bug Column returned after mapping is not found in new instance of the dataset. ### Steps to reproduce the bug Code for reproduction. After running get_total_audio_length, it is errored out due to `data` not having `duration` ``` def compute_duration(x): return {"duration": len(x["audio"]["array"...
- **#7719 Specify dataset columns types in typehint** (issue, open, 0 comments) · Samoed · labels: enhancement · created 2025-08-02T13:22:31 · updated 2025-08-02T13:22:31 · id 3,285,928,491
  <https://github.com/huggingface/datasets/issues/7719>
  ### Feature request Make dataset optionaly generic to datasets usage with type annotations like it was done in `torch.Dataloader` https://github.com/pytorch/pytorch/blob/134179474539648ba7dee1317959529fbd0e7f89/torch/utils/data/dataloader.py#L131 ### Motivation In MTEB we're using a lot of datasets objects, but they...
- **#7718 add support for pyarrow string view in features** (PR, open, 0 comments) · onursatici · created 2025-08-01T14:58:39 · updated 2025-08-01T15:00:45 · id 3,284,221,177
  <https://github.com/huggingface/datasets/pull/7718>
- **#7717 Cached dataset is not used when explicitly passing the cache_dir parameter** (issue, open, 0 comments) · padmalcom · created 2025-08-01T07:12:41 · updated 2025-08-01T07:12:41 · id 3,282,855,127
  <https://github.com/huggingface/datasets/issues/7717>
  ### Describe the bug Hi, we are pre-downloading a dataset using snapshot_download(). When loading this exact dataset with load_dataset() the cached snapshot is not used. In both calls, I provide the cache_dir parameter. ### Steps to reproduce the bug ``` from datasets import load_dataset, concatenate_datasets from h...
- **#7716 typo** (PR, closed, 1 comment) · lhoestq · created 2025-07-31T17:14:45 · updated 2025-07-31T17:17:15 · closed 2025-07-31T17:14:51 · id 3,281,204,362
  <https://github.com/huggingface/datasets/pull/7716>
- **#7715 Docs: Use Image(mode="F") for PNG/JPEG depth maps** (PR, closed, 1 comment) · lhoestq · created 2025-07-31T17:09:49 · updated 2025-07-31T17:12:23 · closed 2025-07-31T17:10:10 · id 3,281,189,955
  <https://github.com/huggingface/datasets/pull/7715>
- **#7714 fix num_proc=1 ci test** (PR, closed, 1 comment) · lhoestq · created 2025-07-31T16:36:32 · updated 2025-07-31T16:39:03 · closed 2025-07-31T16:38:03 · id 3,281,090,499
  <https://github.com/huggingface/datasets/pull/7714>
- **#7713 Update cli.mdx to refer to the new "hf" CLI** (PR, closed, 1 comment) · evalstate · created 2025-07-31T15:06:11 · updated 2025-07-31T16:37:56 · closed 2025-07-31T16:37:55 · id 3,280,813,699
  <https://github.com/huggingface/datasets/pull/7713>
  Update to refer to `hf auth login`
- **#7712 Retry intermediate commits too** (PR, closed, 1 comment) · lhoestq · created 2025-07-31T14:33:33 · updated 2025-07-31T14:37:43 · closed 2025-07-31T14:36:43 · id 3,280,706,762
  <https://github.com/huggingface/datasets/pull/7712>
- **#7711 Update dataset_dict push_to_hub** (PR, closed, 1 comment) · lhoestq · created 2025-07-31T13:25:03 · updated 2025-07-31T14:18:55 · closed 2025-07-31T14:18:53 · id 3,280,471,353
  <https://github.com/huggingface/datasets/pull/7711>
  following https://github.com/huggingface/datasets/pull/7708
- **#7710 Concurrent IterableDataset push_to_hub** (PR, closed, 1 comment) · lhoestq · created 2025-07-31T10:11:31 · updated 2025-07-31T10:14:00 · closed 2025-07-31T10:12:52 · id 3,279,878,230
  <https://github.com/huggingface/datasets/pull/7710>
  Same as https://github.com/huggingface/datasets/pull/7708 but for `IterableDataset`
- **#7709 Release 4.0.0 breaks usage patterns of with_format** (issue, open, 1 comment) · wittenator · created 2025-07-30T11:34:53 · updated 2025-07-30T15:41:59 · id 3,276,677,990
  <https://github.com/huggingface/datasets/issues/7709>
  ### Describe the bug Previously it was possible to access a whole column that was e.g. in numpy format via `with_format` by indexing the column. Now this possibility seems to be gone with the new Column() class. As far as I see, this makes working on a whole column (in-memory) more complex, i.e. normalizing an in-memo...
- **#7708 Concurrent push_to_hub** (PR, closed, 1 comment) · lhoestq · created 2025-07-29T13:14:30 · updated 2025-07-31T10:00:50 · closed 2025-07-31T10:00:49 · id 3,273,614,584
  <https://github.com/huggingface/datasets/pull/7708>
  Retry the step that (download + update + upload) the README.md using `create_commit(..., parent_commit=...)` if there was a commit in the meantime. This should enable concurrent `push_to_hub()` since it won't overwrite the README.md metadata anymore. Note: we fixed an issue server side to make this work: <details...
- **#7707 load_dataset() in 4.0.0 failed when decoding audio** (issue, closed, 9 comments) · jiqing-feng · created 2025-07-29T03:25:03 · updated 2025-08-01T05:15:45 · closed 2025-08-01T05:15:45 · id 3,271,867,998
  <https://github.com/huggingface/datasets/issues/7707>
  ### Describe the bug Cannot decode audio data. ### Steps to reproduce the bug ```python from datasets import load_dataset dataset = load_dataset("hf-internal-testing/librispeech_asr_demo", "clean", split="validation") print(dataset[0]["audio"]["array"]) ``` 1st round run, got ``` File "/usr/local/lib/python3.1...
- **#7706 Reimplemented partial split download support (revival of #6832)** (PR, open, 1 comment) · ArjunJagdale · created 2025-07-28T19:40:40 · updated 2025-07-29T09:25:12 · id 3,271,129,240
  <https://github.com/huggingface/datasets/pull/7706>
  (revival of #6832) https://github.com/huggingface/datasets/pull/7648#issuecomment-3084050130 Close https://github.com/huggingface/datasets/issues/4101, and more --- ### PR under work!!!!
- **#7705 Can Not read installed dataset in dataset.load(.)** (issue, open, 3 comments) · HuangChiEn · created 2025-07-28T09:43:54 · updated 2025-08-05T01:24:32 · id 3,269,070,499
  <https://github.com/huggingface/datasets/issues/7705>
  Hi, folks, I'm newbie in huggingface dataset api. As title, i'm facing the issue that the dataset.load api can not connect to the installed dataset. code snippet : <img width="572" height="253" alt="Image" src="https://github.com/user-attachments/assets/10f48aaf-d6ca-4239-b1cf-145d74f125d1" /> data path : "/xxx/jose...
- **#7704 Fix map() example in datasets documentation: define tokenizer before use** (PR, open, 1 comment) · Sanjaykumar030 · created 2025-07-26T14:18:17 · updated 2025-08-01T13:48:35 · id 3,265,730,177
  <https://github.com/huggingface/datasets/pull/7704>
  ## Problem The current datasets.Dataset.map() example in the documentation demonstrates batched processing using a tokenizer object without defining or importing it. This causes a NameError when users copy and run the example as-is, breaking the expected seamless experience. ## Correction This PR fixes the issue b...
- **#7703 [Docs] map() example uses undefined `tokenizer` — causes NameError** (issue, open, 1 comment) · Sanjaykumar030 · created 2025-07-26T13:35:11 · updated 2025-07-27T09:44:35 · id 3,265,648,942
  <https://github.com/huggingface/datasets/issues/7703>
  ## Description The current documentation example for `datasets.Dataset.map()` demonstrates batched processing but uses a `tokenizer` object without defining or importing it. This causes an error every time it's copied. Here is the problematic line: ```python # process a batch of examples >>> ds = ds.map(lambda examp...
- **#7702 num_proc=0 behave like None, num_proc=1 uses one worker (not main process) and clarify num_proc documentation** (PR, closed, 3 comments) · tanuj-rai · created 2025-07-26T08:19:39 · updated 2025-07-31T14:52:33 · closed 2025-07-31T14:52:33 · id 3,265,328,549
  <https://github.com/huggingface/datasets/pull/7702>
  Fixes issue #7700 This PR makes num_proc=0 behave like None in Dataset.map(), disabling multiprocessing. It improves UX by aligning with DataLoader(num_workers=0) behavior. The num_proc docstring is also updated to clearly explain valid values and behavior. @SunMarc
- **#7701 Update fsspec max version to current release 2025.7.0** (PR, closed, 2 comments) · rootAvish · created 2025-07-26T06:47:59 · updated 2025-07-28T11:58:11 · closed 2025-07-28T11:58:11 · id 3,265,236,296
  <https://github.com/huggingface/datasets/pull/7701>
  Diffusers currently asks for a max fsspec version of `2025.3.0`. This change updates it to the current latest version. This change is mainly required to resolve conflicts with other packages in an environment. In my particular case, `aider-chat` which is a part of my environment installs `2025.5.1` which is incompatibl...
- **#7700 [doc] map.num_proc needs clarification** (issue, open, 0 comments) · sfc-gh-sbekman · created 2025-07-25T17:35:09 · updated 2025-07-25T17:39:36 · id 3,263,922,255
  <https://github.com/huggingface/datasets/issues/7700>
  https://huggingface.co/docs/datasets/v4.0.0/en/package_reference/main_classes#datasets.Dataset.map.num_proc ``` num_proc (int, optional, defaults to None) — Max number of processes when generating cache. Already cached shards are loaded sequentially. ``` for batch: ``` num_proc (int, optional, defaults to None): The n...
- **#7699 Broken link in documentation for "Create a video dataset"** (issue, open, 1 comment) · cleong110 · created 2025-07-24T19:46:28 · updated 2025-07-25T15:27:47 · id 3,261,053,171
  <https://github.com/huggingface/datasets/issues/7699>
  The link to "the [WebDataset documentation](https://webdataset.github.io/webdataset)." is broken. https://huggingface.co/docs/datasets/main/en/video_dataset#webdataset <img width="2048" height="264" alt="Image" src="https://github.com/user-attachments/assets/975dd10c-aad8-42fc-9fbc-de0e2747a326" />
- **#7698 NotImplementedError when using streaming=True in Google Colab environment** (issue, open, 2 comments) · Aniket17200 · created 2025-07-23T08:04:53 · updated 2025-07-23T15:06:23 · id 3,255,350,916
  <https://github.com/huggingface/datasets/issues/7698>
  ### Describe the bug When attempting to load a large dataset (like tiiuae/falcon-refinedweb or allenai/c4) using streaming=True in a standard Google Colab notebook, the process fails with a NotImplementedError: Loading a streaming dataset cached in a LocalFileSystem is not supported yet. This issue persists even after...
- **#7697 -** (issue, closed, 0 comments) · kakamond · created 2025-07-23T01:30:32 · updated 2025-07-25T15:21:39 · closed 2025-07-25T15:21:39 · id 3,254,526,399
  <https://github.com/huggingface/datasets/issues/7697>
  -
- **#7696 load_dataset() in 4.0.0 returns different audio samples compared to earlier versions breaking reproducibility** (issue, closed, 2 comments) · Manalelaidouni · created 2025-07-22T17:02:17 · updated 2025-07-30T14:22:21 · closed 2025-07-30T14:22:21 · id 3,253,433,350
  <https://github.com/huggingface/datasets/issues/7696>
  ### Describe the bug In datasets 4.0.0 release, `load_dataset()` returns different audio samples compared to earlier versions, this breaks integration tests that depend on consistent sample data across different environments (first and second envs specified below). ### Steps to reproduce the bug ```python from dat...
- **#7695 Support downloading specific splits in load_dataset** (PR, closed, 4 comments) · ArjunJagdale · created 2025-07-22T09:33:54 · updated 2025-07-28T17:33:30 · closed 2025-07-28T17:15:45 · id 3,251,904,843
  <https://github.com/huggingface/datasets/pull/7695>
  This PR builds on #6832 by @mariosasko. May close - #4101, #2538 Discussion - https://github.com/huggingface/datasets/pull/7648#issuecomment-3084050130 --- ### Note - This PR is under work and frequent changes will be pushed.
- **#7694 Dataset.to_json consumes excessive memory, appears to not be a streaming operation** (issue, open, 1 comment) · ycq0125 · created 2025-07-21T07:51:25 · updated 2025-07-25T14:42:21 · id 3,247,600,408
  <https://github.com/huggingface/datasets/issues/7694>
  ### Describe the bug When exporting a Dataset object to a JSON Lines file using the .to_json(lines=True) method, the process consumes a very large amount of memory. The memory usage is proportional to the size of the entire Dataset object being saved, rather than being a low, constant memory operation. This behavior ...
- **#7693 Dataset scripts are no longer supported, but found superb.py** (issue, open, 8 comments) · edwinzajac · created 2025-07-20T13:48:06 · updated 2025-07-30T15:01:03 · id 3,246,369,678
  <https://github.com/huggingface/datasets/issues/7693>
  ### Describe the bug Hello, I'm trying to follow the [Hugging Face Pipelines tutorial](https://huggingface.co/docs/transformers/main_classes/pipelines) but the tutorial seems to work only on old datasets versions. I then get the error : ``` -------------------------------------------------------------------------- ...
- **#7692 xopen: invalid start byte for streaming dataset with trust_remote_code=True** (issue, open, 1 comment) · sedol1339 · created 2025-07-20T11:08:20 · updated 2025-07-25T14:38:54 · id 3,246,268,635
  <https://github.com/huggingface/datasets/issues/7692>
  ### Describe the bug I am trying to load YODAS2 dataset with datasets==3.6.0 ``` from datasets import load_dataset next(iter(load_dataset('espnet/yodas2', name='ru000', split='train', streaming=True, trust_remote_code=True))) ``` And get `UnicodeDecodeError: 'utf-8' codec can't decode byte 0xa8 in position 1: invalid ...
- **#7691 Large WebDataset: pyarrow.lib.ArrowCapacityError on load() even with streaming** (issue, open, 5 comments) · cleong110 · created 2025-07-19T18:40:27 · updated 2025-07-25T08:51:10 · id 3,245,547,170
  <https://github.com/huggingface/datasets/issues/7691>
  ### Describe the bug I am creating a large WebDataset-format dataset for sign language processing research, and a number of the videos are over 2GB. The instant I hit one of the shards with one of those videos, I get a ArrowCapacityError, even with streaming. I made a config for the dataset that specifically inclu...
- **#7690 HDF5 support** (PR, open, 2 comments) · klamike · created 2025-07-18T21:09:41 · updated 2025-07-28T21:32:12 · id 3,244,380,691
  <https://github.com/huggingface/datasets/pull/7690>
  This PR adds support for tabular HDF5 file(s) by converting each row to an Arrow table. It supports columns with the usual dtypes including up to 5-dimensional arrays as well as support for complex/compound types by splitting them into several columns. All datasets within the HDF5 file should have rows on the first dim...
- **#7689 BadRequestError for loading dataset?** (issue, closed, 17 comments) · WPoelman · created 2025-07-18T09:30:04 · updated 2025-07-18T11:59:51 · closed 2025-07-18T11:52:29 · id 3,242,580,301
  <https://github.com/huggingface/datasets/issues/7689>
  ### Describe the bug Up until a couple days ago I was having no issues loading `Helsinki-NLP/europarl` and `Helsinki-NLP/un_pc`, but now suddenly I get the following error: ``` huggingface_hub.errors.BadRequestError: (Request ID: ...) Bad request: * Invalid input: expected array, received string * at paths * Invalid...
- **#7688 No module named "distributed"** (issue, open, 3 comments) · yingtongxiong · created 2025-07-17T09:32:35 · updated 2025-07-25T15:14:19 · id 3,238,851,443
  <https://github.com/huggingface/datasets/issues/7688>
  ### Describe the bug hello, when I run the command "from datasets.distributed import split_dataset_by_node", I always met the bug "No module named 'datasets.distributed" in different version like 4.0.0, 2.21.0 and so on. How can I solve this? ### Steps to reproduce the bug 1. pip install datasets 2. from datasets.di...
- **#7687 Datasets keeps rebuilding the dataset every time i call the python script** (issue, open, 1 comment) · CALEB789 · created 2025-07-17T09:03:38 · updated 2025-07-25T15:21:31 · id 3,238,760,301
  <https://github.com/huggingface/datasets/issues/7687>
  ### Describe the bug Every time it runs, somehow, samples increase. This can cause a 12mb dataset to have other built versions of 400 mbs+ <img width="363" height="481" alt="Image" src="https://github.com/user-attachments/assets/766ce958-bd2b-41bc-b950-86710259bfdc" /> ### Steps to reproduce the bug `from datasets...
- **#7686 load_dataset does not check .no_exist files in the hub cache** (issue, open, 0 comments) · jmaccarl · created 2025-07-16T20:04:00 · updated 2025-07-16T20:04:00 · id 3,237,201,090
  <https://github.com/huggingface/datasets/issues/7686>
  ### Describe the bug I'm not entirely sure if this should be submitted as a bug in the `datasets` library or the `huggingface_hub` library, given it could be fixed at different levels of the stack. The fundamental issue is that the `load_datasets` api doesn't use the `.no_exist` files in the hub cache unlike other wr...
- **#7685 Inconsistent range request behavior for parquet REST api** (issue, open, 3 comments) · universalmind303 · created 2025-07-16T18:39:44 · updated 2025-07-25T16:09:50 · id 3,236,979,340
  <https://github.com/huggingface/datasets/issues/7685>
  ### Describe the bug First off, I do apologize if this is not the correct repo for submitting this issue. Please direct me to another one if it's more appropriate elsewhere. The datasets rest api is inconsistently giving `416 Range Not Satisfiable` when using a range request to get portions of the parquet files. Mor...
- **#7684 fix audio cast storage from array + sampling_rate** (PR, closed, 1 comment) · lhoestq · created 2025-07-15T10:13:42 · updated 2025-07-15T10:24:08 · closed 2025-07-15T10:24:07 · id 3,231,680,474
  <https://github.com/huggingface/datasets/pull/7684>
  fix https://github.com/huggingface/datasets/issues/7682
- **#7683 Convert to string when needed + faster .zstd** (PR, closed, 1 comment) · lhoestq · created 2025-07-15T09:37:44 · updated 2025-07-15T10:13:58 · closed 2025-07-15T10:13:56 · id 3,231,553,161
  <https://github.com/huggingface/datasets/pull/7683>
  for https://huggingface.co/datasets/allenai/olmo-mix-1124
- **#7682 Fail to cast Audio feature for numpy arrays in datasets 4.0.0** (issue, closed, 2 comments) · luatil-cloud · created 2025-07-14T18:41:02 · updated 2025-07-15T12:10:39 · closed 2025-07-15T10:24:08 · id 3,229,687,253
  <https://github.com/huggingface/datasets/issues/7682>
  ### Describe the bug Casting features with Audio for numpy arrays - done here with `ds.map(gen_sine, features=features)` fails in version 4.0.0 but not in version 3.6.0 ### Steps to reproduce the bug The following `uv script` should be able to reproduce the bug in version 4.0.0 and pass in version 3.6.0 on a macOS ...
- **#7681 Probabilistic High Memory Usage and Freeze on Python 3.10** (issue, open, 0 comments) · ryan-minato · created 2025-07-14T01:57:16 · updated 2025-07-14T01:57:16 · id 3,227,112,736
  <https://github.com/huggingface/datasets/issues/7681>
  ### Describe the bug A probabilistic issue encountered when processing datasets containing PIL.Image columns using the huggingface/datasets library on Python 3.10. The process occasionally experiences a sudden and significant memory spike, reaching 100% utilization, leading to a complete freeze. During this freeze, th...
- **#7680 Question about iterable dataset and streaming** (issue, open, 8 comments) · Tavish9 · created 2025-07-12T04:48:30 · updated 2025-08-01T13:01:48 · id 3,224,824,151
  <https://github.com/huggingface/datasets/issues/7680>
  In the doc, I found the following example: https://github.com/huggingface/datasets/blob/611f5a592359ebac6f858f515c776aa7d99838b2/docs/source/stream.mdx?plain=1#L65-L78 I am confused, 1. If we have already loaded the dataset, why doing `to_iterable_dataset`? Does it go through the dataset faster than map-style datase...
- **#7679 metric glue breaks with 4.0.0** (issue, closed, 2 comments) · stas00 · created 2025-07-10T21:39:50 · updated 2025-07-11T17:42:01 · closed 2025-07-11T17:42:01 · id 3,220,787,371
  <https://github.com/huggingface/datasets/issues/7679>
  ### Describe the bug worked fine with 3.6.0, and with 4.0.0 `eval_metric = metric.compute()` in HF Accelerate breaks. The code that fails is: https://huggingface.co/spaces/evaluate-metric/glue/blob/v0.4.0/glue.py#L84 ``` def simple_accuracy(preds, labels): print(preds, labels) print(f"{preds==labels}") r...
- **#7678 To support decoding audio data, please install 'torchcodec'.** (issue, closed, 2 comments) · alpcansoydas · created 2025-07-10T09:43:13 · updated 2025-07-22T03:46:52 · closed 2025-07-11T05:05:42 · id 3,218,625,544
  <https://github.com/huggingface/datasets/issues/7678>
  In the latest version of datasets==4.0.0, i cannot print the audio data on the Colab notebook. But it works on the 3.6.0 version. !pip install -q -U datasets huggingface_hub fsspec from datasets import load_dataset downloaded_dataset = load_dataset("ymoslem/MediaSpeech", "tr", split="train") print(downloaded_datase...
- **#7677 Toxicity fails with datasets 4.0.0** (issue, closed, 2 comments) · serena-ruan · created 2025-07-10T06:15:22 · updated 2025-07-11T04:40:59 · closed 2025-07-11T04:40:59 · id 3,218,044,656
  <https://github.com/huggingface/datasets/issues/7677>
  ### Describe the bug With the latest 4.0.0 release, huggingface toxicity evaluation module fails with error: `ValueError: text input must be of type `str` (single example), `List[str]` (batch or single pretokenized example) or `List[List[str]]` (batch of pretokenized examples).` ### Steps to reproduce the bug Repro:...
- **#7676 Many things broken since the new 4.0.0 release** (issue, open, 13 comments) · mobicham · created 2025-07-09T18:59:50 · updated 2025-07-21T10:38:01 · id 3,216,857,559
  <https://github.com/huggingface/datasets/issues/7676>
  ### Describe the bug The new changes in 4.0.0 are breaking many datasets, including those from lm-evaluation-harness. I am trying to revert back to older versions, like 3.6.0 to make the eval work but I keep getting: ``` Python File /venv/main/lib/python3.12/site-packages/datasets/features/features.py:1474, in genera...
- **#7675 common_voice_11_0.py failure in dataset library** (issue, open, 5 comments) · egegurel · created 2025-07-09T17:47:59 · updated 2025-07-22T09:35:42 · id 3,216,699,094
  <https://github.com/huggingface/datasets/issues/7675>
  ### Describe the bug I tried to download dataset but have got this error: from datasets import load_dataset load_dataset("mozilla-foundation/common_voice_11_0", "en", split="test", streaming=True) --------------------------------------------------------------------------- RuntimeError Tr...
- **#7674 set dev version** (PR, closed, 1 comment) · lhoestq · created 2025-07-09T15:01:25 · updated 2025-07-09T15:04:01 · closed 2025-07-09T15:01:33 · id 3,216,251,069
  <https://github.com/huggingface/datasets/pull/7674>
- **#7673 Release: 4.0.0** (PR, closed, 1 comment) · lhoestq · created 2025-07-09T14:03:16 · updated 2025-07-09T14:36:19 · closed 2025-07-09T14:36:18 · id 3,216,075,633
  <https://github.com/huggingface/datasets/pull/7673>
- **#7672 Fix double sequence** (PR, closed, 1 comment) · lhoestq · created 2025-07-09T09:53:39 · updated 2025-07-09T09:56:29 · closed 2025-07-09T09:56:28 · id 3,215,287,164
  <https://github.com/huggingface/datasets/pull/7672>
  ```python >>> Features({"a": Sequence(Sequence({"c": Value("int64")}))}) {'a': List({'c': List(Value('int64'))})} ``` instead of `{'a': {'c': List(List(Value('int64')))}}`
- **#7671 Mapping function not working if the first example is returned as None** (issue, closed, 2 comments) · dnaihao · created 2025-07-08T17:07:47 · updated 2025-07-09T12:30:32 · closed 2025-07-09T12:30:32 · id 3,213,223,886
  <https://github.com/huggingface/datasets/issues/7671>
  ### Describe the bug https://github.com/huggingface/datasets/blob/8a19de052e3d79f79cea26821454bbcf0e9dcd68/src/datasets/arrow_dataset.py#L3652C29-L3652C37 Here we can see the writer is initialized on `i==0`. However, there can be cases where in the user mapping function, the first example is filtered out (length cons...
- **#7670 Fix audio bytes** (PR, closed, 1 comment) · lhoestq · created 2025-07-07T13:05:15 · updated 2025-07-07T13:07:47 · closed 2025-07-07T13:05:33 · id 3,208,962,372
  <https://github.com/huggingface/datasets/pull/7670>
- **#7669 How can I add my custom data to huggingface datasets** (issue, open, 1 comment) · xiagod · created 2025-07-04T19:19:54 · updated 2025-07-05T18:19:37 · id 3,203,541,091
  <https://github.com/huggingface/datasets/issues/7669>
  I want to add my custom dataset in huggingface dataset. Please guide me how to achieve that.
- **#7668 Broken EXIF crash the whole program** (issue, open, 1 comment) · Seas0 · created 2025-07-03T11:24:15 · updated 2025-07-03T12:27:16 · id 3,199,039,322
  <https://github.com/huggingface/datasets/issues/7668>
  ### Describe the bug When parsing this image in the ImageNet1K dataset, the `datasets` crashs whole training process just because unable to parse an invalid EXIF tag. ![Image](https://github.com/user-attachments/assets/3c840203-ac8c-41a0-9cf7-45f64488037d) ### Steps to reproduce the bug Use the `datasets.Image.decod...
- **#7667 Fix infer list of images** (PR, closed, 1 comment) · lhoestq · created 2025-07-02T15:07:58 · updated 2025-07-02T15:10:28 · closed 2025-07-02T15:08:03 · id 3,196,251,707
  <https://github.com/huggingface/datasets/pull/7667>
  cc @kashif
- **#7666 Backward compat list feature** (PR, closed, 1 comment) · lhoestq · created 2025-07-02T14:58:00 · updated 2025-07-02T15:00:37 · closed 2025-07-02T14:59:40 · id 3,196,220,722
  <https://github.com/huggingface/datasets/pull/7666>
  cc @kashif
- **#7665 Function load_dataset() misinterprets string field content as part of dataset schema when dealing with `.jsonl` files** (issue, closed, 1 comment) · zdzichukowalski · created 2025-07-01T17:14:53 · updated 2025-07-01T17:17:48 · closed 2025-07-01T17:17:48 · id 3,193,239,955
  <https://github.com/huggingface/datasets/issues/7665>
  ### Describe the bug When loading a `.jsonl` file using `load_dataset("json", data_files="data.jsonl", split="train")`, the function misinterprets the content of a string field as if it were part of the dataset schema. In my case there is a field `body:` with a string value ``` "### Describe the bug (...) ,action:...
- **#7664 Function load_dataset() misinterprets string field content as part of dataset schema when dealing with `.jsonl` files** (issue, open, 6 comments) · zdzichukowalski · created 2025-07-01T17:14:32 · updated 2025-07-09T13:14:11 · id 3,193,239,035
  <https://github.com/huggingface/datasets/issues/7664>
  ### Describe the bug When loading a `.jsonl` file using `load_dataset("json", data_files="data.jsonl", split="train")`, the function misinterprets the content of a string field as if it were part of the dataset schema. In my case there is a field `body:` with a string value ``` "### Describe the bug (...) ,action:...
- **#7663 Custom metadata filenames** (PR, closed, 1 comment) · lhoestq · created 2025-07-01T13:50:36 · updated 2025-07-01T13:58:41 · closed 2025-07-01T13:58:39 · id 3,192,582,371
  <https://github.com/huggingface/datasets/pull/7663>
  example: https://huggingface.co/datasets/lhoestq/overlapping-subsets-imagefolder/tree/main To make multiple subsets for an imagefolder (one metadata file per subset), e.g. ```yaml configs: - config_name: default metadata_filenames: - metadata.csv - config_name: other metadata_filenames: ...
- **#7662 Applying map after transform with multiprocessing will cause OOM** (issue, open, 5 comments) · JunjieLl · created 2025-07-01T05:45:57 · updated 2025-07-10T06:17:40 · id 3,190,805,531
  <https://github.com/huggingface/datasets/issues/7662>
  ### Describe the bug I have a 30TB dataset. When I perform add_column and cast_column operations on it and then execute a multiprocessing map, it results in an OOM (Out of Memory) error. However, if I skip the add_column and cast_column steps and directly run the map, there is no OOM. After debugging step by step, I f...
- **#7661 fix del tqdm lock error** (PR, open, 0 comments) · Hypothesis-Z · created 2025-07-01T02:04:02 · updated 2025-07-08T01:38:46 · id 3,190,408,237
  <https://github.com/huggingface/datasets/pull/7661>
  fixes https://github.com/huggingface/datasets/issues/7660
- **#7660 AttributeError: type object 'tqdm' has no attribute '_lock'** (issue, open, 2 comments) · Hypothesis-Z · created 2025-06-30T15:57:16 · updated 2025-07-03T15:14:27 · id 3,189,028,251
  <https://github.com/huggingface/datasets/issues/7660>
  ### Describe the bug `AttributeError: type object 'tqdm' has no attribute '_lock'` It occurs when I'm trying to load datasets in thread pool. Issue https://github.com/huggingface/datasets/issues/6066 and PR https://github.com/huggingface/datasets/pull/6067 https://github.com/huggingface/datasets/pull/6068 tried to f...
- **#7659 Update the beans dataset link in Preprocess** (PR, closed, 0 comments) · HJassar · created 2025-06-30T09:58:44 · updated 2025-07-07T08:38:19 · closed 2025-07-01T14:01:42 · id 3,187,882,217
  <https://github.com/huggingface/datasets/pull/7659>
  In the Preprocess tutorial, the link to "the beans dataset" is incorrect. Fixed.
- **#7658 Fix: Prevent loss of info.features and column_names in IterableDatasetDict.map when features is None** (PR, closed, 5 comments) · ArjunJagdale · created 2025-06-30T09:31:12 · updated 2025-07-01T16:26:30 · closed 2025-07-01T16:26:12 · id 3,187,800,504
  <https://github.com/huggingface/datasets/pull/7658>
  This PR fixes a bug where calling `IterableDatasetDict.map()` or `IterableDataset.map()` with the default `features=None` argument would overwrite the existing `info.features` attribute with `None`. This, in turn, caused the resulting dataset to lose its schema, breaking downstream usage of attributes like `column_name...
- **#7657 feat: add subset_name as alias for name in load_dataset** (PR, open, 0 comments) · ArjunJagdale · created 2025-06-29T10:39:00 · updated 2025-07-18T17:45:41 · id 3,186,036,016
  <https://github.com/huggingface/datasets/pull/7657>
  fixes #7637 This PR introduces subset_name as a user-facing alias for the name (previously `config_name`) argument in load_dataset. It aligns terminology with the Hugging Face Hub UI (which shows “Subset”), reducing confusion for new users. Supports `subset_name` in `load_dataset()` Adds `.subset_name` propert...
- **#7656 fix(iterable): ensure MappedExamplesIterable supports state_dict for resume** (PR, open, 0 comments) · ArjunJagdale · created 2025-06-29T07:50:13 · updated 2025-06-29T07:50:13 · id 3,185,865,686
  <https://github.com/huggingface/datasets/pull/7656>
  Fixes #7630 ### Problem When calling `.map()` on an `IterableDataset`, resuming from a checkpoint skips a large number of samples. This is because `MappedExamplesIterable` did not implement `state_dict()` or `load_state_dict()`, so checkpointing was not properly delegated to the underlying iterable. ### What Thi...
- **#7655 Added specific use cases in Improve Performace** (PR, open, 0 comments) · ArjunJagdale · created 2025-06-28T19:00:32 · updated 2025-06-28T19:00:32 · id 3,185,382,105
  <https://github.com/huggingface/datasets/pull/7655>
  Fixes #2494
- **#7654 fix(load): strip deprecated use_auth_token from config_kwargs** (PR, open, 0 comments) · ArjunJagdale · created 2025-06-28T09:20:21 · updated 2025-06-28T09:20:21 · id 3,184,770,992
  <https://github.com/huggingface/datasets/pull/7654>
  Fixes #7504 This PR resolves a compatibility error when loading datasets via `load_dataset()` using outdated arguments like `use_auth_token`. **What was happening:** Users passing `use_auth_token` in `load_dataset(..., use_auth_token=...)` encountered a `ValueError`: BuilderConfig ParquetConfig(...) doesn't have...
- **#7653 feat(load): fallback to `load_from_disk()` when loading a saved dataset directory** (PR, open, 0 comments) · ArjunJagdale · created 2025-06-28T08:47:36 · updated 2025-06-28T08:47:36 · id 3,184,746,093
  <https://github.com/huggingface/datasets/pull/7653>
  ### Related Issue Fixes #7503 Partially addresses #5044 by allowing `load_dataset()` to auto-detect and gracefully delegate to `load_from_disk()` for locally saved datasets. --- ### What does this PR do? This PR introduces a minimal fallback mechanism in `load_dataset()` that detects when the provided `p...
- **#7652 Add columns support to JSON loader for selective key filtering** (PR, open, 3 comments) · ArjunJagdale · created 2025-06-27T16:18:42 · updated 2025-07-14T10:41:53 · id 3,183,372,055
  <https://github.com/huggingface/datasets/pull/7652>
  Fixes #7594 This PR adds support for filtering specific columns when loading datasets from .json or .jsonl files — similar to how the columns=... argument works for Parquet. As suggested, support for the `columns=...` argument (previously available for Parquet) has now been extended to **JSON and JSONL** loading v...
- **#7651 fix: Extended metadata file names for folder_based_builder** (PR, open, 0 comments) · iPieter · created 2025-06-27T13:12:11 · updated 2025-06-30T08:19:37 · id 3,182,792,775
  <https://github.com/huggingface/datasets/pull/7651>
  Fixes #7650. The metadata files generated by the `DatasetDict.save_to_file` function are not included in the folder_based_builder's metadata list, causing issues when only 1 actual data file is present, as described in issue #7650. This PR adds these filenames to the builder, allowing correct loading.
- **#7650 `load_dataset` defaults to json file format for datasets with 1 shard** (issue, open, 0 comments) · iPieter · created 2025-06-27T12:54:25 · updated 2025-06-27T12:54:25 · id 3,182,745,315
  <https://github.com/huggingface/datasets/issues/7650>
  ### Describe the bug I currently have multiple datasets (train+validation) saved as 50MB shards. For one dataset the validation pair is small enough to fit into a single shard and this apparently causes problems when loading the dataset. I created the datasets using a DatasetDict, saved them as 50MB arrow files for st...
- **#7649 Enable parallel shard upload in push_to_hub() using num_proc** (PR, closed, 2 comments) · ArjunJagdale · created 2025-06-27T05:59:03 · updated 2025-07-07T18:13:53 · closed 2025-07-07T18:13:52 · id 3,181,481,444
  <https://github.com/huggingface/datasets/pull/7649>
  Fixes #7591 ### Add num_proc support to `push_to_hub()` for parallel shard upload This PR adds support for parallel upload of dataset shards via the `num_proc` argument in `Dataset.push_to_hub()`. 📌 While the `num_proc` parameter was already present in the `push_to_hub()` signature and correctly passed to `_p...
- **#7648 Fix misleading add_column() usage example in docstring** (PR, closed, 8 comments) · ArjunJagdale · created 2025-06-27T05:27:04 · updated 2025-07-28T19:42:34 · closed 2025-07-17T13:14:17 · id 3,181,409,736
  <https://github.com/huggingface/datasets/pull/7648>
  Fixes #7611 This PR fixes the usage example in the Dataset.add_column() docstring, which previously implied that add_column() modifies the dataset in-place. Why: The method returns a new dataset with the additional column, and users must assign the result to a variable to preserve the change. This should make...
- **#7647 loading mozilla-foundation--common_voice_11_0 fails** (issue, open, 2 comments) · pavel-esir · created 2025-06-26T12:23:48 · updated 2025-07-10T14:49:30 · id 3,178,952,517
  <https://github.com/huggingface/datasets/issues/7647>
  ### Describe the bug Hello everyone, i am trying to load `mozilla-foundation--common_voice_11_0` and it fails. Reproducer ``` import datasets datasets.load_dataset("mozilla-foundation/common_voice_11_0", "en", split="test", streaming=True, trust_remote_code=True) ``` and it fails with ``` File ~/opt/envs/.../lib/py...
- **#7646 Introduces automatic subset-level grouping for folder-based dataset builders #7066** (PR, open, 4 comments) · ArjunJagdale · created 2025-06-26T07:01:37 · updated 2025-07-14T10:42:56 · id 3,178,036,854
  <https://github.com/huggingface/datasets/pull/7646>
  Fixes #7066 This PR introduces automatic **subset-level grouping** for folder-based dataset builders by: 1. Adding a utility function `group_files_by_subset()` that clusters files by root name (ignoring digits and shard suffixes). 2. Integrating this logic into `FolderBasedBuilder._split_generators()` to yield one...
- **#7645 `ClassLabel` docs: Correct value for unknown labels** (PR, open, 0 comments) · l-uuz · created 2025-06-25T20:01:35 · updated 2025-06-25T20:01:35 · id 3,176,810,164
  <https://github.com/huggingface/datasets/pull/7645>
  This small change fixes the documentation to be compliant with what happens in `encode_example`. https://github.com/huggingface/datasets/blob/e71b0b19d79c7531f9b9bea7c09916b5f6157f42/src/datasets/features/features.py#L1126-L1129
- **#7644 fix sequence ci** (PR, closed, 1 comment) · lhoestq · created 2025-06-25T17:07:55 · updated 2025-06-25T17:10:30 · closed 2025-06-25T17:08:01 · id 3,176,363,492
  <https://github.com/huggingface/datasets/pull/7644>
  fix error from https://github.com/huggingface/datasets/pull/7643
- **#7643 Backward compat sequence instance** (PR, closed, 1 comment) · lhoestq · created 2025-06-25T17:05:09 · updated 2025-06-25T17:07:40 · closed 2025-06-25T17:05:44 · id 3,176,354,431
  <https://github.com/huggingface/datasets/pull/7643>
  useful to still get `isinstance(Sequence(Value("int64")), Sequence)` for downstream libs like evaluate
- **#7642 fix length for ci** (PR, closed, 0 comments) · lhoestq · created 2025-06-25T15:10:38 · updated 2025-06-25T15:11:53 · closed 2025-06-25T15:11:51 · id 3,176,025,890
  <https://github.com/huggingface/datasets/pull/7642>
- **#7641 update docs and docstrings** (PR, closed, 1 comment) · lhoestq · created 2025-06-25T14:48:58 · updated 2025-06-25T14:51:46 · closed 2025-06-25T14:49:33 · id 3,175,953,405
  <https://github.com/huggingface/datasets/pull/7641>
- **#7640 better features repr** (PR, closed, 1 comment) · lhoestq · created 2025-06-25T14:37:32 · updated 2025-06-25T14:46:47 · closed 2025-06-25T14:46:45 · id 3,175,914,924
  <https://github.com/huggingface/datasets/pull/7640>
  following the addition of List in #7634 before: ```python In [3]: ds.features Out[3]: {'json': {'id': Value(dtype='string', id=None), 'metadata:transcript': [{'end': Value(dtype='float64', id=None), 'start': Value(dtype='float64', id=None), 'transcript': Value(dtype='string', id=None), 'wor...
- **#7639 fix save_infos** (PR, closed, 1 comment) · lhoestq · created 2025-06-25T13:16:26 · updated 2025-06-25T13:19:33 · closed 2025-06-25T13:16:33 · id 3,175,616,169
  <https://github.com/huggingface/datasets/pull/7639>
- **#7638 Add ignore_decode_errors option to Image feature for robust decoding #7612** (PR, open, 4 comments) · ArjunJagdale · created 2025-06-24T16:47:51 · updated 2025-07-04T07:07:30 · id 3,172,645,391
  <https://github.com/huggingface/datasets/pull/7638>
  This PR implements support for robust image decoding in the `Image` feature, as discussed in issue #7612. ## 🔧 What was added - A new boolean field: `ignore_decode_errors` (default: `False`) - If set to `True`, any exceptions during decoding will be caught, and `None` will be returned instead of raising an error ...
- **#7637 Introduce subset_name as an alias of config_name** (issue, open, 4 comments) · albertvillanova · labels: enhancement · created 2025-06-24T12:49:01 · updated 2025-07-01T16:08:33 · id 3,171,883,522
  <https://github.com/huggingface/datasets/issues/7637>
  ### Feature request Add support for `subset_name` as an alias for `config_name` in the datasets library and related tools (such as loading scripts, documentation, and metadata). ### Motivation The Hugging Face Hub dataset viewer displays a column named **"Subset"**, which refers to what is currently technically call...
- **#7636 "open" in globals()["__builtins__"], an error occurs: "TypeError: argument of type 'module' is not iterable"** (issue, open, 4 comments) · kuanyan9527 · created 2025-06-24T08:09:39 · updated 2025-07-10T04:13:16 · id 3,170,878,167
  <https://github.com/huggingface/datasets/issues/7636>
  When I run the following code, an error occurs: "TypeError: argument of type 'module' is not iterable" ```python print("open" in globals()["__builtins__"]) ``` Traceback (most recent call last): File "./main.py", line 2, in <module> print("open" in globals()["__builtins__"]) ^^^^^^^^^^^^^^^^^^^^^^ TypeE...
- **#7635 Fix: Preserve float columns in JSON loader when values are integer-like (e.g. 0.0, 1.0)** (PR, open, 0 comments) · ArjunJagdale · created 2025-06-24T06:16:48 · updated 2025-06-24T06:16:48 · id 3,170,486,408
  <https://github.com/huggingface/datasets/pull/7635>
  This PR fixes a bug in the JSON loader where columns containing float values like `[0.0, 1.0, 2.0]` were being implicitly coerced to `int`, due to pandas or Arrow type inference. This caused issues downstream in statistics computation (e.g., dataset-viewer) where such columns were incorrectly labeled as `"int"` inst...
- **#7634 Replace Sequence by List** (PR, closed, 1 comment) · lhoestq · created 2025-06-23T20:35:48 · updated 2025-06-25T13:59:13 · closed 2025-06-25T13:59:11 · id 3,169,389,653
  <https://github.com/huggingface/datasets/pull/7634>
  Sequence is just a utility that we need to keep for backward compatibility. And `[ ]` was used instead but doesn't allow passing the length of the list. This PR removes most mentions of Sequence and usage of `[ ]` and defines a proper List type instead. before: `Sequence(Value("int64"))` or `[Value("int64")]` no...
- **#7633 Proposal: Small Tamil Discourse Coherence Dataset.** (issue, open, 0 comments) · bikkiNitSrinagar · created 2025-06-23T14:24:40 · updated 2025-06-23T14:24:40 · id 3,168,399,637
  <https://github.com/huggingface/datasets/issues/7633>
  I'm a beginner from NIT Srinagar proposing a dataset of 50 Tamil text pairs for discourse coherence (coherent/incoherent labels) to support NLP research in low-resource languages. - Size: 50 samples - Format: CSV with columns (text1, text2, label) - Use case: Training NLP models for coherence I'll use GitHub's web edit...
- **#7632 Graceful Error Handling for cast_column("image", Image(decode=True)) in Hugging Face Datasets** (issue, open, 2 comments) · ganiket19 · labels: enhancement · created 2025-06-23T13:49:24 · updated 2025-07-08T06:52:53 · id 3,168,283,589
  <https://github.com/huggingface/datasets/issues/7632>
  ### Feature request Currently, when using dataset.cast_column("image", Image(decode=True)), the pipeline throws an error and halts if any image in the dataset is invalid or corrupted (e.g., truncated files, incorrect formats, unreachable URLs). This behavior disrupts large-scale processing where a few faulty samples a...
- **#7631 Pass user-agent from DownloadConfig into fsspec storage_options** (PR, open, 1 comment) · ArjunJagdale · created 2025-06-21T14:22:25 · updated 2025-06-21T14:25:28 · id 3,165,127,657
  <https://github.com/huggingface/datasets/pull/7631>
  Fixes part of issue #6046 ### Problem The `user-agent` defined in `DownloadConfig` was not passed down to fsspec-based filesystems like `HfFileSystem`, which prevents proper identification/tracking of client requests. ### Solution Added support for injecting the `user-agent` into `storage_options["headers"]` wi...
- **#7630 [bug] resume from ckpt skips samples if .map is applied** (issue, open, 2 comments) · felipemello1 · created 2025-06-21T01:50:03 · updated 2025-06-29T07:51:32 · id 3,164,650,900
  <https://github.com/huggingface/datasets/issues/7630>
  ### Describe the bug resume from ckpt skips samples if .map is applied Maybe related: https://github.com/huggingface/datasets/issues/7538 ### Steps to reproduce the bug ```python from datasets import Dataset from datasets.distributed import split_dataset_by_node # Create dataset with map transformation def create...
- **#7629 Add test for `as_iterable_dataset()` method in DatasetBuilder** (PR, open, 0 comments) · ArjunJagdale · created 2025-06-19T19:23:55 · updated 2025-06-19T19:23:55 · id 3,161,169,782
  <https://github.com/huggingface/datasets/pull/7629>
  This PR adds a test for the new `as_iterable_dataset()` method introduced in PR #7628. The test: - Loads a builder using `load_dataset_builder("c4", "en")` - Runs `download_and_prepare()` - Streams examples using `builder.as_iterable_dataset(split="train[:100]")` - Verifies streamed examples contain the "text" f...
- **#7628 Add `as_iterable_dataset()` method to DatasetBuilder for streaming from cached Arrow files** (PR, open, 0 comments) · ArjunJagdale · created 2025-06-19T19:15:41 · updated 2025-06-19T19:15:41 · id 3,161,156,461
  <https://github.com/huggingface/datasets/pull/7628>
  This PR implements `builder.as_iterable_dataset(split=...)` as discussed in #5481. It allows users to load an `IterableDataset` directly from cached Arrow files (using ArrowReader and ArrowExamplesIterable), without loading the full dataset into memory. This is useful for large-scale training scenarios where memo...
- **#7627 Creating a HF Dataset from lakeFS with S3 storage takes too much time!** (issue, closed, 1 comment) · Thunderhead-exe · created 2025-06-19T14:28:41 · updated 2025-06-23T12:39:10 · closed 2025-06-23T12:39:10 · id 3,160,544,390
  <https://github.com/huggingface/datasets/issues/7627>
  Hi, I'm new to HF dataset and I tried to create datasets based on data versioned in **lakeFS** _(**MinIO** S3 bucket as storage backend)_ Here I'm using ±30000 PIL image from MNIST data however it is taking around 12min to execute, which is a lot! From what I understand, it is loading the images into cache then buil...
- **#7626 feat(map): reuse unchanged columns when input_columns specified to reduce disk usage (#6013)** (PR, closed, 0 comments) · ArjunJagdale · created 2025-06-19T07:41:45 · updated 2025-07-28T17:39:12 · closed 2025-07-28T17:39:12 · id 3,159,322,138
  <https://github.com/huggingface/datasets/pull/7626>
  ## Summary This PR addresses [#6013](https://github.com/huggingface/datasets/issues/6013) by reusing unchanged columns from the original dataset in the `map()` method when `input_columns` is specified. ## What’s Implemented - Injected logic at the end of `Dataset.map()` to: - Identify untouched columns not ...
- **#7625 feat: Add h5folder dataset loader for HDF5 support** (PR, open, 3 comments) · ArjunJagdale · created 2025-06-19T05:39:10 · updated 2025-06-26T05:44:26 · id 3,159,016,001
  <https://github.com/huggingface/datasets/pull/7625>
  ### Related Issue Closes #3113 ### What does this PR do? This PR introduces a new dataset loader module called **`h5folder`** to support loading datasets stored in **HDF5 (.h5)** format. It allows users to do: ```python from datasets import load_dataset dataset = load_dataset("h5folder", data_dir="path/t...
End of preview.