Schema (column, type, observed range):

id               int64                  599M – 3.29B
url              string (length 58–61)
html_url         string (length 46–51)
number           int64                  1 – 7.72k
title            string (length 1–290)
state            string (2 values)
comments         int64                  0 – 70
created_at       timestamp[s]           2020-04-14 10:18:02 – 2025-08-05 09:28:51
updated_at       timestamp[s]           2020-04-27 16:04:17 – 2025-08-05 11:39:56
closed_at        timestamp[s]           2020-04-14 12:01:40 – 2025-08-01 05:15:45
user_login       string (length 3–26)
labels           list (length 0–4)
body             string (length 0–228k)
is_pull_request  bool (2 classes)
2,715,907,267
https://api.github.com/repos/huggingface/datasets/issues/7305
https://github.com/huggingface/datasets/issues/7305
7,305
Build Documentation Test Fails Due to "Bad Credentials" Error
open
2
2024-12-03T20:22:54
2025-01-08T22:38:14
null
ruidazeng
[]
### Describe the bug The `Build documentation / build / build_main_documentation (push)` job is consistently failing during the "Syncing repository" step. The error occurs when attempting to determine the default branch name, resulting in "Bad credentials" errors. ### Steps to reproduce the bug 1. Trigger the `build...
false
2,715,179,811
https://api.github.com/repos/huggingface/datasets/issues/7304
https://github.com/huggingface/datasets/pull/7304
7,304
Update iterable_dataset.py
closed
1
2024-12-03T14:25:42
2024-12-03T14:28:10
2024-12-03T14:27:02
lhoestq
[]
close https://github.com/huggingface/datasets/issues/7297
true
2,705,729,696
https://api.github.com/repos/huggingface/datasets/issues/7303
https://github.com/huggingface/datasets/issues/7303
7,303
DataFilesNotFoundError for datasets LM1B
closed
1
2024-11-29T17:27:45
2024-12-11T13:22:47
2024-12-11T13:22:47
hml1996-fight
[]
### Describe the bug Cannot load the dataset https://huggingface.co/datasets/billion-word-benchmark/lm1b ### Steps to reproduce the bug `dataset = datasets.load_dataset('lm1b', split=split)` ### Expected behavior `Traceback (most recent call last): File "/home/hml/projects/DeepLearning/Generative_model/Diffusio...
false
2,702,626,386
https://api.github.com/repos/huggingface/datasets/issues/7302
https://github.com/huggingface/datasets/pull/7302
7,302
Let server decide default repo visibility
closed
2
2024-11-28T16:01:13
2024-11-29T17:00:40
2024-11-29T17:00:38
Wauplin
[]
Until now, all repos were public by default when created without passing the `private` argument. This meant that passing `private=False` or `private=None` was strictly the same. This is not the case anymore. Enterprise Hub offers organizations to set a default visibility setting for new repos. This is useful for organi...
true
2,701,813,922
https://api.github.com/repos/huggingface/datasets/issues/7301
https://github.com/huggingface/datasets/pull/7301
7,301
update load_dataset docstring
closed
1
2024-11-28T11:19:20
2024-11-29T10:31:43
2024-11-29T10:31:40
lhoestq
[]
- remove canonical dataset name - remove dataset script logic - add streaming info - clearer download and prepare steps
true
2,701,424,320
https://api.github.com/repos/huggingface/datasets/issues/7300
https://github.com/huggingface/datasets/pull/7300
7,300
fix: update elasticsearch version
closed
2
2024-11-28T09:14:21
2024-12-03T14:36:56
2024-12-03T14:24:42
ruidazeng
[]
This should fix the `test_py311 (windows latest, deps-latest` errors. ``` =========================== short test summary info =========================== ERROR tests/test_search.py - AttributeError: `np.float_` was removed in the NumPy 2.0 release. Use `np.float64` instead. ERROR tests/test_search.py - AttributeE...
true
2,695,378,251
https://api.github.com/repos/huggingface/datasets/issues/7299
https://github.com/huggingface/datasets/issues/7299
7,299
Efficient Image Augmentation in Hugging Face Datasets
open
0
2024-11-26T16:50:32
2024-11-26T16:53:53
null
fabiozappo
[]
### Describe the bug I'm using the Hugging Face datasets library to load images in batch and would like to apply a torchvision transform to solve the inconsistent image sizes in the dataset and apply some on the fly image augmentation. I can just think about using the collate_fn, but seems quite inefficient. ...
false
2,694,196,968
https://api.github.com/repos/huggingface/datasets/issues/7298
https://github.com/huggingface/datasets/issues/7298
7,298
loading dataset issue with load_dataset() when training controlnet
open
0
2024-11-26T10:50:18
2024-11-26T10:50:18
null
sarahahtee
[]
### Describe the bug i'm unable to load my dataset for [controlnet training](https://github.com/huggingface/diffusers/blob/074e12358bc17e7dbe111ea4f62f05dbae8a49d5/examples/controlnet/train_controlnet.py#L606) using load_dataset(). however, load_from_disk() seems to work? would appreciate if someone can explain why ...
false
2,683,977,430
https://api.github.com/repos/huggingface/datasets/issues/7297
https://github.com/huggingface/datasets/issues/7297
7,297
wrong return type for `IterableDataset.shard()`
closed
1
2024-11-22T17:25:46
2024-12-03T14:27:27
2024-12-03T14:27:03
ysngshn
[]
### Describe the bug `IterableDataset.shard()` has the wrong typing for its return as `"Dataset"`. It should be `"IterableDataset"`. Makes my IDE unhappy. ### Steps to reproduce the bug look at [the source code](https://github.com/huggingface/datasets/blob/main/src/datasets/iterable_dataset.py#L2668)? ### Expected ...
false
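The typing fix described in issue 7297 can be sketched in miniature. This is an illustrative stand-in, not the actual `datasets` source: the point is that annotating `shard()` as returning `"IterableDataset"` rather than `"Dataset"` lets IDEs and type checkers chain iterable-only methods on the result.

```python
from __future__ import annotations
import typing


class Dataset:
    """Illustrative stand-in for datasets.Dataset, not the real class."""


class IterableDataset(Dataset):
    def shard(self, num_shards: int, index: int) -> "IterableDataset":
        # Annotating the return as "IterableDataset" (not "Dataset") is the
        # substance of the fix: the result is typed as the subclass.
        return self


hints = typing.get_type_hints(IterableDataset.shard)
print(hints["return"].__name__)  # IterableDataset
```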
2,675,573,974
https://api.github.com/repos/huggingface/datasets/issues/7296
https://github.com/huggingface/datasets/pull/7296
7,296
Remove upper version limit of fsspec[http]
closed
0
2024-11-20T11:29:16
2025-03-06T04:47:04
2025-03-06T04:47:01
cyyever
[]
null
true
2,672,003,384
https://api.github.com/repos/huggingface/datasets/issues/7295
https://github.com/huggingface/datasets/issues/7295
7,295
[BUG]: Streaming from S3 triggers `unexpected keyword argument 'requote_redirect_url'`
open
0
2024-11-19T12:23:36
2024-11-19T13:01:53
null
casper-hansen
[]
### Describe the bug Note that this bug is only triggered when `streaming=True`. #5459 introduced always calling fsspec with `client_kwargs={"requote_redirect_url": False}`, which seems to have incompatibility issues even in the newest versions. Analysis of what's happening: 1. `datasets` passes the `client_kw...
false
2,668,663,130
https://api.github.com/repos/huggingface/datasets/issues/7294
https://github.com/huggingface/datasets/pull/7294
7,294
Remove `aiohttp` from direct dependencies
closed
0
2024-11-18T14:00:59
2025-05-07T14:27:18
2025-05-07T14:27:17
akx
[]
The dependency is only used for catching an exception from other code. That can be done with an import guard.
true
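The import-guard pattern that PR 7294 describes can be sketched as follows. This is a hypothetical illustration of the idea, not the actual patch: when a library is only needed to *catch* one of its exceptions, a placeholder class keeps the `except` clause valid even if the library is absent.

```python
# Only needed to *catch* an aiohttp exception, so guard the import and fall
# back to a placeholder class when aiohttp isn't installed.
try:
    from aiohttp import ClientError
except ImportError:
    class ClientError(Exception):
        """Never raised when aiohttp is absent; keeps `except` clauses valid."""


def call_with_retry_hint(fn):
    # `fn` is a hypothetical operation that may raise a ClientError.
    try:
        return fn()
    except ClientError:
        return "retryable network error"


print(call_with_retry_hint(lambda: "ok"))  # ok
```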
2,664,592,054
https://api.github.com/repos/huggingface/datasets/issues/7293
https://github.com/huggingface/datasets/pull/7293
7,293
Updated inconsistent output in documentation examples for `ClassLabel`
closed
3
2024-11-16T16:20:57
2024-12-06T11:33:33
2024-12-06T11:32:01
sergiopaniego
[]
fix #7129 @stevhliu
true
2,664,250,855
https://api.github.com/repos/huggingface/datasets/issues/7292
https://github.com/huggingface/datasets/issues/7292
7,292
DataFilesNotFoundError for datasets `OpenMol/PubChemSFT`
closed
3
2024-11-16T11:54:31
2024-11-19T00:53:00
2024-11-19T00:52:59
xnuohz
[]
### Describe the bug Cannot load the dataset https://huggingface.co/datasets/OpenMol/PubChemSFT ### Steps to reproduce the bug ``` from datasets import load_dataset dataset = load_dataset('OpenMol/PubChemSFT') ``` ### Expected behavior ``` -----------------------------------------------------------------------...
false
2,662,244,643
https://api.github.com/repos/huggingface/datasets/issues/7291
https://github.com/huggingface/datasets/issues/7291
7,291
Why return_tensors='pt' doesn't work?
open
2
2024-11-15T15:01:23
2024-11-18T13:47:08
null
bw-wang19
[]
### Describe the bug I tried to add input_ids to a dataset with map(), and I used return_tensors='pt', but why did I get the result back as a List? ![image](https://github.com/user-attachments/assets/ab046e20-2174-4e91-9cd6-4a296a43e83c) ### Steps to reproduce the bug ![image](https://github.com/user-attac...
false
2,657,620,816
https://api.github.com/repos/huggingface/datasets/issues/7290
https://github.com/huggingface/datasets/issues/7290
7,290
`Dataset.save_to_disk` hangs when using num_proc > 1
open
3
2024-11-14T05:25:13
2025-06-27T00:56:47
null
JohannesAck
[]
### Describe the bug Hi, I encountered a small issue when saving datasets that led to saving taking up to multiple hours. Specifically, [`Dataset.save_to_disk`](https://huggingface.co/docs/datasets/main/en/package_reference/main_classes#datasets.Dataset.save_to_disk) is a lot slower when using `num_proc>1` than...
false
2,648,019,507
https://api.github.com/repos/huggingface/datasets/issues/7289
https://github.com/huggingface/datasets/issues/7289
7,289
Dataset viewer displays wrong statistics
closed
1
2024-11-11T03:29:27
2024-11-13T13:02:25
2024-11-13T13:02:25
speedcell4
[]
### Describe the bug In [my dataset](https://huggingface.co/datasets/speedcell4/opus-unigram2), there is a column called `lang2`, and there are 94 different classes in total, but the viewer says there are 83 values only. This issue only arises in the `train` split. The total number of values is also 94 in the `test`...
false
2,647,052,280
https://api.github.com/repos/huggingface/datasets/issues/7288
https://github.com/huggingface/datasets/pull/7288
7,288
Release v3.1.1
closed
0
2024-11-10T09:38:15
2024-11-10T09:38:48
2024-11-10T09:38:48
alex-hh
[]
null
true
2,646,958,393
https://api.github.com/repos/huggingface/datasets/issues/7287
https://github.com/huggingface/datasets/issues/7287
7,287
Support for identifier-based automated split construction
open
3
2024-11-10T07:45:19
2024-11-19T14:37:02
null
alex-hh
[ "enhancement" ]
### Feature request As far as I understand, automated construction of splits for hub datasets is currently based on either file names or directory structure ([as described here](https://huggingface.co/docs/datasets/en/repository_structure)) It would seem to be pretty useful to also allow splits to be based on ide...
false
2,645,350,151
https://api.github.com/repos/huggingface/datasets/issues/7286
https://github.com/huggingface/datasets/issues/7286
7,286
Concurrent loading in `load_from_disk` - `num_proc` as a param
closed
0
2024-11-08T23:21:40
2024-11-09T16:14:37
2024-11-09T16:14:37
unography
[ "enhancement" ]
### Feature request https://github.com/huggingface/datasets/pull/6464 mentions a `num_proc` param while loading dataset from disk, but can't find that in the documentation and code anywhere ### Motivation Make loading large datasets from disk faster ### Your contribution Happy to contribute if given pointers
false
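The `num_proc` idea from issue 7286 can be sketched without the library itself. This is a hypothetical illustration (shard reads are simulated, `read_shard` is not a real `datasets` function): read saved shards concurrently, then concatenate in the original order.

```python
from concurrent.futures import ThreadPoolExecutor

def read_shard(shard_id):
    # Stand-in for reading one arrow shard from disk.
    return [f"row-{shard_id}-{i}" for i in range(3)]

def load_shards(shard_ids, num_proc=4):
    with ThreadPoolExecutor(max_workers=num_proc) as pool:
        # pool.map preserves input order even when reads finish out of order.
        parts = list(pool.map(read_shard, shard_ids))
    return [row for part in parts for row in part]

rows = load_shards(range(3))
print(rows[:2])  # ['row-0-0', 'row-0-1']
```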
2,644,488,598
https://api.github.com/repos/huggingface/datasets/issues/7285
https://github.com/huggingface/datasets/pull/7285
7,285
Release v3.1.0
closed
0
2024-11-08T16:17:58
2024-11-08T16:18:05
2024-11-08T16:18:05
alex-hh
[]
null
true
2,644,302,386
https://api.github.com/repos/huggingface/datasets/issues/7284
https://github.com/huggingface/datasets/pull/7284
7,284
support for custom feature encoding/decoding
closed
2
2024-11-08T15:04:08
2024-11-21T16:09:47
2024-11-21T16:09:47
alex-hh
[]
Fix for https://github.com/huggingface/datasets/issues/7220 as suggested in discussion, in preference to #7221 (only concern would be on effect on type checking with custom feature types that aren't covered by FeatureType?)
true
2,642,537,708
https://api.github.com/repos/huggingface/datasets/issues/7283
https://github.com/huggingface/datasets/pull/7283
7,283
Allow for variation in metadata file names as per issue #7123
open
0
2024-11-08T00:44:47
2024-11-08T00:44:47
null
egrace479
[]
Allow metadata files to have an identifying preface. Specifically, it will recognize files with `-metadata.csv` or `_metadata.csv` as metadata files for the purposes of the dataset viewer functionality. Resolves #7123.
true
2,642,075,491
https://api.github.com/repos/huggingface/datasets/issues/7282
https://github.com/huggingface/datasets/issues/7282
7,282
Faulty datasets.exceptions.ExpectedMoreSplitsError
open
0
2024-11-07T20:15:01
2024-11-07T20:15:42
null
meg-huggingface
[]
### Describe the bug Trying to download only the 'validation' split of my dataset; instead hit the error `datasets.exceptions.ExpectedMoreSplitsError`. Appears to be the same undesired behavior as reported in [#6939](https://github.com/huggingface/datasets/issues/6939), but with `data_files`, not `data_dir`. Her...
false
2,640,346,339
https://api.github.com/repos/huggingface/datasets/issues/7281
https://github.com/huggingface/datasets/issues/7281
7,281
File not found error
open
1
2024-11-07T09:04:49
2024-11-07T09:22:43
null
MichielBontenbal
[]
### Describe the bug I get a FileNotFoundError: <img width="944" alt="image" src="https://github.com/user-attachments/assets/1336bc08-06f6-4682-a3c0-071ff65efa87"> ### Steps to reproduce the bug See screenshot. ### Expected behavior I want to load one audiofile from the dataset. ### Environmen...
false
2,639,977,077
https://api.github.com/repos/huggingface/datasets/issues/7280
https://github.com/huggingface/datasets/issues/7280
7,280
Add filename in error message when ReadError or similar occur
open
5
2024-11-07T06:00:53
2024-11-20T13:23:12
null
elisa-aleman
[]
Please update error messages to include relevant information for debugging when loading datasets with `load_dataset()` that may have a few corrupted files. Whenever downloading a full dataset, some files might be corrupted (either at the source or from downloading corruption). However the errors often only let me k...
false
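The behavior issue 7280 requests can be sketched with a thin wrapper. This is an illustrative pattern, not the `datasets` internals: catch the low-level read error and re-raise with the offending filename attached, so a corrupted download is easy to locate.

```python
import tarfile

def safe_open_tar(path):
    # Re-raise ReadError with the filename included, so the user knows
    # which archive to re-download.
    try:
        return tarfile.open(path)
    except tarfile.ReadError as e:
        raise tarfile.ReadError(f"{e} (file: {path})") from e
```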
2,635,813,932
https://api.github.com/repos/huggingface/datasets/issues/7279
https://github.com/huggingface/datasets/pull/7279
7,279
Feature proposal: Stacking, potentially heterogeneous, datasets
open
0
2024-11-05T15:40:50
2024-11-05T15:40:50
null
TimCares
[]
### Introduction Hello there, I noticed that there are two ways to combine multiple datasets: Either through `datasets.concatenate_datasets` or `datasets.interleave_datasets`. However, to my knowledge (please correct me if I am wrong) both approaches require the datasets that are combined to have the same features....
true
2,633,436,151
https://api.github.com/repos/huggingface/datasets/issues/7278
https://github.com/huggingface/datasets/pull/7278
7,278
Let soundfile directly read local audio files
open
0
2024-11-04T17:41:13
2024-11-18T14:01:25
null
fawazahmed0
[]
- [x] Fixes #7276
true
2,632,459,184
https://api.github.com/repos/huggingface/datasets/issues/7277
https://github.com/huggingface/datasets/pull/7277
7,277
Add link to video dataset
closed
1
2024-11-04T10:45:12
2024-11-04T17:05:06
2024-11-04T17:05:06
NielsRogge
[]
This PR updates https://huggingface.co/docs/datasets/loading to also link to the new video loading docs. cc @mfarre
true
2,631,917,431
https://api.github.com/repos/huggingface/datasets/issues/7276
https://github.com/huggingface/datasets/issues/7276
7,276
Accessing audio dataset value throws Format not recognised error
open
3
2024-11-04T05:59:13
2024-11-09T18:51:52
null
fawazahmed0
[]
### Describe the bug Accessing audio dataset value throws `Format not recognised error` ### Steps to reproduce the bug **code:** ```py from datasets import load_dataset dataset = load_dataset("fawazahmed0/bug-audio") for data in dataset["train"]: print(data) ``` **output:** ```bash (mypy) ...
false
2,631,713,397
https://api.github.com/repos/huggingface/datasets/issues/7275
https://github.com/huggingface/datasets/issues/7275
7,275
load_dataset
open
0
2024-11-04T03:01:44
2024-11-04T03:01:44
null
santiagobp99
[]
### Describe the bug I am performing two operations I see in a Hugging Face tutorial (Fine-tune a language model), and I am defining every aspect inside the mapped functions, including some imports of the library, because it doesn't identify anything not defined outside that function where the dataset elements are being mapp...
false
2,629,882,821
https://api.github.com/repos/huggingface/datasets/issues/7274
https://github.com/huggingface/datasets/pull/7274
7,274
[MINOR:TYPO] Fix typo in exception text
closed
0
2024-11-01T21:15:29
2025-05-21T13:17:20
2025-05-21T13:17:20
cakiki
[]
null
true
2,628,896,492
https://api.github.com/repos/huggingface/datasets/issues/7273
https://github.com/huggingface/datasets/pull/7273
7,273
Raise error for incorrect JSON serialization
closed
2
2024-11-01T11:54:35
2024-11-18T11:25:01
2024-11-18T11:25:01
varadhbhatnagar
[]
Raise error when `lines = False` and `batch_size < Dataset.num_rows` in `Dataset.to_json()`. Issue: #7037 Related PRs: #7039 #7181
true
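The error condition PR 7273 guards against can be demonstrated directly: with `lines=False` each batch would be written as its own JSON array, and two concatenated arrays are not one valid JSON document, whereas JSON Lines batches safely.

```python
import json

rows = [{"id": i} for i in range(4)]

# Two batches dumped as separate arrays, concatenated: invalid JSON.
batched = json.dumps(rows[:2]) + json.dumps(rows[2:])
try:
    json.loads(batched)
    valid = True
except json.JSONDecodeError:
    valid = False
print(valid)  # False

# With lines=True (JSON Lines), batching is safe: one object per line.
jsonl = "\n".join(json.dumps(r) for r in rows)
assert [json.loads(line) for line in jsonl.splitlines()] == rows
```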
2,627,223,390
https://api.github.com/repos/huggingface/datasets/issues/7272
https://github.com/huggingface/datasets/pull/7272
7,272
fix conda release workflow
closed
1
2024-10-31T15:56:19
2024-10-31T15:58:35
2024-10-31T15:57:29
lhoestq
[]
null
true
2,627,135,540
https://api.github.com/repos/huggingface/datasets/issues/7271
https://github.com/huggingface/datasets/pull/7271
7,271
Set dev version
closed
1
2024-10-31T15:22:51
2024-10-31T15:25:27
2024-10-31T15:22:59
lhoestq
[]
null
true
2,627,107,016
https://api.github.com/repos/huggingface/datasets/issues/7270
https://github.com/huggingface/datasets/pull/7270
7,270
Release: 3.1.0
closed
1
2024-10-31T15:10:01
2024-10-31T15:14:23
2024-10-31T15:14:20
lhoestq
[]
null
true
2,626,873,843
https://api.github.com/repos/huggingface/datasets/issues/7269
https://github.com/huggingface/datasets/issues/7269
7,269
Memory leak when streaming
open
3
2024-10-31T13:33:52
2025-08-05T11:39:56
null
Jourdelune
[]
### Describe the bug I try to use a dataset with streaming=True. The issue I have is that the RAM usage becomes higher and higher until it is no longer sustainable. I understand that Hugging Face stores data in RAM during streaming, and the more workers the dataloader has, the more shards will be stored in ...
false
2,626,664,687
https://api.github.com/repos/huggingface/datasets/issues/7268
https://github.com/huggingface/datasets/issues/7268
7,268
load_from_disk
open
3
2024-10-31T11:51:56
2025-07-01T08:42:17
null
ghaith-mq
[]
### Describe the bug I have data saved with save_to_disk. The data is big (700Gb). When I try to load it, the only option is load_from_disk, and this function copies the data to a tmp directory, causing me to run out of disk space. Is there an alternative solution? ### Steps to reproduce the bug when trying ...
false
2,626,490,029
https://api.github.com/repos/huggingface/datasets/issues/7267
https://github.com/huggingface/datasets/issues/7267
7,267
Source installation fails on Macintosh with python 3.10
open
1
2024-10-31T10:18:45
2024-11-04T22:18:06
null
mayankagarwals
[]
### Describe the bug Hi, Decord is a dev dependency that has not been maintained for a couple of years. It does not have an ARM package available, rendering it uninstallable on non-Intel-based Macs. The suggestion is to move to eva-decord (https://github.com/georgia-tech-db/eva-decord), which doesn't have this problem. Happy to...
false
2,624,666,087
https://api.github.com/repos/huggingface/datasets/issues/7266
https://github.com/huggingface/datasets/issues/7266
7,266
The dataset viewer should be available soon. Please retry later.
closed
1
2024-10-30T16:32:00
2024-10-31T03:48:11
2024-10-31T03:48:10
viiika
[]
### Describe the bug After waiting for 2 hours, it still presents ``The dataset viewer should be available soon. Please retry later.'' ### Steps to reproduce the bug dataset link: https://huggingface.co/datasets/BryanW/HI_EDIT ### Expected behavior Present the dataset viewer. ### Environment info NA
false
2,624,090,418
https://api.github.com/repos/huggingface/datasets/issues/7265
https://github.com/huggingface/datasets/pull/7265
7,265
Disallow video push_to_hub
closed
1
2024-10-30T13:21:55
2024-10-30T13:36:05
2024-10-30T13:36:02
lhoestq
[]
null
true
2,624,047,640
https://api.github.com/repos/huggingface/datasets/issues/7264
https://github.com/huggingface/datasets/pull/7264
7,264
fix docs relative links
closed
1
2024-10-30T13:07:34
2024-10-30T13:10:13
2024-10-30T13:09:02
lhoestq
[]
null
true
2,621,844,054
https://api.github.com/repos/huggingface/datasets/issues/7263
https://github.com/huggingface/datasets/pull/7263
7,263
Small addition to video docs
closed
1
2024-10-29T16:58:37
2024-10-29T17:01:05
2024-10-29T16:59:10
lhoestq
[]
null
true
2,620,879,059
https://api.github.com/repos/huggingface/datasets/issues/7262
https://github.com/huggingface/datasets/pull/7262
7,262
Allow video with disabled decoding without decord
closed
1
2024-10-29T10:54:04
2024-10-29T10:56:19
2024-10-29T10:55:37
lhoestq
[]
for the viewer, this way it can use Video(decode=False) and doesn't need decord (which causes segfaults)
true
2,620,510,840
https://api.github.com/repos/huggingface/datasets/issues/7261
https://github.com/huggingface/datasets/issues/7261
7,261
Cannot load the cache when mapping the dataset
open
2
2024-10-29T08:29:40
2025-03-24T13:27:55
null
zhangn77
[]
### Describe the bug I'm training the flux controlnet. The train_dataset.map() takes long time to finish. However, when I killed one training process and want to restart a new training with the same dataset. I can't reuse the mapped result even I defined the cache dir for the dataset. with accelerator.main_process_...
false
2,620,014,285
https://api.github.com/repos/huggingface/datasets/issues/7260
https://github.com/huggingface/datasets/issues/7260
7,260
cache can't cleaned or disabled
open
1
2024-10-29T03:15:28
2024-12-11T09:04:52
null
charliedream1
[]
### Describe the bug I tried the following ways, but the cache can't be disabled. I have 2T of data, but I also got more than 2T of cache files, which puts pressure on storage. I need to disable the cache or clean it immediately after processing. The following ways all do not work, please give some help! ```python from datasets import ...
false
2,618,909,241
https://api.github.com/repos/huggingface/datasets/issues/7259
https://github.com/huggingface/datasets/pull/7259
7,259
Don't embed videos
closed
1
2024-10-28T16:25:10
2024-10-28T16:27:34
2024-10-28T16:26:01
lhoestq
[]
don't include video bytes when running download_and_prepare(format="parquet") this also affects push_to_hub which will just upload the local paths of the videos though
true
2,618,758,399
https://api.github.com/repos/huggingface/datasets/issues/7258
https://github.com/huggingface/datasets/pull/7258
7,258
Always set non-null writer batch size
closed
1
2024-10-28T15:26:14
2024-10-28T15:28:41
2024-10-28T15:26:29
lhoestq
[]
bug introduced in #7230, it was preventing the Viewer limit writes to work
true
2,618,602,173
https://api.github.com/repos/huggingface/datasets/issues/7257
https://github.com/huggingface/datasets/pull/7257
7,257
fix ci for pyarrow 18
closed
1
2024-10-28T14:31:34
2024-10-28T14:34:05
2024-10-28T14:31:44
lhoestq
[]
null
true
2,618,580,188
https://api.github.com/repos/huggingface/datasets/issues/7256
https://github.com/huggingface/datasets/pull/7256
7,256
Retry all requests timeouts
closed
1
2024-10-28T14:23:16
2024-10-28T14:56:28
2024-10-28T14:56:26
lhoestq
[]
as reported in https://github.com/huggingface/datasets/issues/6843
true
2,618,540,355
https://api.github.com/repos/huggingface/datasets/issues/7255
https://github.com/huggingface/datasets/pull/7255
7,255
fix decord import
closed
1
2024-10-28T14:08:19
2024-10-28T14:10:43
2024-10-28T14:09:14
lhoestq
[]
delay the import until Video() is instantiated + also import duckdb first (otherwise importing duckdb later causes a segfault)
true
2,616,174,996
https://api.github.com/repos/huggingface/datasets/issues/7254
https://github.com/huggingface/datasets/issues/7254
7,254
mismatch for datatypes when providing `Features` with `Array2D` and user specified `dtype` and using with_format("numpy")
open
1
2024-10-26T22:06:27
2024-10-26T22:07:37
null
Akhil-CM
[]
### Describe the bug If the user provides a `Features` type value to `datasets.Dataset` with members having `Array2D` with a value for `dtype`, it is not respected during `with_format("numpy")` which should return a `np.array` with `dtype` that the user provided for `Array2D`. It seems for floats, it will be set to `f...
false
2,615,862,202
https://api.github.com/repos/huggingface/datasets/issues/7253
https://github.com/huggingface/datasets/issues/7253
7,253
Unable to upload a large dataset zip either from command line or UI
open
0
2024-10-26T13:17:06
2024-10-26T13:17:06
null
vakyansh
[]
### Describe the bug Unable to upload a large dataset zip from command line or UI. UI simply says error. I am trying to upload a tar.gz file of 17GB. <img width="550" alt="image" src="https://github.com/user-attachments/assets/f9d29024-06c8-49c4-a109-0492cff79d34"> <img width="755" alt="image" src="https://githu...
false
2,613,795,544
https://api.github.com/repos/huggingface/datasets/issues/7252
https://github.com/huggingface/datasets/pull/7252
7,252
Add IterableDataset.shard()
closed
2
2024-10-25T11:07:12
2025-03-21T03:58:43
2024-10-25T15:45:22
lhoestq
[]
Will be useful to distribute a dataset across workers (other than pytorch) like spark I also renamed `.n_shards` -> `.num_shards` for consistency and kept the old name for backward compatibility. And a few changes in internal functions for consistency as well (rank, world_size -> num_shards, index) Breaking chang...
true
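The shard-distribution idea behind `IterableDataset.shard()` can be sketched as contiguous assignment of file shards to workers. This is a minimal sketch of one plausible strategy, not the library's exact implementation: each worker gets a contiguous run, with sizes differing by at most one.

```python
def shard_indices(num_shards: int, index: int, world_size: int):
    # Contiguous split: the first `mod` workers each take one extra shard.
    div, mod = divmod(num_shards, world_size)
    start = index * div + min(index, mod)
    end = start + div + (1 if index < mod else 0)
    return list(range(start, end))

parts = [shard_indices(10, i, 3) for i in range(3)]
print(parts)  # [[0, 1, 2, 3], [4, 5, 6], [7, 8, 9]]
```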
2,612,097,435
https://api.github.com/repos/huggingface/datasets/issues/7251
https://github.com/huggingface/datasets/pull/7251
7,251
Missing video docs
closed
1
2024-10-24T16:45:12
2024-10-24T16:48:29
2024-10-24T16:48:27
lhoestq
[]
null
true
2,612,041,969
https://api.github.com/repos/huggingface/datasets/issues/7250
https://github.com/huggingface/datasets/pull/7250
7,250
Basic XML support (mostly copy pasted from text)
closed
1
2024-10-24T16:14:50
2024-10-24T16:19:18
2024-10-24T16:19:16
lhoestq
[]
enable the viewer for datasets like https://huggingface.co/datasets/FrancophonIA/e-calm (there will be more and more apparently)
true
2,610,136,636
https://api.github.com/repos/huggingface/datasets/issues/7249
https://github.com/huggingface/datasets/issues/7249
7,249
How to debug
open
0
2024-10-24T01:03:51
2024-10-24T01:03:51
null
ShDdu
[]
### Describe the bug I wanted to use my own script to handle the processing, and followed the tutorial documentation by rewriting the MyDatasetConfig and MyDatasetBuilder (which contains the _info,_split_generators and _generate_examples methods) classes. Testing with simple data was able to output the results of the ...
false
2,609,926,089
https://api.github.com/repos/huggingface/datasets/issues/7248
https://github.com/huggingface/datasets/issues/7248
7,248
ModuleNotFoundError: No module named 'datasets.tasks'
open
2
2024-10-23T21:58:25
2024-10-24T17:00:19
null
shoowadoo
[]
### Describe the bug --------------------------------------------------------------------------- ModuleNotFoundError Traceback (most recent call last) [<ipython-input-9-13b5f31bd391>](https://bcb6shpazyu-496ff2e9c6d22116-0-colab.googleusercontent.com/outputframe.html?vrz=colab_20241022-060119_R...
false
2,606,230,029
https://api.github.com/repos/huggingface/datasets/issues/7247
https://github.com/huggingface/datasets/issues/7247
7,247
Adding a column with dict structure when mapping leads to wrong order
open
0
2024-10-22T18:55:11
2024-10-22T18:55:23
null
chchch0109
[]
### Describe the bug in `map()` function, I want to add a new column with a dict structure. ``` def map_fn(example): example['text'] = {'user': ..., 'assistant': ...} return example ``` However this leads to a wrong order `{'assistant':..., 'user':...}` in the dataset. Thus I can't concatenate two datasets ...
false
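A hedged workaround for the key-order problem in issue 7247: build the struct with an explicit, fixed key order inside the map function, so every dataset produced infers the same field order. `FIELD_ORDER` and the sample values here are illustrative only.

```python
FIELD_ORDER = ["user", "assistant"]

def map_fn(example):
    # Incoming keys may arrive in any order; re-emit them canonically so
    # the inferred struct type matches across datasets.
    text = {"assistant": "hi!", "user": "hello"}  # arbitrary incoming order
    example["text"] = {k: text[k] for k in FIELD_ORDER}
    return example

out = map_fn({})
print(list(out["text"].keys()))  # ['user', 'assistant']
```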
2,605,734,447
https://api.github.com/repos/huggingface/datasets/issues/7246
https://github.com/huggingface/datasets/pull/7246
7,246
Set dev version
closed
1
2024-10-22T15:04:47
2024-10-22T15:07:31
2024-10-22T15:04:58
lhoestq
[]
null
true
2,605,701,235
https://api.github.com/repos/huggingface/datasets/issues/7245
https://github.com/huggingface/datasets/pull/7245
7,245
Release: 3.0.2
closed
1
2024-10-22T14:53:34
2024-10-22T15:01:50
2024-10-22T15:01:47
lhoestq
[]
null
true
2,605,461,515
https://api.github.com/repos/huggingface/datasets/issues/7244
https://github.com/huggingface/datasets/pull/7244
7,244
use huggingface_hub offline mode
closed
1
2024-10-22T13:27:16
2024-10-22T14:10:45
2024-10-22T14:10:20
lhoestq
[]
and better handling of LocalEntryNotfoundError cc @Wauplin follow up to #7234
true
2,602,853,172
https://api.github.com/repos/huggingface/datasets/issues/7243
https://github.com/huggingface/datasets/issues/7243
7,243
ArrayXD with None as leading dim incompatible with DatasetCardData
open
5
2024-10-21T15:08:13
2024-10-22T14:18:10
null
alex-hh
[]
### Describe the bug Creating a dataset with ArrayXD features leads to errors when downloading from hub due to DatasetCardData removing the Nones @lhoestq ### Steps to reproduce the bug ```python import numpy as np from datasets import Array2D, Dataset, Features, load_dataset def examples_generator():...
false
2,599,899,156
https://api.github.com/repos/huggingface/datasets/issues/7241
https://github.com/huggingface/datasets/issues/7241
7,241
`push_to_hub` overwrite argument
closed
9
2024-10-20T03:23:26
2024-10-24T17:39:08
2024-10-24T17:39:08
ceferisbarov
[ "enhancement" ]
### Feature request Add an `overwrite` argument to the `push_to_hub` method. ### Motivation I want to overwrite a repo without deleting it on Hugging Face. Is this possible? I couldn't find anything in the documentation or tutorials. ### Your contribution I can create a PR.
false
2,598,980,027
https://api.github.com/repos/huggingface/datasets/issues/7240
https://github.com/huggingface/datasets/pull/7240
7,240
Feature Request: Add functionality to pass split types like train, test in DatasetDict.map
closed
0
2024-10-19T09:59:12
2025-01-06T08:04:08
2025-01-06T08:04:08
jp1924
[]
Hello datasets! We often encounter situations where we need to preprocess data differently depending on split types such as train, valid, and test. However, while DatasetDict.map has features to pass rank or index, there's no functionality to pass split types. Therefore, I propose adding a 'with_splits' parame...
true
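The proposal in PR 7240 can be sketched with plain dicts standing in for `Dataset` objects. This is a hypothetical illustration of the requested behavior (passing the split name into the mapped function), not the actual API.

```python
def map_with_splits(dataset_dict, fn):
    # Apply fn to every example, passing along which split it belongs to.
    return {split: [fn(ex, split) for ex in rows]
            for split, rows in dataset_dict.items()}

def preprocess(example, split):
    # e.g. mark augmentation only for the training split
    example = dict(example)
    example["augmented"] = split == "train"
    return example

dd = {"train": [{"x": 1}], "test": [{"x": 2}]}
out = map_with_splits(dd, preprocess)
print(out["train"][0]["augmented"], out["test"][0]["augmented"])  # True False
```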
2,598,409,993
https://api.github.com/repos/huggingface/datasets/issues/7238
https://github.com/huggingface/datasets/issues/7238
7,238
incompatibility issue when using load_dataset with datasets==3.0.1
open
2
2024-10-18T21:25:23
2024-12-09T09:49:32
null
jupiterMJM
[]
### Describe the bug There is a bug when using load_dataset with dataset version at 3.0.1 . Please see below in the "steps to reproduce the bug". To resolve the bug, I had to downgrade to version 2.21.0 OS: Ubuntu 24 (AWS instance) Python: same bug under 3.12 and 3.10 The error I had was: Traceback (most rec...
false
2,597,358,525
https://api.github.com/repos/huggingface/datasets/issues/7236
https://github.com/huggingface/datasets/pull/7236
7,236
[MINOR:TYPO] Update arrow_dataset.py
closed
0
2024-10-18T12:10:03
2024-10-24T15:06:43
2024-10-24T15:06:43
cakiki
[]
Fix wrong link. csv kwargs docstring link was pointing to pandas json docs.
true
2,594,220,624
https://api.github.com/repos/huggingface/datasets/issues/7234
https://github.com/huggingface/datasets/pull/7234
7,234
No need for dataset_info
closed
2
2024-10-17T09:54:03
2024-10-22T12:30:40
2024-10-21T16:44:34
lhoestq
[]
save a useless call to /api/datasets/repo_id
true
2,593,903,113
https://api.github.com/repos/huggingface/datasets/issues/7233
https://github.com/huggingface/datasets/issues/7233
7,233
Dataset example count issue
open
0
2024-10-17T07:41:44
2024-10-17T07:41:44
null
want-well
[]
### Describe the bug I am fine-tuning a large model. When the dataset has 718 examples, fine-tuning works normally, but an error is raised as soon as I add one example that already appears among the first 718, or add a brand-new example. ### Steps to reproduce the bug 1. The last two examples of my dataset that still fine-tune successfully are: { "messages": [ { "role": "user", "content": "What work needs to be done after completing the design of the correction device?" }, { "role": "assistant", "content": "Once the correction device design is complete, the system needs to be actually tuned...
false
2,593,720,548
https://api.github.com/repos/huggingface/datasets/issues/7232
https://github.com/huggingface/datasets/pull/7232
7,232
(Super tiny doc update) Mention to_polars
closed
1
2024-10-17T06:08:53
2024-10-24T23:11:05
2024-10-24T15:06:16
fzyzcjy
[]
polars is also quite popular now, thus this tiny update can tell users polars is supported
true
2,592,011,737
https://api.github.com/repos/huggingface/datasets/issues/7231
https://github.com/huggingface/datasets/pull/7231
7,231
Fix typo in image dataset docs
closed
1
2024-10-16T14:05:46
2024-10-16T17:06:21
2024-10-16T17:06:19
albertvillanova
[]
Fix typo in image dataset docs. Typo reported by @datavistics.
true
2,589,531,942
https://api.github.com/repos/huggingface/datasets/issues/7230
https://github.com/huggingface/datasets/pull/7230
7,230
Video support
closed
1
2024-10-15T18:17:29
2024-10-24T16:39:51
2024-10-24T16:39:50
lhoestq
[]
(wip and experimental) adding the `Video` type based on `VideoReader` from `decord` ```python >>>from datasets import load_dataset >>> ds = load_dataset("path/to/videos", split="train").with_format("torch") >>> print(ds[0]["video"]) <decord.video_reader.VideoReader object at 0x337a47910> >>> print(ds[0]["vid...
true
2,588,847,398
https://api.github.com/repos/huggingface/datasets/issues/7229
https://github.com/huggingface/datasets/pull/7229
7,229
handle config_name=None in push_to_hub
closed
1
2024-10-15T13:48:57
2024-10-24T17:51:52
2024-10-24T17:51:52
alex-hh
[]
This caught me out - thought it might be better to explicitly handle None?
true
2,587,310,094
https://api.github.com/repos/huggingface/datasets/issues/7228
https://github.com/huggingface/datasets/issues/7228
7,228
Composite (multi-column) features
open
0
2024-10-14T23:59:19
2024-10-15T11:17:15
null
alex-hh
[ "enhancement" ]
### Feature request Structured data types (graphs etc.) might often be most efficiently stored as multiple columns, which then need to be combined during feature decoding Although it is currently possible to nest features as structs, my impression is that in particular when dealing with e.g. a feature composed of...
false
2,587,048,312
https://api.github.com/repos/huggingface/datasets/issues/7227
https://github.com/huggingface/datasets/pull/7227
7,227
fast array extraction
open
4
2024-10-14T20:51:32
2025-01-28T09:39:26
null
alex-hh
[]
Implements #7210 using method suggested in https://github.com/huggingface/datasets/pull/7207#issuecomment-2411789307 ```python import numpy as np from datasets import Dataset, Features, Array3D features=Features(**{"array0": Array3D((None, 10, 10), dtype="float32"), "array1": Array3D((None,10,10), dtype="float3...
true
2,586,920,351
https://api.github.com/repos/huggingface/datasets/issues/7226
https://github.com/huggingface/datasets/issues/7226
7,226
Add R as a How to use from the Polars (R) Library as an option
open
0
2024-10-14T19:56:07
2024-10-14T19:57:13
null
ran-codes
[ "enhancement" ]
### Feature request The boilerplate code to access a dataset via the Hugging Face file system is very useful. Please add: ## Add Polars (R) option The equivalent code works, because the [Polars-R](https://github.com/pola-rs/r-polars) wrapper has Hugging Face functionality as well. ```r library(polars) ...
false
2,586,229,216
https://api.github.com/repos/huggingface/datasets/issues/7225
https://github.com/huggingface/datasets/issues/7225
7,225
Hugging Face Git returns null as Content-Type instead of application/x-git-receive-pack-result
open
0
2024-10-14T14:33:06
2024-10-14T14:33:06
null
padmalcom
[]
### Describe the bug We push changes to our datasets programmatically. Our git client jGit reports that the hf git server returns null as Content-Type after a push. ### Steps to reproduce the bug A basic kotlin application: ``` val person = PersonIdent( "padmalcom", "padmalcom@sth.com" ) ...
false
2,583,233,980
https://api.github.com/repos/huggingface/datasets/issues/7224
https://github.com/huggingface/datasets/pull/7224
7,224
fallback to default feature casting in case custom features not available during dataset loading
open
0
2024-10-12T16:13:56
2024-10-12T16:13:56
null
alex-hh
[]
a fix for #7223 in case datasets is happy to support this kind of extensibility! seems cool / powerful for allowing sharing of datasets with potentially different feature types
true
2,583,231,590
https://api.github.com/repos/huggingface/datasets/issues/7223
https://github.com/huggingface/datasets/issues/7223
7,223
Fallback to arrow defaults when loading dataset with custom features that aren't registered locally
open
0
2024-10-12T16:08:20
2024-10-12T16:08:20
null
alex-hh
[]
### Describe the bug Datasets allows users to create and register custom features. However if datasets are then pushed to the hub, this means that anyone calling load_dataset without registering the custom Features in the same way as the dataset creator will get an error message. It would be nice to offer a fall...
false
2,582,678,033
https://api.github.com/repos/huggingface/datasets/issues/7222
https://github.com/huggingface/datasets/issues/7222
7,222
TypeError: Couldn't cast array of type string to null in long json
open
6
2024-10-12T08:14:59
2025-07-21T03:07:32
null
nokados
[]
### Describe the bug In general, changing the type from string to null is allowed within a dataset — there are even examples of this in the documentation. However, if the dataset is large and unevenly distributed, this allowance stops working. The schema gets locked in after reading a chunk. Consequently, if al...
false
2,582,114,631
https://api.github.com/repos/huggingface/datasets/issues/7221
https://github.com/huggingface/datasets/pull/7221
7,221
add CustomFeature base class to support user-defined features with encoding/decoding logic
closed
2
2024-10-11T20:10:27
2025-01-28T09:40:29
2025-01-28T09:40:29
alex-hh
[]
intended as fix for #7220 if this kind of extensibility is something that datasets is willing to support! ```python from datasets.features.features import CustomFeature class ListOfStrs(CustomFeature): requires_encoding = True def _encode_example(self, value): if isinstance(value, str): ...
true
2,582,036,110
https://api.github.com/repos/huggingface/datasets/issues/7220
https://github.com/huggingface/datasets/issues/7220
7,220
Custom features not compatible with special encoding/decoding logic
open
2
2024-10-11T19:20:11
2024-11-08T15:10:58
null
alex-hh
[]
### Describe the bug It is possible to register custom features using datasets.features.features.register_feature (https://github.com/huggingface/datasets/pull/6727) However such features are not compatible with Features.encode_example/decode_example if they require special encoding / decoding logic because encod...
false
2,581,708,084
https://api.github.com/repos/huggingface/datasets/issues/7219
https://github.com/huggingface/datasets/pull/7219
7,219
bump fsspec
closed
1
2024-10-11T15:56:36
2024-10-14T08:21:56
2024-10-14T08:21:55
lhoestq
[]
null
true
2,581,095,098
https://api.github.com/repos/huggingface/datasets/issues/7217
https://github.com/huggingface/datasets/issues/7217
7,217
ds.map(f, num_proc=10) is slower than df.apply
open
3
2024-10-11T11:04:05
2025-02-28T21:21:01
null
lanlanlanlanlanlan365
[]
### Describe the bug pandas columns: song_id, song_name ds = Dataset.from_pandas(df) def has_cover(song_name): if song_name is None or pd.isna(song_name): return False return 'cover' in song_name.lower() df['has_cover'] = df.song_name.progress_apply(has_cover) ds = ds.map(lambda x: {'has_cov...
false
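The comparison in the report can be sketched with a small DataFrame; the predicate below mirrors the reporter's `has_cover`, while the data is a hypothetical example just to exercise all three branches:

```python
import pandas as pd

def has_cover(song_name):
    # Mirrors the reporter's predicate: treat None/NaN as "no cover"
    if song_name is None or pd.isna(song_name):
        return False
    return "cover" in song_name.lower()

# Hypothetical songs for illustration
df = pd.DataFrame({"song_name": ["Song A", "Hallelujah (Cover)", None]})
df["has_cover"] = df["song_name"].apply(has_cover)
print(df["has_cover"].tolist())  # [False, True, False]
```

The `Dataset.map` version calls the same predicate once per row but adds serialization and (with `num_proc=10`) process-startup overhead, which is one plausible reason a cheap per-row function can end up faster with a plain pandas `apply`.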
2,579,942,939
https://api.github.com/repos/huggingface/datasets/issues/7215
https://github.com/huggingface/datasets/issues/7215
7,215
Iterable dataset map with explicit features causes slowdown for Sequence features
open
0
2024-10-10T22:08:20
2024-10-10T22:10:32
null
alex-hh
[]
### Describe the bug When performing map, it's nice to be able to pass the new feature type, and it is indeed required by interleave and concatenate datasets. However, this can cause a major slowdown for certain types of array features due to the features being re-encoded. This is separate from the slowdown reported i...
false
2,578,743,713
https://api.github.com/repos/huggingface/datasets/issues/7214
https://github.com/huggingface/datasets/issues/7214
7,214
Formatted map + with_format(None) changes array dtype for iterable datasets
open
1
2024-10-10T12:45:16
2024-10-12T16:55:57
null
alex-hh
[]
### Describe the bug When applying with_format -> map -> with_format(None), array dtypes seem to change, even if features are passed ### Steps to reproduce the bug ```python features=Features(**{"array0": Array3D((None, 10, 10), dtype="float32")}) dataset = Dataset.from_dict({f"array0": [np.zeros((100,10,10...
false
2,578,675,565
https://api.github.com/repos/huggingface/datasets/issues/7213
https://github.com/huggingface/datasets/issues/7213
7,213
Add with_rank to Dataset.from_generator
open
0
2024-10-10T12:15:29
2024-10-10T12:17:11
null
muthissar
[ "enhancement" ]
### Feature request Add `with_rank` to `Dataset.from_generator` similar to `Dataset.map` and `Dataset.filter`. ### Motivation As for `Dataset.map` and `Dataset.filter`, this is useful when creating cache files using multi-GPU, where the rank can be used to select GPU IDs. For now, rank can be added in the `ge...
false
2,578,641,259
https://api.github.com/repos/huggingface/datasets/issues/7212
https://github.com/huggingface/datasets/issues/7212
7,212
Windows does not support signal.alarm and signal.signal
open
0
2024-10-10T12:00:19
2024-10-10T12:00:19
null
TomasJavurek
[]
### Describe the bug signal.alarm and signal.signal are used in the load.py module, but these are not supported by Windows. ### Steps to reproduce the bug lighteval accelerate --model_args "pretrained=gpt2,trust_remote_code=True" --tasks "community|kinit_sts" --custom_tasks "community_tasks/kinit_evals.py" --output...
false
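`signal.alarm` is POSIX-only, so any timeout built on it fails on Windows. A portable sketch (not the library's fix, just an illustration of one common alternative) runs the call in a worker thread and bounds the wait:

```python
import threading

def run_with_timeout(fn, timeout, *args):
    # Cross-platform stand-in for signal.alarm: run fn in a daemon thread
    # and stop waiting for it after `timeout` seconds.
    result = {}

    def target():
        result["value"] = fn(*args)

    worker = threading.Thread(target=target, daemon=True)
    worker.start()
    worker.join(timeout)
    if "value" not in result:
        raise TimeoutError(f"timed out after {timeout}s")
    return result["value"]

print(run_with_timeout(lambda x: x * 2, 1.0, 21))  # 42
```

Unlike `signal.alarm`, this cannot interrupt the worker; a timed-out call keeps running in the background, which is the usual trade-off of thread-based timeouts.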
2,576,400,502
https://api.github.com/repos/huggingface/datasets/issues/7211
https://github.com/huggingface/datasets/issues/7211
7,211
Describe only selected fields in README
open
0
2024-10-09T16:25:47
2024-10-09T16:25:47
null
alozowski
[ "enhancement" ]
### Feature request Hi Datasets team! Is it possible to add the ability to describe only selected fields of the dataset files in `README.md`? For example, I have this open dataset ([open-llm-leaderboard/results](https://huggingface.co/datasets/open-llm-leaderboard/results?row=0)) and I want to describe only some f...
false
2,575,883,939
https://api.github.com/repos/huggingface/datasets/issues/7210
https://github.com/huggingface/datasets/issues/7210
7,210
Convert Array features to numpy arrays rather than lists by default
open
0
2024-10-09T13:05:21
2024-10-09T13:05:21
null
alex-hh
[ "enhancement" ]
### Feature request It is currently quite easy to cause massive slowdowns when using datasets and not familiar with the underlying data conversions by e.g. making bad choices of formatting. Would it be more user-friendly to set defaults that avoid this as much as possible? e.g. format Array features as numpy arrays...
false
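The cost this request alludes to can be seen without `datasets` at all: materializing an array as nested Python lists allocates a Python object per element, while keeping it as numpy is near zero-copy. The shapes below are arbitrary:

```python
import time
import numpy as np

arr = np.zeros((200, 10, 10), dtype="float32")

# Decoding to nested Python lists: one Python float object per element
t0 = time.perf_counter()
as_lists = arr.tolist()
list_seconds = time.perf_counter() - t0

# Keeping numpy: a view over the same buffer, no per-element work
t0 = time.perf_counter()
as_numpy = np.asarray(arr)
numpy_seconds = time.perf_counter() - t0

print(type(as_lists).__name__, type(as_numpy).__name__)  # list ndarray
```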
2,575,526,651
https://api.github.com/repos/huggingface/datasets/issues/7209
https://github.com/huggingface/datasets/pull/7209
7,209
Preserve features in iterable dataset.filter
closed
3
2024-10-09T10:42:05
2024-10-16T11:27:22
2024-10-09T16:04:07
alex-hh
[]
Fixes example in #7208 - I'm not sure what other checks I should do? @lhoestq I also haven't thought hard about the concatenate / interleaving example iterables but think this might work assuming that features are either all identical or None?
true
2,575,484,256
https://api.github.com/repos/huggingface/datasets/issues/7208
https://github.com/huggingface/datasets/issues/7208
7,208
Iterable dataset.filter should not override features
closed
1
2024-10-09T10:23:45
2024-10-09T16:08:46
2024-10-09T16:08:45
alex-hh
[]
### Describe the bug When calling filter on an iterable dataset, the features get set to None ### Steps to reproduce the bug ```python import numpy as np import time from datasets import Dataset, Features, Array3D features=Features(**{"array0": Array3D((None, 10, 10), dtype="float32"), "array1": Array3D((None,...
false
2,573,582,335
https://api.github.com/repos/huggingface/datasets/issues/7207
https://github.com/huggingface/datasets/pull/7207
7,207
apply formatting after iter_arrow to speed up format -> map, filter for iterable datasets
closed
17
2024-10-08T15:44:53
2025-01-14T18:36:03
2025-01-14T16:59:30
alex-hh
[]
I got to this by hacking around a bit, but it seems to solve #7206. I have no idea if this approach makes sense or would break something else? Could maybe work on a full PR if this looks reasonable @lhoestq ? I imagine the same issue might affect other iterable dataset methods?
true
2,573,567,467
https://api.github.com/repos/huggingface/datasets/issues/7206
https://github.com/huggingface/datasets/issues/7206
7,206
Slow iteration for iterable dataset with numpy formatting for array data
open
1
2024-10-08T15:38:11
2024-10-17T17:14:52
null
alex-hh
[]
### Describe the bug When working with large arrays, setting with_format to e.g. numpy then applying map causes a significant slowdown for iterable datasets. ### Steps to reproduce the bug ```python import numpy as np import time from datasets import Dataset, Features, Array3D features=Features(**{"array...
false
2,573,490,859
https://api.github.com/repos/huggingface/datasets/issues/7205
https://github.com/huggingface/datasets/pull/7205
7,205
fix ci benchmark
closed
1
2024-10-08T15:06:18
2024-10-08T15:25:28
2024-10-08T15:25:25
lhoestq
[]
We're not using the benchmarks anymore, and they were not working anyway due to token permissions. I keep the code in case we ever want to re-run the benchmark manually.
true
2,573,289,063
https://api.github.com/repos/huggingface/datasets/issues/7204
https://github.com/huggingface/datasets/pull/7204
7,204
fix unbatched arrow map for iterable datasets
closed
1
2024-10-08T13:54:09
2024-10-08T14:19:47
2024-10-08T14:19:47
alex-hh
[]
Fixes the bug when applying map to an arrow-formatted iterable dataset described here: https://github.com/huggingface/datasets/issues/6833#issuecomment-2399903885 ```python from datasets import load_dataset ds = load_dataset("rotten_tomatoes", split="train", streaming=True) ds = ds.with_format("arrow").map(l...
true
2,573,154,222
https://api.github.com/repos/huggingface/datasets/issues/7203
https://github.com/huggingface/datasets/pull/7203
7,203
with_format docstring
closed
1
2024-10-08T13:05:19
2024-10-08T13:13:12
2024-10-08T13:13:05
lhoestq
[]
reported at https://github.com/huggingface/datasets/issues/3444
true
2,572,583,798
https://api.github.com/repos/huggingface/datasets/issues/7202
https://github.com/huggingface/datasets/issues/7202
7,202
`from_parquet` return type annotation
open
0
2024-10-08T09:08:10
2024-10-08T09:08:10
null
saiden89
[]
### Describe the bug As already posted in https://github.com/microsoft/pylance-release/issues/6534, the correct type hinting fails when building a dataset using the `from_parquet` constructor. Their suggestion is to comprehensively annotate the method's return type to better align with the docstring information. ###...
false
2,569,837,015
https://api.github.com/repos/huggingface/datasets/issues/7201
https://github.com/huggingface/datasets/issues/7201
7,201
`load_dataset()` of images from a single directory where `train.png` image exists
open
0
2024-10-07T09:14:17
2024-10-07T09:14:17
null
SagiPolaczek
[]
### Describe the bug Hey! Firstly, thanks for maintaining such a framework! I had a small issue where I wanted to load a custom dataset of image+text captioning. I had all of my images in a single directory, and one of the images had the name `train.png`. Then, the loaded dataset had only this image. I guess it'...
false
2,567,921,694
https://api.github.com/repos/huggingface/datasets/issues/7200
https://github.com/huggingface/datasets/pull/7200
7,200
Fix the environment variable for huggingface cache
closed
4
2024-10-05T11:54:35
2024-10-30T23:10:27
2024-10-08T15:45:18
torotoki
[]
Resolves #6256. As far as I tested, `HF_DATASETS_CACHE` was ignored: I could not use this environment variable to specify any cache directory other than the default one. `HF_HOME` has worked. Perhaps the recent change to file downloading by `huggingface_hub` could affect this bug. In my testing, I could not sp...
true
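The workaround implied by the report (`HF_HOME` is honored even when `HF_DATASETS_CACHE` is not) can be sketched as follows; the path is a hypothetical example, and the variable must be set before `datasets`/`huggingface_hub` are imported, since they read it at import time:

```python
import os

# Hypothetical cache location; any writable directory works
os.environ["HF_HOME"] = "/tmp/hf_home"

# Import datasets only after this point so the setting takes effect, e.g.:
#   from datasets import load_dataset
print(os.environ["HF_HOME"])  # /tmp/hf_home
```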