id (int64) | url (string) | html_url (string) | number (int64) | title (string) | state (string, 2 classes) | comments (int64) | created_at (timestamp[s]) | updated_at (timestamp[s]) | closed_at (timestamp[s], nullable) | user_login (string) | labels (list) | body (string, nullable) | is_pull_request (bool)
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
2,997,666,366 | https://api.github.com/repos/huggingface/datasets/issues/7521 | https://github.com/huggingface/datasets/pull/7521 | 7,521 | fix: Image Feature in Datasets Library Fails to Handle bytearray Objects from Spark DataFrames (#7517) | closed | 2 | 2025-04-15T21:23:58 | 2025-05-07T14:17:29 | 2025-05-07T14:17:29 | giraffacarp | [] | ## Task
Support bytes-like objects (bytes and bytearray) in Features classes
### Description
The `Features` classes only accept `bytes` objects for binary data, but not `bytearray`. This leads to errors when using `IterableDataset.from_spark()` with Spark DataFrames as they contain `bytearray` objects, even though... | true |
2,997,422,044 | https://api.github.com/repos/huggingface/datasets/issues/7520 | https://github.com/huggingface/datasets/issues/7520 | 7,520 | Update items in the dataset without `map` | open | 1 | 2025-04-15T19:39:01 | 2025-04-19T18:47:46 | null | mashdragon | [
"enhancement"
] | ### Feature request
I would like to be able to update items in my dataset without affecting all rows. At least if there was a range option, I would be able to process those items, save the dataset, and then continue.
If I am supposed to split the dataset first, that is not clear, since the docs suggest that any of th... | false |
2,996,458,961 | https://api.github.com/repos/huggingface/datasets/issues/7519 | https://github.com/huggingface/datasets/pull/7519 | 7,519 | pdf docs fixes | closed | 1 | 2025-04-15T13:35:56 | 2025-04-15T13:38:31 | 2025-04-15T13:36:03 | lhoestq | [] | close https://github.com/huggingface/datasets/issues/7494 | true |
2,996,141,825 | https://api.github.com/repos/huggingface/datasets/issues/7518 | https://github.com/huggingface/datasets/issues/7518 | 7,518 | num_proc parallelization works only for first ~10s. | open | 2 | 2025-04-15T11:44:03 | 2025-04-15T13:12:13 | null | pshishodiaa | [] | ### Describe the bug
When I try to load an already downloaded dataset with num_proc=64, the speed is very high for the first 10-20 seconds, achieving 30-40K samples/s with 100% utilization on all cores, but it soon drops to <= 1000 samples/s with almost 0% utilization on most cores.
### Steps to reproduce the bug
```
// do... | false |
2,996,106,077 | https://api.github.com/repos/huggingface/datasets/issues/7517 | https://github.com/huggingface/datasets/issues/7517 | 7,517 | Image Feature in Datasets Library Fails to Handle bytearray Objects from Spark DataFrames | closed | 4 | 2025-04-15T11:29:17 | 2025-05-07T14:17:30 | 2025-05-07T14:17:30 | giraffacarp | [] | ### Describe the bug
When using `IterableDataset.from_spark()` with a Spark DataFrame containing image data, the `Image` feature class fails to properly process this data type, causing an `AttributeError: 'bytearray' object has no attribute 'get'`
### Steps to reproduce the bug
1. Create a Spark DataFrame with a col... | false |
2,995,780,283 | https://api.github.com/repos/huggingface/datasets/issues/7516 | https://github.com/huggingface/datasets/issues/7516 | 7,516 | unsloth/DeepSeek-R1-Distill-Qwen-32B server error | closed | 0 | 2025-04-15T09:26:53 | 2025-04-15T09:57:26 | 2025-04-15T09:57:26 | Editor-1 | [] | ### Describe the bug
hfhubhttperror: 500 server error: internal server error for url: https://huggingface.co/api/models/unsloth/deepseek-r1-distill-qwen-32b-bnb-4bit/commits/main (request id: root=1-67fe23fa-3a2150eb444c2a823c388579;de3aed68-c397-4da5-94d4-6565efd3b919) internal error - we're working hard to fix this ... | false |
2,995,082,418 | https://api.github.com/repos/huggingface/datasets/issues/7515 | https://github.com/huggingface/datasets/issues/7515 | 7,515 | `concatenate_datasets` does not preserve Pytorch format for IterableDataset | closed | 2 | 2025-04-15T04:36:34 | 2025-05-19T15:07:38 | 2025-05-19T15:07:38 | francescorubbo | [] | ### Describe the bug
When concatenating datasets with `concatenate_datasets`, I would expect the resulting combined dataset to be in the same format as the inputs (assuming it's consistent). This is indeed the behavior when combining `Dataset`, but not when combining `IterableDataset`. Specifically, when applying `con... | false |
2,994,714,923 | https://api.github.com/repos/huggingface/datasets/issues/7514 | https://github.com/huggingface/datasets/pull/7514 | 7,514 | Do not hash `generator` in `BuilderConfig.create_config_id` | closed | 0 | 2025-04-15T01:26:43 | 2025-04-23T11:55:55 | 2025-04-15T16:27:51 | simonreise | [] | `Dataset.from_generator` function passes all of its arguments to `BuilderConfig.create_config_id`, including generator function itself. `BuilderConfig.create_config_id` function tries to hash all the args, and hashing a `generator` can take a large amount of time or even cause MemoryError if the dataset processed in a ... | true |
2,994,678,437 | https://api.github.com/repos/huggingface/datasets/issues/7513 | https://github.com/huggingface/datasets/issues/7513 | 7,513 | MemoryError while creating dataset from generator | open | 4 | 2025-04-15T01:02:02 | 2025-04-23T19:37:08 | null | simonreise | [] | ### Describe the bug
# TL:DR
`Dataset.from_generator` function passes all of its arguments to `BuilderConfig.create_config_id`, including `generator` function itself. `BuilderConfig.create_config_id` function tries to hash all the args, which can take a large amount of time or even cause MemoryError if the dataset pr... | false |
2,994,043,544 | https://api.github.com/repos/huggingface/datasets/issues/7512 | https://github.com/huggingface/datasets/issues/7512 | 7,512 | .map() fails if function uses pyvista | open | 1 | 2025-04-14T19:43:02 | 2025-04-14T20:01:53 | null | el-hult | [] | ### Describe the bug
Using PyVista inside a .map() produces a crash with `objc[78796]: +[NSResponder initialize] may have been in progress in another thread when fork() was called. We cannot safely call it or ignore it in the fork() child process. Crashing instead. Set a breakpoint on objc_initializeAfterForkError to ... | false |
2,992,131,117 | https://api.github.com/repos/huggingface/datasets/issues/7510 | https://github.com/huggingface/datasets/issues/7510 | 7,510 | Incompatible dill version (0.3.9) in datasets 2.18.0 - 3.5.0 | open | 6 | 2025-04-14T07:22:44 | 2025-05-19T14:54:04 | null | JGrel | [] | ### Describe the bug
Datasets 2.18.0 - 3.5.0 has a dependency on dill < 0.3.9. This causes errors with dill >= 0.3.9.
Could you please take a look into it and make it compatible?
### Steps to reproduce the bug
1. Install datasets >= 2.18.0
2. Install dill >=0.3.9
3. Run pip check
4. Output:
ERROR: pip's dependenc... | false |
2,991,484,542 | https://api.github.com/repos/huggingface/datasets/issues/7509 | https://github.com/huggingface/datasets/issues/7509 | 7,509 | Dataset uses excessive memory when loading files | open | 12 | 2025-04-13T21:09:49 | 2025-04-28T15:18:55 | null | avishaiElmakies | [] | ### Describe the bug
Hi
I am having an issue when loading a dataset.
I have about 200 JSON files, each about 1GB (total about 215GB). Each row has a few features, which are lists of ints.
I am trying to load the dataset using `load_dataset`.
The dataset is about 1.5M samples
I use `num_proc=32` and a node with 378GB of... | false |
2,986,612,934 | https://api.github.com/repos/huggingface/datasets/issues/7508 | https://github.com/huggingface/datasets/issues/7508 | 7,508 | Iterating over Image feature columns is extremely slow | open | 2 | 2025-04-10T19:00:54 | 2025-04-15T17:57:08 | null | sohamparikh | [] | We are trying to load datasets where the image column stores `PIL.PngImagePlugin.PngImageFile` images. However, iterating over these datasets is extremely slow.
What I have found:
1. It is the presence of the image column that causes the slowdown. Removing the column from the dataset results in blazingly fast (as expe... | false |
2,984,309,806 | https://api.github.com/repos/huggingface/datasets/issues/7507 | https://github.com/huggingface/datasets/issues/7507 | 7,507 | Front-end statistical data quantity deviation | open | 1 | 2025-04-10T02:51:38 | 2025-04-15T12:54:51 | null | rangehow | [] | ### Describe the bug
While browsing the dataset at https://huggingface.co/datasets/NeuML/wikipedia-20250123, I noticed that a dataset with nearly 7M entries was estimated to be only 4M in size—almost half the actual amount. According to the post-download loading and the dataset_info (https://huggingface.co/datasets/Ne... | false |
2,981,687,450 | https://api.github.com/repos/huggingface/datasets/issues/7506 | https://github.com/huggingface/datasets/issues/7506 | 7,506 | HfHubHTTPError: 429 Client Error: Too Many Requests for URL when trying to access Fineweb-10BT on 4A100 GPUs using SLURM | open | 2 | 2025-04-09T06:32:04 | 2025-06-29T06:04:59 | null | calvintanama | [] | ### Describe the bug
I am trying to run some finetunings on 4 A100 GPUs using SLURM using axolotl training framework which in turn uses Huggingface's Trainer and Accelerate on [Fineweb-10BT](https://huggingface.co/datasets/HuggingFaceFW/fineweb), but I end up running into 429 Client Error: Too Many Requests for URL er... | false |
2,979,926,156 | https://api.github.com/repos/huggingface/datasets/issues/7505 | https://github.com/huggingface/datasets/issues/7505 | 7,505 | HfHubHTTPError: 403 Forbidden: None. Cannot access content at: https://hf.co/api/s3proxy | open | 0 | 2025-04-08T14:08:40 | 2025-04-08T14:08:40 | null | hissain | [] | I have already logged in Huggingface using CLI with my valid token. Now trying to download the datasets using following code:
from transformers import WhisperProcessor, WhisperForConditionalGeneration, WhisperTokenizer, Trainer, TrainingArguments, DataCollatorForSeq2Seq
from datasets import load_dataset, Data... | false |
2,979,410,641 | https://api.github.com/repos/huggingface/datasets/issues/7504 | https://github.com/huggingface/datasets/issues/7504 | 7,504 | BuilderConfig ParquetConfig(...) doesn't have a 'use_auth_token' key. | open | 3 | 2025-04-08T10:55:03 | 2025-06-28T09:18:09 | null | tteguayco | [] | ### Describe the bug
Trying to run the following fine-tuning script (based on this page [here](https://github.com/huggingface/instruction-tuned-sd)):
```
! accelerate launch /content/instruction-tuned-sd/finetune_instruct_pix2pix.py \
--pretrained_model_name_or_path=${MODEL_ID} \
--dataset_name=${DATASET_NAME... | false |
2,978,512,625 | https://api.github.com/repos/huggingface/datasets/issues/7503 | https://github.com/huggingface/datasets/issues/7503 | 7,503 | Inconsistency between load_dataset and load_from_disk functionality | open | 2 | 2025-04-08T03:46:22 | 2025-06-28T08:51:16 | null | zzzzzec | [] | ## Issue Description
I've encountered confusion when using `load_dataset` and `load_from_disk` in the datasets library. Specifically, when working offline with the gsm8k dataset, I can load it using a local path:
```python
import datasets
ds = datasets.load_dataset('/root/xxx/datasets/gsm8k', 'main')
```
output:
```t... | false |
2,977,453,814 | https://api.github.com/repos/huggingface/datasets/issues/7502 | https://github.com/huggingface/datasets/issues/7502 | 7,502 | `load_dataset` of size 40GB creates a cache of >720GB | closed | 2 | 2025-04-07T16:52:34 | 2025-04-15T15:22:12 | 2025-04-15T15:22:11 | pietrolesci | [] | Hi there,
I am trying to load a dataset from the Hugging Face Hub and split it into train and validation splits. Somehow, when I try to do it with `load_dataset`, it exhausts my disk quota. So, I tried manually downloading the parquet files from the hub and loading them as follows:
```python
ds = DatasetDict(
... | false |
2,976,721,014 | https://api.github.com/repos/huggingface/datasets/issues/7501 | https://github.com/huggingface/datasets/issues/7501 | 7,501 | Nested Feature raises ArrowNotImplementedError: Unsupported cast using function cast_struct | closed | 1 | 2025-04-07T12:35:39 | 2025-04-07T12:43:04 | 2025-04-07T12:43:03 | yaner-here | [] | ### Describe the bug
`datasets.Features` seems to be unable to handle a JSON file that contains fields of `list[dict]`.
### Steps to reproduce the bug
```json
// test.json
{"a": 1, "b": [{"c": 2, "d": 3}, {"c": 4, "d": 5}]}
{"a": 5, "b": [{"c": 7, "d": 8}, {"c": 9, "d": 10}]}
```
```python
import json
from datasets i... | false |
2,974,841,921 | https://api.github.com/repos/huggingface/datasets/issues/7500 | https://github.com/huggingface/datasets/issues/7500 | 7,500 | Make `with_format` correctly indicate that a `Dataset` is compatible with PyTorch's `Dataset` class | open | 1 | 2025-04-06T09:56:09 | 2025-04-15T12:57:39 | null | benglewis | [
"enhancement"
] | ### Feature request
Currently `datasets` does not correctly indicate to the Python type-checker (e.g. `pyright` / `Pylance`) that the output of `with_format` is compatible with PyTorch's `Dataloader` since it does not indicate that the HuggingFace `Dataset` is compatible with the PyTorch `Dataset` class. It would be g... | false |
2,973,489,126 | https://api.github.com/repos/huggingface/datasets/issues/7499 | https://github.com/huggingface/datasets/pull/7499 | 7,499 | Added cache dirs to load and file_utils | closed | 5 | 2025-04-04T22:36:04 | 2025-05-07T14:07:34 | 2025-05-07T14:07:34 | gmongaras | [] | When adding "cache_dir" to datasets.load_dataset, the cache_dir gets lost in the function calls, changing the cache dir to the default path. This fixes a few of these instances. | true |
2,969,218,273 | https://api.github.com/repos/huggingface/datasets/issues/7498 | https://github.com/huggingface/datasets/issues/7498 | 7,498 | Extreme memory bandwidth. | open | 0 | 2025-04-03T11:09:08 | 2025-04-03T11:11:22 | null | J0SZ | [] | ### Describe the bug
When I use HF datasets on 4 GPUs with 40 workers, I get extreme memory bandwidth of a constant ~3GB/s.
However, if I wrap the dataset in `IterableDataset`, this issue is gone and the data also loads way faster (4x faster training on 1 worker).
It seems like the workers don't share memory and b... | false |
2,968,553,693 | https://api.github.com/repos/huggingface/datasets/issues/7497 | https://github.com/huggingface/datasets/issues/7497 | 7,497 | How to convert videos to images? | open | 1 | 2025-04-03T07:08:39 | 2025-04-15T12:35:15 | null | Loki-Lu | [
"enhancement"
] | ### Feature request
Does someone know how to extract images from videos?
### Motivation
I am trying to use openpi(https://github.com/Physical-Intelligence/openpi) to finetune my Lerobot dataset(V2.0 and V2.1). I find that although the codedaset is v2.0, they are different. It seems like Lerobot V2.0 has two versi... | false |
2,967,345,522 | https://api.github.com/repos/huggingface/datasets/issues/7496 | https://github.com/huggingface/datasets/issues/7496 | 7,496 | Json builder: Allow features to override problematic Arrow types | open | 1 | 2025-04-02T19:27:16 | 2025-04-15T13:06:09 | null | edmcman | [
"enhancement"
] | ### Feature request
In the JSON builder, use explicitly requested feature types before or while converting to Arrow.
### Motivation
Working with JSON datasets is really hard because of Arrow. At the very least, it seems like it should be possible to work-around these problems by explicitly setting problematic colum... | false |
2,967,034,060 | https://api.github.com/repos/huggingface/datasets/issues/7495 | https://github.com/huggingface/datasets/issues/7495 | 7,495 | Columns in the dataset obtained though load_dataset do not correspond to the one in the dataset viewer since 3.4.0 | closed | 3 | 2025-04-02T17:01:11 | 2025-07-02T23:24:57 | 2025-07-02T23:24:57 | bruno-hays | [] | ### Describe the bug
I have noticed that on my dataset named [BrunoHays/Accueil_UBS](https://huggingface.co/datasets/BrunoHays/Accueil_UBS), since the version 3.4.0, every column except audio is missing when I load the dataset.
Interestingly, the dataset viewer still shows the correct columns
### Steps to reproduce ... | false |
2,965,347,685 | https://api.github.com/repos/huggingface/datasets/issues/7494 | https://github.com/huggingface/datasets/issues/7494 | 7,494 | Broken links in pdf loading documentation | closed | 1 | 2025-04-02T06:45:22 | 2025-04-15T13:36:25 | 2025-04-15T13:36:04 | VyoJ | [] | ### Describe the bug
Hi, just a couple of small issues I ran into while reading the docs for [loading pdf data](https://huggingface.co/docs/datasets/main/en/document_load):
1. The link for the [`Create a pdf dataset`](https://huggingface.co/docs/datasets/main/en/document_load#pdffolder) points to https://huggingface.... | false |
2,964,025,179 | https://api.github.com/repos/huggingface/datasets/issues/7493 | https://github.com/huggingface/datasets/issues/7493 | 7,493 | push_to_hub does not upload videos | open | 2 | 2025-04-01T17:00:20 | 2025-08-01T18:24:24 | null | DominikVincent | [] | ### Describe the bug
Hello,
I would like to upload a video dataset (some .mp4 files and some segments within them), i.e. rows correspond to subsequences from videos. Videos might be referenced by several rows.
I created a dataset locally and it references the videos and the video readers can read them correctly. I u... | false |
2,959,088,568 | https://api.github.com/repos/huggingface/datasets/issues/7492 | https://github.com/huggingface/datasets/pull/7492 | 7,492 | Closes #7457 | closed | 1 | 2025-03-30T20:41:20 | 2025-04-13T22:05:07 | 2025-04-13T22:05:07 | Harry-Yang0518 | [] | This PR updates the documentation to include the HF_DATASETS_CACHE environment variable, which allows users to customize the cache location for datasets—similar to HF_HUB_CACHE for models. | true |
2,959,085,647 | https://api.github.com/repos/huggingface/datasets/issues/7491 | https://github.com/huggingface/datasets/pull/7491 | 7,491 | docs: update cache.mdx to include HF_DATASETS_CACHE documentation | closed | 1 | 2025-03-30T20:35:03 | 2025-03-30T20:36:40 | 2025-03-30T20:36:40 | Harry-Yang0518 | [] | null | true |
2,958,826,222 | https://api.github.com/repos/huggingface/datasets/issues/7490 | https://github.com/huggingface/datasets/pull/7490 | 7,490 | (refactor) remove redundant logic in _check_valid_index_key | open | 0 | 2025-03-30T11:45:42 | 2025-03-30T11:50:22 | null | suzyahyah | [] | This PR contributes a minor refactor, in a small function in `src/datasets/formatting/formatting.py`. No change in logic.
In the original code, there are separate if-conditionals for `isinstance(key, range)` and `isinstance(key, Iterable)`, with essentially the same logic.
This PR combines these two using a sin... | true |
2,958,204,763 | https://api.github.com/repos/huggingface/datasets/issues/7489 | https://github.com/huggingface/datasets/pull/7489 | 7,489 | fix: loading of datasets from Disk(#7373) | open | 6 | 2025-03-29T16:22:58 | 2025-04-24T16:36:36 | null | sam-hey | [] | Fixes dataset loading from disk by ensuring that memory maps and streams are properly closed.
For more details, see https://github.com/huggingface/datasets/issues/7373. | true |
2,956,559,358 | https://api.github.com/repos/huggingface/datasets/issues/7488 | https://github.com/huggingface/datasets/pull/7488 | 7,488 | Support underscore int read instruction | closed | 2 | 2025-03-28T16:01:15 | 2025-03-28T16:20:44 | 2025-03-28T16:20:43 | lhoestq | [] | close https://github.com/huggingface/datasets/issues/7481 | true |
2,956,533,448 | https://api.github.com/repos/huggingface/datasets/issues/7487 | https://github.com/huggingface/datasets/pull/7487 | 7,487 | Write pdf in map | closed | 1 | 2025-03-28T15:49:25 | 2025-03-28T17:09:53 | 2025-03-28T17:09:51 | lhoestq | [] | Fix this error when mapping a PDF dataset
```
pyarrow.lib.ArrowInvalid: Could not convert <pdfplumber.pdf.PDF object at 0x13498ee40> with type PDF: did not recognize Python value type when inferring an Arrow data type
```
and also let map() outputs be lists of images or pdfs | true |
2,954,042,179 | https://api.github.com/repos/huggingface/datasets/issues/7486 | https://github.com/huggingface/datasets/issues/7486 | 7,486 | `shared_datadir` fixture is missing | closed | 1 | 2025-03-27T18:17:12 | 2025-03-27T19:49:11 | 2025-03-27T19:49:10 | lahwaacz | [] | ### Describe the bug
Running the tests for the latest release fails due to missing `shared_datadir` fixture.
### Steps to reproduce the bug
Running `pytest` while building a package for Arch Linux leads to these errors:
```
==================================== ERRORS ====================================
_________ E... | false |
2,953,696,519 | https://api.github.com/repos/huggingface/datasets/issues/7485 | https://github.com/huggingface/datasets/pull/7485 | 7,485 | set dev version | closed | 1 | 2025-03-27T16:39:34 | 2025-03-27T16:41:59 | 2025-03-27T16:39:42 | lhoestq | [] | null | true |
2,953,677,168 | https://api.github.com/repos/huggingface/datasets/issues/7484 | https://github.com/huggingface/datasets/pull/7484 | 7,484 | release: 3.5.0 | closed | 1 | 2025-03-27T16:33:27 | 2025-03-27T16:35:44 | 2025-03-27T16:34:22 | lhoestq | [] | null | true |
2,951,856,468 | https://api.github.com/repos/huggingface/datasets/issues/7483 | https://github.com/huggingface/datasets/pull/7483 | 7,483 | Support skip_trying_type | closed | 6 | 2025-03-27T07:07:20 | 2025-04-29T04:14:57 | 2025-04-09T09:53:10 | yoshitomo-matsubara | [] | This PR addresses Issue #7472
cc: @lhoestq | true |
2,950,890,368 | https://api.github.com/repos/huggingface/datasets/issues/7482 | https://github.com/huggingface/datasets/pull/7482 | 7,482 | Implement capability to restore non-nullability in Features | closed | 3 | 2025-03-26T22:16:09 | 2025-05-15T15:00:59 | 2025-05-15T15:00:59 | BramVanroy | [] | This PR attempts to keep track of non_nullable pyarrow fields when converting a `pa.Schema` to `Features`. At the same time, when outputting the `arrow_schema`, the original non-nullable fields are restored. This allows for more consistent behavior and avoids breaking behavior as illustrated in #7479.
I am by no mea... | true |
2,950,692,971 | https://api.github.com/repos/huggingface/datasets/issues/7481 | https://github.com/huggingface/datasets/issues/7481 | 7,481 | deal with python `10_000` legal number in slice syntax | closed | 1 | 2025-03-26T20:10:54 | 2025-03-28T16:20:44 | 2025-03-28T16:20:44 | sfc-gh-sbekman | [
"enhancement"
] | ### Feature request
```
In [6]: ds = datasets.load_dataset("HuggingFaceH4/ultrachat_200k", split="train_sft[:1000]")
In [7]: ds = datasets.load_dataset("HuggingFaceH4/ultrachat_200k", split="train_sft[:1_000]")
[dozens of frames skipped]
File /usr/local/lib/python3.10/dist-packages/datasets/arrow_reader.py:444, in _s... | false |
2,950,315,214 | https://api.github.com/repos/huggingface/datasets/issues/7480 | https://github.com/huggingface/datasets/issues/7480 | 7,480 | HF_DATASETS_CACHE ignored? | open | 6 | 2025-03-26T17:19:34 | 2025-04-28T10:16:16 | null | stephenroller | [] | ### Describe the bug
I'm struggling to get things to respect HF_DATASETS_CACHE.
Rationale: I'm on a system that uses NFS for homedir, so downloading to NFS is expensive, slow, and wastes valuable quota compared to local disk. Instead, it seems to rely mostly on HF_HUB_CACHE.
Current version: 3.2.1dev. In the process... | false |
2,950,235,396 | https://api.github.com/repos/huggingface/datasets/issues/7479 | https://github.com/huggingface/datasets/issues/7479 | 7,479 | Features.from_arrow_schema is destructive | open | 0 | 2025-03-26T16:46:43 | 2025-03-26T16:46:58 | null | BramVanroy | [] | ### Describe the bug
I came across this, perhaps niche, bug where `Features` does not/cannot account for pyarrow's `nullable=False` option in Fields. Interestingly, I found that in regular "flat" fields this does not necessarily lead to conflicts, but when a non-nullable field is in a struct, an incompatibility arises... | false |
2,948,993,461 | https://api.github.com/repos/huggingface/datasets/issues/7478 | https://github.com/huggingface/datasets/pull/7478 | 7,478 | update fsspec 2025.3.0 | closed | 2 | 2025-03-26T09:53:05 | 2025-03-28T19:15:54 | 2025-03-28T15:51:55 | peteski22 | [] | It appears there have been two releases of fsspec since this dependency was last updated, it would be great if Datasets could be updated so that it didn't hold back the usage of newer fsspec versions in consuming projects.
PR based on https://github.com/huggingface/datasets/pull/7352 | true |
2,947,169,460 | https://api.github.com/repos/huggingface/datasets/issues/7477 | https://github.com/huggingface/datasets/issues/7477 | 7,477 | What is the canonical way to compress a Dataset? | open | 4 | 2025-03-25T16:47:51 | 2025-04-03T09:13:11 | null | eric-czech | [] | Given that Arrow is the preferred backend for a Dataset, what is a user supposed to do if they want concurrent reads, concurrent writes AND on-disk compression for a larger dataset?
Parquet would be the obvious answer except that there is no native support for writing sharded, parquet datasets concurrently [[1](https:... | false |
2,946,997,924 | https://api.github.com/repos/huggingface/datasets/issues/7476 | https://github.com/huggingface/datasets/pull/7476 | 7,476 | Prioritize json | closed | 1 | 2025-03-25T15:44:31 | 2025-03-25T15:47:00 | 2025-03-25T15:45:00 | lhoestq | [] | `datasets` should load the JSON data in https://huggingface.co/datasets/facebook/natural_reasoning, not the PDF
2,946,640,570 | https://api.github.com/repos/huggingface/datasets/issues/7475 | https://github.com/huggingface/datasets/issues/7475 | 7,475 | IterableDataset's state_dict shard_example_idx is always equal to the number of samples in a shard | closed | 8 | 2025-03-25T13:58:07 | 2025-05-06T14:22:19 | 2025-05-06T14:05:07 | bruno-hays | [] | ### Describe the bug
I've noticed a strange behaviour with the IterableDataset state_dict: the value of shard_example_idx is always equal to the number of samples in a shard.
### Steps to reproduce the bug
I am reusing the example from the doc
```python
from datasets import Dataset
ds = Dataset.from_dict({"a": range(6)}).to_... | false |
2,945,066,258 | https://api.github.com/repos/huggingface/datasets/issues/7474 | https://github.com/huggingface/datasets/pull/7474 | 7,474 | Remove conditions for Python < 3.9 | closed | 3 | 2025-03-25T03:08:04 | 2025-04-16T00:11:06 | 2025-04-15T16:07:55 | cyyever | [] | This PR remove conditions for Python < 3.9. | true |
2,939,034,643 | https://api.github.com/repos/huggingface/datasets/issues/7473 | https://github.com/huggingface/datasets/issues/7473 | 7,473 | Webdataset data format problem | closed | 1 | 2025-03-21T17:23:52 | 2025-03-21T19:19:58 | 2025-03-21T19:19:58 | edmcman | [] | ### Describe the bug
Please see https://huggingface.co/datasets/ejschwartz/idioms/discussions/1
Error code: FileFormatMismatchBetweenSplitsError
All three splits, train, test, and validation, use webdataset. But only the train split has more than one file. How can I force the other two splits to also be interpreted ... | false |
2,937,607,272 | https://api.github.com/repos/huggingface/datasets/issues/7472 | https://github.com/huggingface/datasets/issues/7472 | 7,472 | Label casting during `map` process is canceled after the `map` process | closed | 6 | 2025-03-21T07:56:22 | 2025-04-10T05:11:15 | 2025-04-10T05:11:14 | yoshitomo-matsubara | [] | ### Describe the bug
When preprocessing a multi-label dataset, I introduced a step to convert int labels to float labels as [BCEWithLogitsLoss](https://pytorch.org/docs/stable/generated/torch.nn.BCEWithLogitsLoss.html) expects float labels and forward function of models in transformers package internally use `BCEWithL... | false |
2,937,530,069 | https://api.github.com/repos/huggingface/datasets/issues/7471 | https://github.com/huggingface/datasets/issues/7471 | 7,471 | Adding argument to `_get_data_files_patterns` | closed | 3 | 2025-03-21T07:17:53 | 2025-03-27T12:30:52 | 2025-03-26T07:26:27 | SangbumChoi | [
"enhancement"
] | ### Feature request
How about adding an argument for the case where the user already knows the pattern?
https://github.com/huggingface/datasets/blob/a256b85cbc67aa3f0e75d32d6586afc507cf535b/src/datasets/data_files.py#L252
### Motivation
When using load_dataset, people might have 10M images as local files.
However, due to sear... | false |
2,937,236,323 | https://api.github.com/repos/huggingface/datasets/issues/7470 | https://github.com/huggingface/datasets/issues/7470 | 7,470 | Is it possible to shard a single-sharded IterableDataset? | closed | 5 | 2025-03-21T04:33:37 | 2025-05-09T22:51:46 | 2025-03-26T06:49:28 | jonathanasdf | [] | I thought https://github.com/huggingface/datasets/pull/7252 might be applicable but looking at it maybe not.
Say we have a process, eg. a database query, that can return data in slightly different order each time. So, the initial query needs to be run by a single thread (not to mention running multiple times incurs mo... | false |
2,936,606,080 | https://api.github.com/repos/huggingface/datasets/issues/7469 | https://github.com/huggingface/datasets/issues/7469 | 7,469 | Custom split name with the web interface | closed | 0 | 2025-03-20T20:45:59 | 2025-03-21T07:20:37 | 2025-03-21T07:20:37 | vince62s | [] | ### Describe the bug
According to the doc here: https://huggingface.co/docs/hub/datasets-file-names-and-splits#custom-split-name
it should infer the split name from the subdirectory of data or the beginning of the names of the files in data.
When doing this manually through web upload it does not work. It uses "train" as a unique spl... | false |
2,934,094,103 | https://api.github.com/repos/huggingface/datasets/issues/7468 | https://github.com/huggingface/datasets/issues/7468 | 7,468 | function `load_dataset` can't solve folder path with regex characters like "[]" | open | 1 | 2025-03-20T05:21:59 | 2025-03-25T10:18:12 | null | Hpeox | [] | ### Describe the bug
When using the `load_dataset` function with a folder path containing regex special characters (such as "[]"), the issue occurs due to how the path is handled in the `resolve_pattern` function. This function passes the unprocessed path directly to `AbstractFileSystem.glob`, which supports regular e... | false |
2,930,067,107 | https://api.github.com/repos/huggingface/datasets/issues/7467 | https://github.com/huggingface/datasets/issues/7467 | 7,467 | load_dataset with streaming hangs on parquet datasets | open | 1 | 2025-03-18T23:33:54 | 2025-03-25T10:28:04 | null | The0nix | [] | ### Describe the bug
When I try to load a dataset with parquet files (e.g. "bigcode/the-stack"), the dataset loads, but the Python interpreter can't exit and hangs
### Steps to reproduce the bug
```python3
import datasets
print('Start')
dataset = datasets.load_dataset("bigcode/the-stack", data_dir="data/yaml", streaming... | false |
2,928,661,327 | https://api.github.com/repos/huggingface/datasets/issues/7466 | https://github.com/huggingface/datasets/pull/7466 | 7,466 | Fix local pdf loading | closed | 1 | 2025-03-18T14:09:06 | 2025-03-18T14:11:52 | 2025-03-18T14:09:21 | lhoestq | [] | fix this error when accessing a local pdf
```
File ~/.pyenv/versions/3.12.2/envs/hf-datasets/lib/python3.12/site-packages/pdfminer/psparser.py:220, in PSBaseParser.seek(self, pos)
218 """Seeks the parser to the given position."""
219 log.debug("seek: %r", pos)
--> 220 self.fp.seek(pos)
221 # reset t... | true |
2,926,478,838 | https://api.github.com/repos/huggingface/datasets/issues/7464 | https://github.com/huggingface/datasets/pull/7464 | 7,464 | Minor fix for metadata files in extension counter | closed | 1 | 2025-03-17T21:57:11 | 2025-03-18T15:21:43 | 2025-03-18T15:21:41 | lhoestq | [] | null | true |
2,925,924,452 | https://api.github.com/repos/huggingface/datasets/issues/7463 | https://github.com/huggingface/datasets/pull/7463 | 7,463 | Adds EXR format to store depth images in float32 | open | 3 | 2025-03-17T17:42:40 | 2025-04-02T12:33:39 | null | ducha-aiki | [] | This PR adds the EXR feature to store depth images (or can be normals, etc) in float32.
It relies on [openexr_numpy](https://github.com/martinResearch/openexr_numpy/tree/main) to manipulate EXR images.
| true |
2,925,612,945 | https://api.github.com/repos/huggingface/datasets/issues/7462 | https://github.com/huggingface/datasets/pull/7462 | 7,462 | set dev version | closed | 1 | 2025-03-17T16:00:53 | 2025-03-17T16:03:31 | 2025-03-17T16:01:08 | lhoestq | [] | null | true |
2,925,608,123 | https://api.github.com/repos/huggingface/datasets/issues/7461 | https://github.com/huggingface/datasets/issues/7461 | 7,461 | List of images behave differently on IterableDataset and Dataset | closed | 2 | 2025-03-17T15:59:23 | 2025-03-18T08:57:17 | 2025-03-18T08:57:16 | FredrikNoren | [] | ### Describe the bug
This code:
```python
def train_iterable_gen():
images = np.array(load_image("https://cdn.britannica.com/61/93061-050-99147DCE/Statue-of-Liberty-Island-New-York-Bay.jpg").resize((128, 128)))
yield {
"images": np.expand_dims(images, axis=0),
"messages": [
... | false |
2,925,605,865 | https://api.github.com/repos/huggingface/datasets/issues/7460 | https://github.com/huggingface/datasets/pull/7460 | 7,460 | release: 3.4.1 | closed | 1 | 2025-03-17T15:58:31 | 2025-03-17T16:01:14 | 2025-03-17T15:59:19 | lhoestq | [] | null | true |
2,925,491,766 | https://api.github.com/repos/huggingface/datasets/issues/7459 | https://github.com/huggingface/datasets/pull/7459 | 7,459 | Fix data_files filtering | closed | 1 | 2025-03-17T15:20:21 | 2025-03-17T15:25:56 | 2025-03-17T15:25:54 | lhoestq | [] | close https://github.com/huggingface/datasets/issues/7458 | true |
2,925,403,528 | https://api.github.com/repos/huggingface/datasets/issues/7458 | https://github.com/huggingface/datasets/issues/7458 | 7,458 | Loading the `laion/filtered-wit` dataset in streaming mode fails on v3.4.0 | closed | 1 | 2025-03-17T14:54:02 | 2025-03-17T16:02:04 | 2025-03-17T15:25:55 | nikita-savelyevv | [] | ### Describe the bug
Loading https://huggingface.co/datasets/laion/filtered-wit in streaming mode fails after update to `datasets==3.4.0`. The dataset loads fine on v3.3.2.
### Steps to reproduce the bug
Steps to reproduce:
```
pip install datasets==3.4.0
python -c "from datasets import load_dataset; load_dataset('l... | false |
2,924,886,467 | https://api.github.com/repos/huggingface/datasets/issues/7457 | https://github.com/huggingface/datasets/issues/7457 | 7,457 | Document the HF_DATASETS_CACHE env variable | closed | 4 | 2025-03-17T12:24:50 | 2025-05-06T15:54:39 | 2025-05-06T15:54:39 | LSerranoPEReN | [
"enhancement"
] | ### Feature request
Hello,
I have a use case where my team is sharing models and datasets in a shared directory to avoid duplication.
I noticed that the [cache documentation for datasets](https://huggingface.co/docs/datasets/main/en/cache) only mentions the `HF_HOME` environment variable but never the `HF_DATASETS_CACHE`... | false |
2,922,676,278 | https://api.github.com/repos/huggingface/datasets/issues/7456 | https://github.com/huggingface/datasets/issues/7456 | 7,456 | .add_faiss_index and .add_elasticsearch_index returns ImportError at Google Colab | open | 6 | 2025-03-16T00:51:49 | 2025-03-17T15:57:19 | null | MapleBloom | [] | ### Describe the bug
At Google Colab
```!pip install faiss-cpu``` works
```import faiss``` no error
but
```embeddings_dataset.add_faiss_index(column='embeddings')```
returns
```
[/usr/local/lib/python3.11/dist-packages/datasets/search.py](https://localhost:8080/#) in init(self, device, string_factory, metric_type, cus... | false |
2,921,933,250 | https://api.github.com/repos/huggingface/datasets/issues/7455 | https://github.com/huggingface/datasets/issues/7455 | 7,455 | Problems with local dataset after upgrade from 3.3.2 to 3.4.0 | open | 1 | 2025-03-15T09:22:50 | 2025-03-17T16:20:43 | null | andjoer | [] | ### Describe the bug
After yesterday's upgrade from datasets 3.3.2 to 3.4.0, I was no longer able to open a locally saved dataset that was created using an older datasets version.
The traceback is
```
Traceback (most recent call last):
File "/usr/local/lib/python3.10/dist-packages/datasets/packaged_modules/arrow/... | false |
2,920,760,793 | https://api.github.com/repos/huggingface/datasets/issues/7454 | https://github.com/huggingface/datasets/pull/7454 | 7,454 | set dev version | closed | 1 | 2025-03-14T16:48:19 | 2025-03-14T16:50:31 | 2025-03-14T16:48:28 | lhoestq | [] | null | true |
2,920,719,503 | https://api.github.com/repos/huggingface/datasets/issues/7453 | https://github.com/huggingface/datasets/pull/7453 | 7,453 | release: 3.4.0 | closed | 1 | 2025-03-14T16:30:45 | 2025-03-14T16:38:10 | 2025-03-14T16:38:08 | lhoestq | [] | null | true |
2,920,354,783 | https://api.github.com/repos/huggingface/datasets/issues/7452 | https://github.com/huggingface/datasets/pull/7452 | 7,452 | minor docs changes | closed | 1 | 2025-03-14T14:14:04 | 2025-03-14T14:16:38 | 2025-03-14T14:14:20 | lhoestq | [] | before the release | true |
2,919,835,663 | https://api.github.com/repos/huggingface/datasets/issues/7451 | https://github.com/huggingface/datasets/pull/7451 | 7,451 | Fix resuming after `ds.set_epoch(new_epoch)` | closed | 1 | 2025-03-14T10:31:25 | 2025-03-14T10:50:11 | 2025-03-14T10:50:09 | lhoestq | [] | close https://github.com/huggingface/datasets/issues/7447 | true |
2,916,681,414 | https://api.github.com/repos/huggingface/datasets/issues/7450 | https://github.com/huggingface/datasets/pull/7450 | 7,450 | Add IterableDataset.decode with multithreading | closed | 1 | 2025-03-13T10:41:35 | 2025-03-14T10:35:37 | 2025-03-14T10:35:35 | lhoestq | [] | Useful for dataset streaming for multimodal datasets, and especially for lerobot.
It speeds up streaming up to 20 times.
When decoding is enabled (default), media types are decoded:
* audio -> dict of "array" and "sampling_rate" and "path"
* image -> PIL.Image
* video -> torchvision.io.VideoReader
You can e... | true |
2,916,235,092 | https://api.github.com/repos/huggingface/datasets/issues/7449 | https://github.com/huggingface/datasets/issues/7449 | 7,449 | Cannot load data with different schemas from different parquet files | closed | 2 | 2025-03-13T08:14:49 | 2025-03-17T07:27:48 | 2025-03-17T07:27:46 | li-plus | [] | ### Describe the bug
Cannot load samples with optional fields from different files. The schema cannot be correctly derived.
### Steps to reproduce the bug
When I place two samples with an optional field `some_extra_field` within a single parquet file, it can be loaded via `load_dataset`.
```python
import pandas as ... | false |
2,916,025,762 | https://api.github.com/repos/huggingface/datasets/issues/7448 | https://github.com/huggingface/datasets/issues/7448 | 7,448 | `datasets.disable_caching` doesn't work | open | 2 | 2025-03-13T06:40:12 | 2025-03-22T04:37:07 | null | UCC-team | [] | When I use `Dataset.from_generator(my_gen)` to load my dataset, it simply skips my changes to the generator function.
I tried `datasets.disable_caching`, but it doesn't work! | false |
2,915,233,248 | https://api.github.com/repos/huggingface/datasets/issues/7447 | https://github.com/huggingface/datasets/issues/7447 | 7,447 | Epochs shortened after resuming mid-epoch with Iterable dataset+StatefulDataloader(persistent_workers=True) | closed | 6 | 2025-03-12T21:41:05 | 2025-07-09T23:04:57 | 2025-03-14T10:50:10 | dhruvdcoder | [] | ### Describe the bug
When using `torchdata.stateful_dataloader.StatefulDataloader(persistent_workers=True)`, the epochs after resuming only iterate through the examples that were left in the epoch when the training was interrupted. For example, in the script below training is interrupted on step 124 (epoch 1) when 3 batches ... | false |
2,913,050,552 | https://api.github.com/repos/huggingface/datasets/issues/7446 | https://github.com/huggingface/datasets/issues/7446 | 7,446 | pyarrow.lib.ArrowTypeError: Expected dict key of type str or bytes, got 'int' | closed | 2 | 2025-03-12T07:48:37 | 2025-07-04T05:14:45 | 2025-07-04T05:14:45 | rangehow | [] | ### Describe the bug
A dict whose keys are all str, but I get the following error
```python
test_data=[{'input_ids':[1,2,3],'labels':[[Counter({2:1})]]}]
dataset = datasets.Dataset.from_list(test_data)
```
```bash
pyarrow.lib.ArrowTypeError: Expected dict key of type str or bytes, got 'int'
```
### Steps to reproduce the... | false |
2,911,507,923 | https://api.github.com/repos/huggingface/datasets/issues/7445 | https://github.com/huggingface/datasets/pull/7445 | 7,445 | Fix small bugs with async map | closed | 1 | 2025-03-11T18:30:57 | 2025-03-13T10:38:03 | 2025-03-13T10:37:58 | lhoestq | [] | helpful for the next PR to enable parallel image/audio/video decoding and make multimodal datasets go brr (e.g. for lerobot)
- fix with_indices
- fix resuming with save_state_dict() / load_state_dict() - omg that wasn't easy
- remove unnecessary decoding in map() to enable parallelism in FormattedExampleIterable l... | true |
2,911,202,445 | https://api.github.com/repos/huggingface/datasets/issues/7444 | https://github.com/huggingface/datasets/issues/7444 | 7,444 | Excessive warnings when resuming an IterableDataset+buffered shuffle+DDP. | open | 1 | 2025-03-11T16:34:39 | 2025-05-13T09:41:03 | null | dhruvdcoder | [] | ### Describe the bug
I have a large dataset that I sharded into 1024 shards and saved to disk during pre-processing. During training, I load the dataset using load_from_disk(), convert it into an iterable dataset, shuffle it, and split the shards across different DDP nodes using the recommended method.
However, when ... | false |
2,908,585,656 | https://api.github.com/repos/huggingface/datasets/issues/7443 | https://github.com/huggingface/datasets/issues/7443 | 7,443 | index error when num_shards > len(dataset) | open | 1 | 2025-03-10T22:40:59 | 2025-03-10T23:43:08 | null | eminorhan | [] | In `ds.push_to_hub()` and `ds.save_to_disk()`, `num_shards` must be smaller than or equal to the number of rows in the dataset, but currently this is not checked anywhere inside these functions. Attempting to invoke these functions with `num_shards > len(dataset)` should raise an informative `ValueError`.
I frequently... | false |
2,905,543,017 | https://api.github.com/repos/huggingface/datasets/issues/7442 | https://github.com/huggingface/datasets/issues/7442 | 7,442 | Flexible Loader | open | 3 | 2025-03-09T16:55:03 | 2025-03-27T23:58:17 | null | dipta007 | [
"enhancement"
] | ### Feature request
Can we have a utility function that will use `load_from_disk` when given the local path and `load_dataset` if given an HF dataset?
It can be something as simple as this one:
```
def load_hf_dataset(path_or_name):
if os.path.exists(path_or_name):
return load_from_disk(path_or_name)
... | false |
2,904,702,329 | https://api.github.com/repos/huggingface/datasets/issues/7441 | https://github.com/huggingface/datasets/issues/7441 | 7,441 | `drop_last_batch` does not drop the last batch using IterableDataset + interleave_datasets + multi_worker | open | 2 | 2025-03-08T10:28:44 | 2025-03-09T21:27:33 | null | memray | [] | ### Describe the bug
See the script below
`drop_last_batch=True` is defined using map() for each dataset.
The last batch for each dataset is expected to be dropped, id 21-25.
The code behaves as expected when num_workers=0 or 1.
When using num_workers>1, 'a-11', 'b-11', 'a-12', 'b-12' are gone and instead 21 and 22 a... | false |
2,903,740,662 | https://api.github.com/repos/huggingface/datasets/issues/7440 | https://github.com/huggingface/datasets/issues/7440 | 7,440 | IterableDataset raises FileNotFoundError instead of retrying | open | 7 | 2025-03-07T19:14:18 | 2025-07-22T08:15:44 | null | bauwenst | [] | ### Describe the bug
In https://github.com/huggingface/datasets/issues/6843 it was noted that the streaming feature of `datasets` is highly susceptible to outages and doesn't back off for long (or even *at all*).
I was training a model while streaming SlimPajama and training crashed with a `FileNotFoundError`. I can ... | false |
2,900,143,289 | https://api.github.com/repos/huggingface/datasets/issues/7439 | https://github.com/huggingface/datasets/pull/7439 | 7,439 | Fix multi gpu process example | closed | 1 | 2025-03-06T11:29:19 | 2025-03-06T17:07:28 | 2025-03-06T17:06:38 | SwayStar123 | [] | `.to()` is not an in-place function.
But I am not sure about this code anyway; I think it is modifying the global variable `model` every time the function is called, which is on every batch? So it is juggling the same model on every GPU, right? Isn't that very inefficient? | true |
2,899,209,484 | https://api.github.com/repos/huggingface/datasets/issues/7438 | https://github.com/huggingface/datasets/pull/7438 | 7,438 | Allow dataset row indexing with np.int types (#7423) | closed | 4 | 2025-03-06T03:10:43 | 2025-07-23T17:56:22 | 2025-07-23T16:44:42 | DavidRConnell | [] | @lhoestq
Proposed fix for #7423. Added a couple simple tests as requested. I had some test failures related to Java and pyspark even when installing with dev but these don't seem to be related to the changes here and fail for me even on clean main.
The typeerror raised when using the wrong type is: "Wrong key type... | true |
2,899,104,679 | https://api.github.com/repos/huggingface/datasets/issues/7437 | https://github.com/huggingface/datasets/pull/7437 | 7,437 | Use pyupgrade --py39-plus for remaining files | open | 1 | 2025-03-06T02:12:25 | 2025-07-30T08:34:34 | null | cyyever | [] | This work follows #7428. And "requires-python" is set in pyproject.toml | true |
2,898,385,725 | https://api.github.com/repos/huggingface/datasets/issues/7436 | https://github.com/huggingface/datasets/pull/7436 | 7,436 | chore: fix typos | closed | 0 | 2025-03-05T20:17:54 | 2025-04-28T14:00:09 | 2025-04-28T13:51:26 | afuetterer | [] | null | true |
2,895,536,956 | https://api.github.com/repos/huggingface/datasets/issues/7435 | https://github.com/huggingface/datasets/pull/7435 | 7,435 | Refactor `string_to_dict` to return `None` if there is no match instead of raising `ValueError` | closed | 8 | 2025-03-04T22:01:20 | 2025-03-12T16:52:00 | 2025-03-12T16:52:00 | ringohoffman | [] | Making this change, as encouraged here:
* https://github.com/huggingface/datasets/pull/7434#discussion_r1979933054
instead of having the pattern of using `try`-`except` to handle when there is no match, we can instead check if the return value is `None`; we can also assert that the return value should not be `Non... | true |
2,893,075,908 | https://api.github.com/repos/huggingface/datasets/issues/7434 | https://github.com/huggingface/datasets/pull/7434 | 7,434 | Refactor `Dataset.map` to reuse cache files mapped with different `num_proc` | closed | 10 | 2025-03-04T06:12:37 | 2025-05-14T10:45:10 | 2025-05-12T15:14:08 | ringohoffman | [] | Fixes #7433
This refactor unifies `num_proc is None or num_proc == 1` and `num_proc > 1`; instead of handling them completely separately where one uses a list of kwargs and shards and the other just uses a single set of kwargs and `self`, by wrapping the `num_proc == 1` case in a list and making the difference just ... | true |
2,890,240,400 | https://api.github.com/repos/huggingface/datasets/issues/7433 | https://github.com/huggingface/datasets/issues/7433 | 7,433 | `Dataset.map` ignores existing caches and remaps when ran with different `num_proc` | closed | 2 | 2025-03-03T05:51:26 | 2025-05-12T15:14:09 | 2025-05-12T15:14:09 | ringohoffman | [] | ### Describe the bug
If you `map` a dataset and save it to a specific `cache_file_name` with a specific `num_proc`, and then call map again with that same existing `cache_file_name` but a different `num_proc`, the dataset will be re-mapped.
### Steps to reproduce the bug
1. Download a dataset
```python
import datase... | false |
2,887,717,289 | https://api.github.com/repos/huggingface/datasets/issues/7432 | https://github.com/huggingface/datasets/pull/7432 | 7,432 | Fix type annotation | closed | 1 | 2025-02-28T17:28:20 | 2025-03-04T15:53:03 | 2025-03-04T15:53:03 | NeilGirdhar | [] | null | true |
2,887,244,074 | https://api.github.com/repos/huggingface/datasets/issues/7431 | https://github.com/huggingface/datasets/issues/7431 | 7,431 | Issues with large Datasets | open | 4 | 2025-02-28T14:05:22 | 2025-03-04T15:02:26 | null | nikitabelooussovbtis | [] | ### Describe the bug
If the COCO annotation file is too large, the dataset will not be able to load it. I am not entirely sure where the issue is, but I am guessing it is due to the code trying to load it all as one line into a dataframe. This was for object detection.
My current workaround is the following code but would ... | false |
2,886,922,573 | https://api.github.com/repos/huggingface/datasets/issues/7430 | https://github.com/huggingface/datasets/issues/7430 | 7,430 | Error in code "Time to slice and dice" from course "NLP Course" | closed | 2 | 2025-02-28T11:36:10 | 2025-03-05T11:32:47 | 2025-03-03T17:52:15 | Yurkmez | [] | ### Describe the bug
When we execute code
```
frequencies = (
train_df["condition"]
.value_counts()
.to_frame()
.reset_index()
.rename(columns={"index": "condition", "condition": "frequency"})
)
frequencies.head()
```
answer should be like this
condition | frequency
birth control | 27655
dep... | false |
2,886,806,513 | https://api.github.com/repos/huggingface/datasets/issues/7429 | https://github.com/huggingface/datasets/pull/7429 | 7,429 | Improved type annotation | open | 3 | 2025-02-28T10:39:10 | 2025-05-15T12:27:17 | null | saiden89 | [] | I've refined several type annotations throughout the codebase to align with current best practices and enhance overall clarity. Given the complexity of the code, there may still be areas that need further attention. I welcome any feedback or suggestions to make these improvements even better.
- Fixes #7202 | true |
2,886,111,651 | https://api.github.com/repos/huggingface/datasets/issues/7428 | https://github.com/huggingface/datasets/pull/7428 | 7,428 | Use pyupgrade --py39-plus | closed | 3 | 2025-02-28T03:39:44 | 2025-03-22T00:51:20 | 2025-03-05T15:04:16 | cyyever | [] | null | true |
2,886,032,571 | https://api.github.com/repos/huggingface/datasets/issues/7427 | https://github.com/huggingface/datasets/issues/7427 | 7,427 | Error splitting the input into NAL units. | open | 2 | 2025-02-28T02:30:15 | 2025-03-04T01:40:28 | null | MengHao666 | [] | ### Describe the bug
I am trying to finetune qwen2.5-vl on 16 * 80G GPUs, and I use `LLaMA-Factory` with `preprocessing_num_workers=16`. However, I got the following error and the program seems to have crashed. It seems that the error comes from the `datasets` library
The error logging is like following:
```text
Convertin... | false |
2,883,754,507 | https://api.github.com/repos/huggingface/datasets/issues/7426 | https://github.com/huggingface/datasets/pull/7426 | 7,426 | fix: None default with bool type on load creates typing error | closed | 0 | 2025-02-27T08:11:36 | 2025-03-04T15:53:40 | 2025-03-04T15:53:40 | stephantul | [] | Hello!
Pyright flags any use of `load_dataset` as an error, because the default for `trust_remote_code` is `None`, but the function is typed as `bool`, not `Optional[bool]`. I changed the type and docstrings to reflect this, but no other code was touched.
| true |
2,883,684,686 | https://api.github.com/repos/huggingface/datasets/issues/7425 | https://github.com/huggingface/datasets/issues/7425 | 7,425 | load_dataset("livecodebench/code_generation_lite", version_tag="release_v2") TypeError: 'NoneType' object is not callable | open | 10 | 2025-02-27T07:36:02 | 2025-03-27T05:05:33 | null | dshwei | [] | ### Describe the bug
from datasets import load_dataset
lcb_codegen = load_dataset("livecodebench/code_generation_lite", version_tag="release_v2")
or
configs = get_dataset_config_names("livecodebench/code_generation_lite", trust_remote_code=True)
both error:
Traceback (most recent call last):
File "", line 1, in
File... | false |
2,882,663,621 | https://api.github.com/repos/huggingface/datasets/issues/7424 | https://github.com/huggingface/datasets/pull/7424 | 7,424 | Faster folder based builder + parquet support + allow repeated media + use torchvideo | closed | 1 | 2025-02-26T19:55:18 | 2025-03-05T18:51:00 | 2025-03-05T17:41:23 | lhoestq | [] | This will be useful for LeRobotDataset (robotics datasets for [lerobot](https://github.com/huggingface/lerobot) based on videos)
Impacted builders:
- ImageFolder
- AudioFolder
- VideoFolder
Improvements:
- faster to stream (got a 5x speed up on an image dataset)
- improved RAM usage
- support for metadata.p... | true |
2,879,271,409 | https://api.github.com/repos/huggingface/datasets/issues/7423 | https://github.com/huggingface/datasets/issues/7423 | 7,423 | Row indexing a dataset with numpy integers | closed | 1 | 2025-02-25T18:44:45 | 2025-07-28T02:23:17 | 2025-07-28T02:23:17 | DavidRConnell | [
"enhancement"
] | ### Feature request
Allow indexing datasets with a scalar numpy integer type.
### Motivation
Indexing a dataset with a scalar numpy.int* object raises a TypeError. This is due to the test in `datasets/formatting/formatting.py:key_to_query_type`
``` python
def key_to_query_type(key: Union[int, slice, range, str, Ite... | false |
2,878,369,052 | https://api.github.com/repos/huggingface/datasets/issues/7421 | https://github.com/huggingface/datasets/issues/7421 | 7,421 | DVC integration broken | open | 1 | 2025-02-25T13:14:31 | 2025-03-03T17:42:02 | null | maxstrobel | [] | ### Describe the bug
The DVC integration seems to be broken.
Followed this guide: https://dvc.org/doc/user-guide/integrations/huggingface
### Steps to reproduce the bug
#### Script to reproduce
~~~python
from datasets import load_dataset
dataset = load_dataset(
"csv",
data_files="dvc://workshop/satellite-d... | false |
2,876,281,928 | https://api.github.com/repos/huggingface/datasets/issues/7420 | https://github.com/huggingface/datasets/issues/7420 | 7,420 | better correspondence between cached and saved datasets created using from_generator | open | 0 | 2025-02-24T22:14:37 | 2025-02-26T03:10:22 | null | vttrifonov | [
"enhancement"
] | ### Feature request
At the moment `.from_generator` can only create a dataset that lives in the cache. The cached dataset cannot be loaded with `load_from_disk` because the cache folder is missing `state.json`. So the only way to convert this cached dataset to a regular is to use `save_to_disk` which needs to create a... | false |
2,875,635,320 | https://api.github.com/repos/huggingface/datasets/issues/7419 | https://github.com/huggingface/datasets/issues/7419 | 7,419 | Import order crashes script execution | open | 0 | 2025-02-24T17:03:43 | 2025-02-24T17:03:43 | null | DamienMatias | [] | ### Describe the bug
Hello,
I'm trying to convert an HF dataset into a TFRecord so I'm importing `tensorflow` and `datasets` to do so.
Depending on the order in which I import those libraries, my code hangs forever and is unkillable (CTRL+C doesn't work; I need to kill my shell entirely).
Thank you for your help
🙏
... | false |
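The rows above are a preview of a GitHub-issues dataset built with the `datasets` library itself. Below is a minimal sketch of how such a table could be loaded and queried, assuming the data is published on the Hub; the repository id `user/datasets-github-issues` is a placeholder, not the actual location of this data.

```python
from datasets import load_dataset

# Placeholder repo id -- substitute the actual dataset repository.
ds = load_dataset("user/datasets-github-issues", split="train")

# Keep only real issues (drop pull requests) that are still open,
# using the `is_pull_request` and `state` columns shown in the preview.
open_issues = ds.filter(lambda row: not row["is_pull_request"] and row["state"] == "open")

# Show the five most-commented open issues.
for row in sorted(open_issues, key=lambda r: r["comments"], reverse=True)[:5]:
    print(row["number"], row["comments"], row["title"])
```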