| id (int64) | url (string) | html_url (string) | number (int64) | title (string) | state (string) | comments (int64) | created_at (timestamp[s]) | updated_at (timestamp[s]) | closed_at (timestamp[s]) | user_login (string) | labels (list) | body (string) | is_pull_request (bool) |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
3,156,136,624 | https://api.github.com/repos/huggingface/datasets/issues/7624 | https://github.com/huggingface/datasets/issues/7624 | 7,624 | #Dataset Make "image" column appear first in dataset preview UI | closed | 2 | 2025-06-18T09:25:19 | 2025-06-20T07:46:43 | 2025-06-20T07:46:43 | jcerveto | [] | Hi!<br>#Dataset<br>I’m currently uploading a dataset that includes an `"image"` column (PNG files), along with some metadata columns. The dataset is loaded from a .jsonl file. My goal is to have the "image" column appear as the first column in the dataset card preview UI on the :hugs: Hub.<br>However, at the moment, the `"im... | false |
3,154,519,684 | https://api.github.com/repos/huggingface/datasets/issues/7623 | https://github.com/huggingface/datasets/pull/7623 | 7,623 | fix: raise error in FolderBasedBuilder when data_dir and data_files are missing | closed | 2 | 2025-06-17T19:16:34 | 2025-06-18T14:18:41 | 2025-06-18T14:18:41 | ArjunJagdale | [] | ### Related Issues/PRs<br>Fixes #6152<br>---<br>### What changes are proposed in this pull request?<br>This PR adds a dedicated validation check in the `_info()` method of the `FolderBasedBuilder` class to ensure that users provide either `data_dir` or `data_files` when loading folder-based datasets (such as `audiofold... | true |
3,154,398,557 | https://api.github.com/repos/huggingface/datasets/issues/7622 | https://github.com/huggingface/datasets/pull/7622 | 7,622 | Guard against duplicate builder_kwargs/config_kwargs in load_dataset_… | closed | 1 | 2025-06-17T18:28:35 | 2025-07-23T14:06:20 | 2025-07-23T14:06:20 | Shohail-Ismail | [] | …builder (#4910 )<br>### What does this PR do?<br>Fixes edge case in `load_dataset_builder` by raising a `TypeError` if the same key exists in both `builder_kwargs` and `config_kwargs`.<br>### Implementation details<br>- Added a guard clause in `load_dataset_builder` to detect duplicate keys between `builder_kwargs` an... | true |
3,153,780,963 | https://api.github.com/repos/huggingface/datasets/issues/7621 | https://github.com/huggingface/datasets/pull/7621 | 7,621 | minor docs data aug | closed | 1 | 2025-06-17T14:46:57 | 2025-06-17T14:50:28 | 2025-06-17T14:47:11 | lhoestq | [] | null | true |
3,153,565,183 | https://api.github.com/repos/huggingface/datasets/issues/7620 | https://github.com/huggingface/datasets/pull/7620 | 7,620 | Fixes in docs | closed | 1 | 2025-06-17T13:41:54 | 2025-06-17T13:58:26 | 2025-06-17T13:58:24 | lhoestq | [] | before release 4.0<br>(I also did minor improvements to `features` to not show their `id=None` in their `__repr__()`) | true |
3,153,058,517 | https://api.github.com/repos/huggingface/datasets/issues/7619 | https://github.com/huggingface/datasets/issues/7619 | 7,619 | `from_list` fails while `from_generator` works for large datasets | open | 4 | 2025-06-17T10:58:55 | 2025-06-29T16:34:44 | null | abdulfatir | [] | ### Describe the bug<br>I am constructing a large time series dataset and observed that first constructing a list of entries and then using `Dataset.from_list` led to a crash as the number of items became large. However, this is not a problem when using `Dataset.from_generator`.<br>### Steps to reproduce the bug<br>#### Snip... | false |
3,148,912,897 | https://api.github.com/repos/huggingface/datasets/issues/7618 | https://github.com/huggingface/datasets/pull/7618 | 7,618 | fix: raise error when folder-based datasets are loaded without data_dir or data_files | open | 1 | 2025-06-16T07:43:59 | 2025-06-16T12:13:26 | null | ArjunJagdale | [] | ### Related Issues/PRs<br><!-- Uncomment 'Resolve' if this PR can close the linked items. --><br><!-- Resolve --> #6152<br>---<br>### What changes are proposed in this pull request?<br>This PR adds an early validation step for folder-based datasets (like `audiofolder`) to prevent silent fallback behavior.<br>**Before t... | true |
3,148,102,085 | https://api.github.com/repos/huggingface/datasets/issues/7617 | https://github.com/huggingface/datasets/issues/7617 | 7,617 | Unwanted column padding in nested lists of dicts | closed | 1 | 2025-06-15T22:06:17 | 2025-06-16T13:43:31 | 2025-06-16T13:43:31 | qgallouedec | [] | ```python<br>from datasets import Dataset<br>dataset = Dataset.from_dict({<br>"messages": [<br>[<br>{"a": "...",},<br>{"b": "...",},<br>],<br>]<br>})<br>print(dataset[0])<br>```<br>What I get:<br>```<br>{'messages': [{'a': '...', 'b': None}, {'a': None, 'b': '...'}]}<br>```<br>What I want:<br>```<br>{'messages': [{'a': '... | false |
3,144,506,665 | https://api.github.com/repos/huggingface/datasets/issues/7616 | https://github.com/huggingface/datasets/pull/7616 | 7,616 | Torchcodec decoding | closed | 5 | 2025-06-13T19:06:07 | 2025-06-19T18:25:49 | 2025-06-19T18:25:49 | TyTodd | [] | Closes #7607<br>## New signatures<br>### Audio<br>```python<br>Audio(sampling_rate: Optional[int] = None, mono: bool = True, decode: bool = True, stream_index: Optional[int] = None)<br>Audio.encode_example(self, value: Union[str, bytes, bytearray, dict, "AudioDecoder"]) -> dict<br>Audio.decode_example(self, value: dict, token_... | true |
3,143,443,498 | https://api.github.com/repos/huggingface/datasets/issues/7615 | https://github.com/huggingface/datasets/pull/7615 | 7,615 | remove unused code | closed | 1 | 2025-06-13T12:37:30 | 2025-06-13T12:39:59 | 2025-06-13T12:37:40 | lhoestq | [] | null | true |
3,143,381,638 | https://api.github.com/repos/huggingface/datasets/issues/7614 | https://github.com/huggingface/datasets/pull/7614 | 7,614 | Lazy column | closed | 1 | 2025-06-13T12:12:57 | 2025-06-17T13:08:51 | 2025-06-17T13:08:49 | lhoestq | [] | Same as https://github.com/huggingface/datasets/pull/7564 but for `Dataset`, cc @TopCoder2K FYI<br>e.g. `ds[col]` now returns a lazy Column instead of a list<br>This way calling `ds[col][idx]` only loads the required data in memory<br>(bonus: also supports subfields access with `ds[col][subcol][idx]`)<br>the breaking c... | true |
3,142,819,991 | https://api.github.com/repos/huggingface/datasets/issues/7613 | https://github.com/huggingface/datasets/pull/7613 | 7,613 | fix parallel push_to_hub in dataset_dict | closed | 1 | 2025-06-13T09:02:24 | 2025-06-13T12:30:23 | 2025-06-13T12:30:22 | lhoestq | [] | null | true |
3,141,905,049 | https://api.github.com/repos/huggingface/datasets/issues/7612 | https://github.com/huggingface/datasets/issues/7612 | 7,612 | Provide an option of robust dataset iterator with error handling | open | 2 | 2025-06-13T00:40:48 | 2025-06-24T16:52:30 | null | wwwjn | ["enhancement"] | ### Feature request<br>Adding an option to skip corrupted data samples. Currently the datasets behavior is throwing errors if the data sample if corrupted and let user aware and handle the data corruption. When I tried to try-catch the error at user level, the iterator will raise StopIteration when I called next() again.... | false |
3,141,383,940 | https://api.github.com/repos/huggingface/datasets/issues/7611 | https://github.com/huggingface/datasets/issues/7611 | 7,611 | Code example for dataset.add_column() does not reflect correct way to use function | closed | 2 | 2025-06-12T19:42:29 | 2025-07-17T13:14:18 | 2025-07-17T13:14:18 | shaily99 | [] | https://github.com/huggingface/datasets/blame/38d4d0e11e22fdbc4acf373d2421d25abeb43439/src/datasets/arrow_dataset.py#L5925C10-L5925C10<br>The example seems to suggest that dataset.add_column() can add column inplace, however, this is wrong -- it cannot. It returns a new dataset with the column added to it. | false |
3,141,281,560 | https://api.github.com/repos/huggingface/datasets/issues/7610 | https://github.com/huggingface/datasets/issues/7610 | 7,610 | i cant confirm email | open | 2 | 2025-06-12T18:58:49 | 2025-06-27T14:36:47 | null | lykamspam | [] | ### Describe the bug<br>This is dificult, I cant confirm email because I'm not get any email!<br>I cant post forum because I cant confirm email!<br>I can send help desk because... no exist on web page.<br>paragraph 44<br>### Steps to reproduce the bug<br>rthjrtrt<br>### Expected behavior<br>ewtgfwetgf<br>### Environment info<br>sdgfswdegfwe | false |
3,140,373,128 | https://api.github.com/repos/huggingface/datasets/issues/7609 | https://github.com/huggingface/datasets/pull/7609 | 7,609 | Update `_dill.py` to use `co_linetable` for Python 3.10+ in place of `co_lnotab` | closed | 4 | 2025-06-12T13:47:01 | 2025-06-16T12:14:10 | 2025-06-16T12:14:08 | qgallouedec | [] | Not 100% about this one, but it seems to be recommended.<br>```<br>/fsx/qgallouedec/miniconda3/envs/trl/lib/python3.12/site-packages/datasets/utils/_dill.py:385: DeprecationWarning: co_lnotab is deprecated, use co_lines instead.<br>```<br>Tests pass locally. And the warning is gone with this change.<br>https://peps.python.or... | true |
3,137,564,259 | https://api.github.com/repos/huggingface/datasets/issues/7608 | https://github.com/huggingface/datasets/pull/7608 | 7,608 | Tests typing and fixes for push_to_hub | closed | 1 | 2025-06-11T17:13:52 | 2025-06-12T21:15:23 | 2025-06-12T21:15:21 | lhoestq | [] | todo:<br>- [x] fix TestPushToHub.test_push_dataset_dict_to_hub_iterable_num_proc | true |
3,135,722,560 | https://api.github.com/repos/huggingface/datasets/issues/7607 | https://github.com/huggingface/datasets/issues/7607 | 7,607 | Video and audio decoding with torchcodec | closed | 16 | 2025-06-11T07:02:30 | 2025-06-19T18:25:49 | 2025-06-19T18:25:49 | TyTodd | ["enhancement"] | ### Feature request<br>Pytorch is migrating video processing to torchcodec and it's pretty cool. It would be nice to migrate both the audio and video features to use torchcodec instead of torchaudio/video.<br>### Motivation<br>My use case is I'm working on a multimodal AV model, and what's nice about torchcodec is I can extr... | false |
3,133,848,546 | https://api.github.com/repos/huggingface/datasets/issues/7606 | https://github.com/huggingface/datasets/pull/7606 | 7,606 | Add `num_proc=` to `.push_to_hub()` (Dataset and IterableDataset) | closed | 1 | 2025-06-10T14:35:10 | 2025-06-11T16:47:28 | 2025-06-11T16:47:25 | lhoestq | [] | null | true |
3,131,636,882 | https://api.github.com/repos/huggingface/datasets/issues/7605 | https://github.com/huggingface/datasets/pull/7605 | 7,605 | Make `push_to_hub` atomic (#7600) | closed | 4 | 2025-06-09T22:29:38 | 2025-06-23T19:32:08 | 2025-06-23T19:32:08 | sharvil | [] | null | true |
3,130,837,169 | https://api.github.com/repos/huggingface/datasets/issues/7604 | https://github.com/huggingface/datasets/pull/7604 | 7,604 | Docs and more methods for IterableDataset: push_to_hub, to_parquet... | closed | 1 | 2025-06-09T16:44:40 | 2025-06-10T13:15:23 | 2025-06-10T13:15:21 | lhoestq | [] | to_csv, to_json, to_sql, to_pandas, to_polars, to_dict, to_list | true |
3,130,394,563 | https://api.github.com/repos/huggingface/datasets/issues/7603 | https://github.com/huggingface/datasets/pull/7603 | 7,603 | No TF in win tests | closed | 1 | 2025-06-09T13:56:34 | 2025-06-09T15:33:31 | 2025-06-09T15:33:30 | lhoestq | [] | null | true |
3,128,758,924 | https://api.github.com/repos/huggingface/datasets/issues/7602 | https://github.com/huggingface/datasets/pull/7602 | 7,602 | Enhance error handling and input validation across multiple modules | open | 0 | 2025-06-08T23:01:06 | 2025-06-08T23:01:06 | null | mohiuddin-khan-shiam | [] | This PR improves the robustness and user experience by:<br>1. **Audio Module**:<br>- Added clear error messages when required fields ('path' or 'bytes') are missing in audio encoding<br>2. **DatasetDict**:<br>- Enhanced key access error messages to show available splits when an invalid key is accessed<br>3. **NonMuta... | true |
3,127,296,182 | https://api.github.com/repos/huggingface/datasets/issues/7600 | https://github.com/huggingface/datasets/issues/7600 | 7,600 | `push_to_hub` is not concurrency safe (dataset schema corruption) | closed | 4 | 2025-06-07T17:28:56 | 2025-07-31T10:00:50 | 2025-07-31T10:00:50 | sharvil | [] | ### Describe the bug<br>Concurrent processes modifying and pushing a dataset can overwrite each others' dataset card, leaving the dataset unusable.<br>Consider this scenario:<br>- we have an Arrow dataset<br>- there are `N` configs of the dataset<br>- there are `N` independent processes operating on each of the individual configs (... | false |
3,125,620,119 | https://api.github.com/repos/huggingface/datasets/issues/7599 | https://github.com/huggingface/datasets/issues/7599 | 7,599 | My already working dataset (when uploaded few months ago) now is ignoring metadata.jsonl | closed | 3 | 2025-06-06T18:59:00 | 2025-06-16T15:18:00 | 2025-06-16T15:18:00 | JuanCarlosMartinezSevilla | [] | ### Describe the bug<br>Hi everyone, I uploaded my dataset https://huggingface.co/datasets/PRAIG/SMB a few months ago while I was waiting for a conference acceptance response. Without modifying anything in the dataset repository now the Dataset viewer is not rendering the metadata.jsonl annotations, neither it is being d... | false |
3,125,184,457 | https://api.github.com/repos/huggingface/datasets/issues/7598 | https://github.com/huggingface/datasets/pull/7598 | 7,598 | fix string_to_dict usage for windows | closed | 1 | 2025-06-06T15:54:29 | 2025-06-06T16:12:22 | 2025-06-06T16:12:21 | lhoestq | [] | null | true |
3,123,962,709 | https://api.github.com/repos/huggingface/datasets/issues/7597 | https://github.com/huggingface/datasets/issues/7597 | 7,597 | Download datasets from a private hub in 2025 | closed | 2 | 2025-06-06T07:55:19 | 2025-06-13T13:46:00 | 2025-06-13T13:46:00 | DanielSchuhmacher | ["enhancement"] | ### Feature request<br>In the context of a private hub deployment, customers would like to use load_dataset() to load datasets from their hub, not from the public hub. This doesn't seem to be configurable at the moment and it would be nice to add this feature.<br>The obvious workaround is to clone the repo first and then l... | false |
3,122,595,042 | https://api.github.com/repos/huggingface/datasets/issues/7596 | https://github.com/huggingface/datasets/pull/7596 | 7,596 | Add albumentations to use dataset | closed | 3 | 2025-06-05T20:39:46 | 2025-06-17T18:38:08 | 2025-06-17T14:44:30 | ternaus | [] | 1. Fixed broken link to the list of transforms in torchvison.<br>2. Extended section about video image augmentations with an example from Albumentations. | true |
3,121,689,436 | https://api.github.com/repos/huggingface/datasets/issues/7595 | https://github.com/huggingface/datasets/pull/7595 | 7,595 | Add `IterableDataset.push_to_hub()` | closed | 1 | 2025-06-05T15:29:32 | 2025-06-06T16:12:37 | 2025-06-06T16:12:36 | lhoestq | [] | Basic implementation, which writes one shard per input dataset shard.<br>This is to be improved later.<br>Close https://github.com/huggingface/datasets/issues/5665<br>PS: for image/audio datasets structured as actual image/audio files (not parquet), you can sometimes speed it up with `ds.decode(num_threads=...).push_to_h... | true |
3,120,799,626 | https://api.github.com/repos/huggingface/datasets/issues/7594 | https://github.com/huggingface/datasets/issues/7594 | 7,594 | Add option to ignore keys/columns when loading a dataset from jsonl(or any other data format) | open | 8 | 2025-06-05T11:12:45 | 2025-06-28T09:03:00 | null | avishaiElmakies | ["enhancement"] | ### Feature request<br>Hi, I would like the option to ignore keys/columns when loading a dataset from files (e.g. jsonl).<br>### Motivation<br>I am working on a dataset which is built on jsonl. It seems the dataset is unclean and a column has different types in each row. I can't clean this or remove the column (It is not my ... | false |
3,118,812,368 | https://api.github.com/repos/huggingface/datasets/issues/7593 | https://github.com/huggingface/datasets/pull/7593 | 7,593 | Fix broken link to albumentations | closed | 2 | 2025-06-04T19:00:13 | 2025-06-05T16:37:02 | 2025-06-05T16:36:32 | ternaus | [] | A few months back I rewrote all docs at [https://albumentations.ai/docs](https://albumentations.ai/docs), and some pages changed their links.<br>In this PR fixed link to the most recent doc in Albumentations about bounding boxes and it's format.<br>Fix a few typos in the doc as well. | true |
3,118,203,880 | https://api.github.com/repos/huggingface/datasets/issues/7592 | https://github.com/huggingface/datasets/pull/7592 | 7,592 | Remove scripts altogether | closed | 6 | 2025-06-04T15:14:11 | 2025-08-04T15:17:05 | 2025-06-09T16:45:27 | lhoestq | [] | TODO:<br>- [x] remplace fixtures based on script with no-script fixtures<br>- [x] windaube | true |
3,117,816,388 | https://api.github.com/repos/huggingface/datasets/issues/7591 | https://github.com/huggingface/datasets/issues/7591 | 7,591 | Add num_proc parameter to push_to_hub | open | 3 | 2025-06-04T13:19:15 | 2025-06-27T06:13:54 | null | SwayStar123 | ["enhancement"] | ### Feature request<br>A number of processes parameter to the dataset.push_to_hub method<br>### Motivation<br>Shards are currently uploaded serially which makes it slow for many shards, uploading can be done in parallel and much faster | false |
3,101,654,892 | https://api.github.com/repos/huggingface/datasets/issues/7590 | https://github.com/huggingface/datasets/issues/7590 | 7,590 | `Sequence(Features(...))` causes PyArrow cast error in `load_dataset` despite correct schema. | closed | 6 | 2025-05-29T22:53:36 | 2025-07-19T22:45:08 | 2025-07-19T22:45:08 | AHS-uni | [] | ### Description<br>When loading a dataset with a field declared as a list of structs using `Sequence(Features(...))`, `load_dataset` incorrectly infers the field as a plain `struct<...>` instead of a `list<struct<...>>`. This leads to the following error:<br>```<br>ArrowNotImplementedError: Unsupported cast from list<item: st... | false |
3,101,119,704 | https://api.github.com/repos/huggingface/datasets/issues/7589 | https://github.com/huggingface/datasets/pull/7589 | 7,589 | feat: use content defined chunking | open | 3 | 2025-05-29T18:19:41 | 2025-07-25T11:56:51 | null | kszucs | [] | Use content defined chunking by default when writing parquet files.<br>- [x] set the parameters in `io.parquet.ParquetDatasetReader`<br>- [x] set the parameters in `arrow_writer.ParquetWriter`<br>It requires a new pyarrow pin ">=21.0.0" which is released now. | true |
3,094,012,025 | https://api.github.com/repos/huggingface/datasets/issues/7588 | https://github.com/huggingface/datasets/issues/7588 | 7,588 | ValueError: Invalid pattern: '**' can only be an entire path component [Colab] | closed | 5 | 2025-05-27T13:46:05 | 2025-05-30T13:22:52 | 2025-05-30T01:26:30 | wkambale | [] | ### Describe the bug<br>I have a dataset on HF [here](https://huggingface.co/datasets/kambale/luganda-english-parallel-corpus) that i've previously used to train a translation model [here](https://huggingface.co/kambale/pearl-11m-translate).<br>now i changed a few hyperparameters to increase number of tokens for the model,... | false |
3,091,834,987 | https://api.github.com/repos/huggingface/datasets/issues/7587 | https://github.com/huggingface/datasets/pull/7587 | 7,587 | load_dataset splits typing | closed | 1 | 2025-05-26T18:28:40 | 2025-05-26T18:31:10 | 2025-05-26T18:29:57 | lhoestq | [] | close https://github.com/huggingface/datasets/issues/7583 | true |
3,091,320,431 | https://api.github.com/repos/huggingface/datasets/issues/7586 | https://github.com/huggingface/datasets/issues/7586 | 7,586 | help is appreciated | open | 1 | 2025-05-26T14:00:42 | 2025-05-26T18:21:57 | null | rajasekarnp1 | ["enhancement"] | ### Feature request<br>https://github.com/rajasekarnp1/neural-audio-upscaler/tree/main<br>### Motivation<br>ai model develpment and audio<br>### Your contribution<br>ai model develpment and audio | false |
3,091,227,921 | https://api.github.com/repos/huggingface/datasets/issues/7585 | https://github.com/huggingface/datasets/pull/7585 | 7,585 | Avoid multiple default config names | closed | 1 | 2025-05-26T13:27:59 | 2025-06-05T12:41:54 | 2025-06-05T12:41:52 | albertvillanova | [] | Fix duplicating default config names.<br>Currently, when calling `push_to_hub(set_default=True` with 2 different config names, both are set as default.<br>Moreover, this will generate an error next time we try to push another default config name, raised by `MetadataConfigs.get_default_config_name`:<br>https://github.com... | true |
3,090,255,023 | https://api.github.com/repos/huggingface/datasets/issues/7584 | https://github.com/huggingface/datasets/issues/7584 | 7,584 | Add LMDB format support | open | 1 | 2025-05-26T07:10:13 | 2025-05-26T18:23:37 | null | trotsky1997 | ["enhancement"] | ### Feature request<br>Add LMDB format support for large memory-mapping files<br>### Motivation<br>Add LMDB format support for large memory-mapping files<br>### Your contribution<br>I'm trying to add it | false |
3,088,987,757 | https://api.github.com/repos/huggingface/datasets/issues/7583 | https://github.com/huggingface/datasets/issues/7583 | 7,583 | load_dataset type stubs reject List[str] for split parameter, but runtime supports it | closed | 0 | 2025-05-25T02:33:18 | 2025-05-26T18:29:58 | 2025-05-26T18:29:58 | hierr | [] | ### Describe the bug<br>The [load_dataset](https://huggingface.co/docs/datasets/v3.6.0/en/package_reference/loading_methods#datasets.load_dataset) method accepts a `List[str]` as the split parameter at runtime, however, the current type stubs restrict the split parameter to `Union[str, Split, None]`. This causes type che... | false |
3,083,515,643 | https://api.github.com/repos/huggingface/datasets/issues/7582 | https://github.com/huggingface/datasets/pull/7582 | 7,582 | fix: Add embed_storage in Pdf feature | closed | 1 | 2025-05-22T14:06:29 | 2025-05-22T14:17:38 | 2025-05-22T14:17:36 | AndreaFrancis | [] | Add missing `embed_storage` method in Pdf feature (Same as in Audio and Image) | true |
3,083,080,413 | https://api.github.com/repos/huggingface/datasets/issues/7581 | https://github.com/huggingface/datasets/pull/7581 | 7,581 | Add missing property on `RepeatExamplesIterable` | closed | 0 | 2025-05-22T11:41:07 | 2025-06-05T12:41:30 | 2025-06-05T12:41:29 | SilvanCodes | [] | Fixes #7561 | true |
3,082,993,027 | https://api.github.com/repos/huggingface/datasets/issues/7580 | https://github.com/huggingface/datasets/issues/7580 | 7,580 | Requesting a specific split (eg: test) still downloads all (train, test, val) data when streaming=False. | open | 1 | 2025-05-22T11:08:16 | 2025-05-26T18:40:31 | null | s3pi | [] | ### Describe the bug<br>When using load_dataset() from the datasets library (in load.py), specifying a particular split (e.g., split="train") still results in downloading data for all splits when streaming=False. This happens during the builder_instance.download_and_prepare() call.<br>This behavior leads to unnecessary band... | false |
3,081,849,022 | https://api.github.com/repos/huggingface/datasets/issues/7579 | https://github.com/huggingface/datasets/pull/7579 | 7,579 | Fix typos in PDF and Video documentation | closed | 1 | 2025-05-22T02:27:40 | 2025-05-22T12:53:49 | 2025-05-22T12:53:47 | AndreaFrancis | [] | null | true |
3,080,833,740 | https://api.github.com/repos/huggingface/datasets/issues/7577 | https://github.com/huggingface/datasets/issues/7577 | 7,577 | arrow_schema is not compatible with list | closed | 3 | 2025-05-21T16:37:01 | 2025-05-26T18:49:51 | 2025-05-26T18:32:55 | jonathanshen-upwork | [] | ### Describe the bug<br>```<br>import datasets<br>f = datasets.Features({'x': list[datasets.Value(dtype='int32')]})<br>f.arrow_schema<br>Traceback (most recent call last):<br>File "datasets/features/features.py", line 1826, in arrow_schema<br>return pa.schema(self.type).with_metadata({"huggingface": json.dumps(hf_metadata)})<br>... | false |
3,080,450,538 | https://api.github.com/repos/huggingface/datasets/issues/7576 | https://github.com/huggingface/datasets/pull/7576 | 7,576 | Fix regex library warnings | closed | 1 | 2025-05-21T14:31:58 | 2025-06-05T13:35:16 | 2025-06-05T12:37:55 | emmanuel-ferdman | [] | # PR Summary<br>This small PR resolves the regex library warnings showing starting Python3.11:<br>```python<br>DeprecationWarning: 'count' is passed as positional argument<br>``` | true |
3,080,228,718 | https://api.github.com/repos/huggingface/datasets/issues/7575 | https://github.com/huggingface/datasets/pull/7575 | 7,575 | [MINOR:TYPO] Update save_to_disk docstring | closed | 0 | 2025-05-21T13:22:24 | 2025-06-05T12:39:13 | 2025-06-05T12:39:13 | cakiki | [] | r/hub/filesystem in save_to_disk | true |
3,079,641,072 | https://api.github.com/repos/huggingface/datasets/issues/7574 | https://github.com/huggingface/datasets/issues/7574 | 7,574 | Missing multilingual directions in IWSLT2017 dataset's processing script | open | 2 | 2025-05-21T09:53:17 | 2025-05-26T18:36:38 | null | andy-joy-25 | [] | ### Describe the bug<br>Hi,<br>Upon using `iwslt2017.py` in `IWSLT/iwslt2017` on the Hub for loading the datasets, I am unable to obtain the datasets for the language pairs `de-it`, `de-ro`, `de-nl`, `it-de`, `nl-de`, and `ro-de` using it. These 6 pairs do not show up when using `get_dataset_config_names()` to obtain the ... | false |
3,076,415,382 | https://api.github.com/repos/huggingface/datasets/issues/7573 | https://github.com/huggingface/datasets/issues/7573 | 7,573 | No Samsum dataset | closed | 4 | 2025-05-20T09:54:35 | 2025-07-21T18:34:34 | 2025-06-18T12:52:23 | IgorKasianenko | [] | ### Describe the bug<br>https://huggingface.co/datasets/Samsung/samsum dataset not found error 404<br>Originated from https://github.com/meta-llama/llama-cookbook/issues/948<br>### Steps to reproduce the bug<br>go to website https://huggingface.co/datasets/Samsung/samsum<br>see the error<br>also downloading it with python throws<br>`... | false |
3,074,529,251 | https://api.github.com/repos/huggingface/datasets/issues/7572 | https://github.com/huggingface/datasets/pull/7572 | 7,572 | Fixed typos | closed | 1 | 2025-05-19T17:16:59 | 2025-06-05T12:25:42 | 2025-06-05T12:25:41 | TopCoder2K | [] | More info: [comment](https://github.com/huggingface/datasets/pull/7564#issuecomment-2863391781). | true |
3,074,116,942 | https://api.github.com/repos/huggingface/datasets/issues/7571 | https://github.com/huggingface/datasets/pull/7571 | 7,571 | fix string_to_dict test | closed | 1 | 2025-05-19T14:49:23 | 2025-05-19T14:52:24 | 2025-05-19T14:49:28 | lhoestq | [] | null | true |
3,065,966,529 | https://api.github.com/repos/huggingface/datasets/issues/7570 | https://github.com/huggingface/datasets/issues/7570 | 7,570 | Dataset lib seems to broke after fssec lib update | closed | 3 | 2025-05-15T11:45:06 | 2025-06-13T00:44:27 | 2025-06-13T00:44:27 | sleepingcat4 | [] | ### Describe the bug<br>I am facing an issue since today where HF's dataset is acting weird and in some instances failure to recognise a valid dataset entirely, I think it is happening due to recent change in `fsspec` lib as using this command fixed it for me in one-time: `!pip install -U datasets huggingface_hub fsspec`... | false |
3,061,234,054 | https://api.github.com/repos/huggingface/datasets/issues/7569 | https://github.com/huggingface/datasets/issues/7569 | 7,569 | Dataset creation is broken if nesting a dict inside a dict inside a list | open | 2 | 2025-05-13T21:06:45 | 2025-05-20T19:25:15 | null | TimSchneider42 | [] | ### Describe the bug<br>Hey,<br>I noticed that the creation of datasets with `Dataset.from_generator` is broken if dicts and lists are nested in a certain way and a schema is being passed. See below for details.<br>Best,<br>Tim<br>### Steps to reproduce the bug<br>Runing this code:<br>```python<br>from datasets import Dataset, Features,... | false |
3,060,515,257 | https://api.github.com/repos/huggingface/datasets/issues/7568 | https://github.com/huggingface/datasets/issues/7568 | 7,568 | `IterableDatasetDict.map()` call removes `column_names` (in fact info.features) | open | 6 | 2025-05-13T15:45:42 | 2025-06-30T09:33:47 | null | mombip | [] | When calling `IterableDatasetDict.map()`, each split’s `IterableDataset.map()` is invoked without a `features` argument. While omitting the argument isn’t itself incorrect, the implementation then sets `info.features = features`, which destroys the original `features` content. Since `IterableDataset.column_names` relie... | false |
3,058,308,538 | https://api.github.com/repos/huggingface/datasets/issues/7567 | https://github.com/huggingface/datasets/issues/7567 | 7,567 | interleave_datasets seed with multiple workers | open | 7 | 2025-05-12T22:38:27 | 2025-06-29T06:53:59 | null | jonathanasdf | [] | ### Describe the bug<br>Using interleave_datasets with multiple dataloader workers and a seed set causes the same dataset sampling order across all workers.<br>Should the seed be modulated with the worker id?<br>### Steps to reproduce the bug<br>See above<br>### Expected behavior<br>See above<br>### Environment info<br>- `datasets` ve... | false |
3,055,279,344 | https://api.github.com/repos/huggingface/datasets/issues/7566 | https://github.com/huggingface/datasets/issues/7566 | 7,566 | terminate called without an active exception; Aborted (core dumped) | open | 4 | 2025-05-11T23:05:54 | 2025-06-23T17:56:02 | null | alexey-milovidov | [] | ### Describe the bug<br>I use it as in the tutorial here: https://huggingface.co/docs/datasets/stream, and it ends up with abort.<br>### Steps to reproduce the bug<br>1. `pip install datasets`<br>2.<br>```<br>$ cat main.py<br>#!/usr/bin/env python3<br>from datasets import load_dataset<br>dataset = load_dataset('HuggingFaceFW/fineweb', spl... | false |
3,051,731,207 | https://api.github.com/repos/huggingface/datasets/issues/7565 | https://github.com/huggingface/datasets/pull/7565 | 7,565 | add check if repo exists for dataset uploading | open | 2 | 2025-05-09T10:27:00 | 2025-06-09T14:39:23 | null | Samoed | [] | Currently, I'm reuploading datasets for [`MTEB`](https://github.com/embeddings-benchmark/mteb/). Some of them have many splits (more than 20), and I'm encountering the error:<br>`Too many requests for https://huggingface.co/datasets/repo/create`.<br>It seems that this issue occurs because the dataset tries to recreate it... | true |
3,049,275,226 | https://api.github.com/repos/huggingface/datasets/issues/7564 | https://github.com/huggingface/datasets/pull/7564 | 7,564 | Implementation of iteration over values of a column in an IterableDataset object | closed | 5 | 2025-05-08T14:59:22 | 2025-05-19T12:15:02 | 2025-05-19T12:15:02 | TopCoder2K | [] | Refers to [this issue](https://github.com/huggingface/datasets/issues/7381). | true |
3,046,351,253 | https://api.github.com/repos/huggingface/datasets/issues/7563 | https://github.com/huggingface/datasets/pull/7563 | 7,563 | set dev version | closed | 1 | 2025-05-07T15:18:29 | 2025-05-07T15:21:05 | 2025-05-07T15:18:36 | lhoestq | [] | null | true |
3,046,339,430 | https://api.github.com/repos/huggingface/datasets/issues/7562 | https://github.com/huggingface/datasets/pull/7562 | 7,562 | release: 3.6.0 | closed | 1 | 2025-05-07T15:15:13 | 2025-05-07T15:17:46 | 2025-05-07T15:15:21 | lhoestq | [] | null | true |
3,046,302,653 | https://api.github.com/repos/huggingface/datasets/issues/7561 | https://github.com/huggingface/datasets/issues/7561 | 7,561 | NotImplementedError: <class 'datasets.iterable_dataset.RepeatExamplesIterable'> doesn't implement num_shards yet | closed | 0 | 2025-05-07T15:05:42 | 2025-06-05T12:41:30 | 2025-06-05T12:41:30 | cyanic-selkie | [] | ### Describe the bug<br>When using `.repeat()` on an `IterableDataset`, this error gets thrown. There is [this thread](https://discuss.huggingface.co/t/making-an-infinite-iterabledataset/146192/5) that seems to imply the fix is trivial, but I don't know anything about this codebase, so I'm opening this issue rather than ... | false |
3,046,265,500 | https://api.github.com/repos/huggingface/datasets/issues/7560 | https://github.com/huggingface/datasets/pull/7560 | 7,560 | fix decoding tests | closed | 1 | 2025-05-07T14:56:14 | 2025-05-07T14:59:02 | 2025-05-07T14:56:20 | lhoestq | [] | null | true |
3,046,177,078 | https://api.github.com/repos/huggingface/datasets/issues/7559 | https://github.com/huggingface/datasets/pull/7559 | 7,559 | fix aiohttp import | closed | 1 | 2025-05-07T14:31:32 | 2025-05-07T14:34:34 | 2025-05-07T14:31:38 | lhoestq | [] | null | true |
3,046,066,628 | https://api.github.com/repos/huggingface/datasets/issues/7558 | https://github.com/huggingface/datasets/pull/7558 | 7,558 | fix regression | closed | 1 | 2025-05-07T13:56:03 | 2025-05-07T13:58:52 | 2025-05-07T13:56:18 | lhoestq | [] | reported in https://github.com/huggingface/datasets/pull/7557 (I just reorganized the condition)
wanted to apply this change to the original PR but github didn't let me apply it directly - merging this one instead | true |
3,045,962,076 | https://api.github.com/repos/huggingface/datasets/issues/7557 | https://github.com/huggingface/datasets/pull/7557 | 7,557 | check for empty _formatting | closed | 1 | 2025-05-07T13:22:37 | 2025-05-07T13:57:12 | 2025-05-07T13:57:12 | winglian | [] | Fixes a regression from #7553 breaking shuffling of iterable datasets
<img width="884" alt="Screenshot 2025-05-07 at 9 16 52 AM" src="https://github.com/user-attachments/assets/d2f43c5f-4092-4efe-ac31-a32cbd025fe3" />
| true |
3,043,615,210 | https://api.github.com/repos/huggingface/datasets/issues/7556 | https://github.com/huggingface/datasets/pull/7556 | 7,556 | Add `--merge-pull-request` option for `convert_to_parquet` | closed | 2 | 2025-05-06T18:05:05 | 2025-07-18T19:09:10 | 2025-07-18T19:09:10 | klamike | [] | Closes #7527
Note that this implementation **will only merge the last PR in the case that they get split up by `push_to_hub`**. See https://github.com/huggingface/datasets/discussions/7555 for more details. | true |
3,043,089,844 | https://api.github.com/repos/huggingface/datasets/issues/7554 | https://github.com/huggingface/datasets/issues/7554 | 7,554 | datasets downloads and generates all splits, even though a single split is requested (for dataset with loading script) | closed | 2 | 2025-05-06T14:43:38 | 2025-05-07T14:53:45 | 2025-05-07T14:53:44 | sei-eschwartz | [] | ### Describe the bug
`datasets` downloads and generates all splits, even though a single split is requested. [This](https://huggingface.co/datasets/jordiae/exebench) is the dataset in question. It uses a loading script. I am not 100% sure that this is a bug, because maybe with loading scripts `datasets` must actual... | false |
3,042,953,907 | https://api.github.com/repos/huggingface/datasets/issues/7553 | https://github.com/huggingface/datasets/pull/7553 | 7,553 | Rebatch arrow iterables before formatted iterable | closed | 2 | 2025-05-06T13:59:58 | 2025-05-07T13:17:41 | 2025-05-06T14:03:42 | lhoestq | [] | close https://github.com/huggingface/datasets/issues/7538 and https://github.com/huggingface/datasets/issues/7475 | true |
3,040,258,084 | https://api.github.com/repos/huggingface/datasets/issues/7552 | https://github.com/huggingface/datasets/pull/7552 | 7,552 | Enable xet in push to hub | closed | 1 | 2025-05-05T17:02:09 | 2025-05-06T12:42:51 | 2025-05-06T12:42:48 | lhoestq | [] | follows https://github.com/huggingface/huggingface_hub/pull/3035
related to https://github.com/huggingface/datasets/issues/7526 | true |
3,038,114,928 | https://api.github.com/repos/huggingface/datasets/issues/7551 | https://github.com/huggingface/datasets/issues/7551 | 7,551 | Issue with offline mode and partial dataset cached | open | 4 | 2025-05-04T16:49:37 | 2025-05-13T03:18:43 | null | nrv | [] | ### Describe the bug
Hi,
an issue related to #4760 here: when loading a single file from a dataset, it cannot be accessed in offline mode afterwards
### Steps to reproduce the bug
```python
import os
# os.environ["HF_HUB_OFFLINE"] = "1"
os.environ["HF_TOKEN"] = "xxxxxxxxxxxxxx"
import datasets
dataset_name = "uonlp/... | false |
3,037,017,367 | https://api.github.com/repos/huggingface/datasets/issues/7550 | https://github.com/huggingface/datasets/pull/7550 | 7,550 | disable aiohttp depend for python 3.13t free-threading compat | closed | 0 | 2025-05-03T00:28:18 | 2025-05-03T00:28:24 | 2025-05-03T00:28:24 | Qubitium | [] | null | true |
3,036,272,015 | https://api.github.com/repos/huggingface/datasets/issues/7549 | https://github.com/huggingface/datasets/issues/7549 | 7,549 | TypeError: Couldn't cast array of type string to null on webdataset format dataset | open | 1 | 2025-05-02T15:18:07 | 2025-05-02T15:37:05 | null | narugo1992 | [] | ### Describe the bug
```python
from datasets import load_dataset
dataset = load_dataset("animetimm/danbooru-wdtagger-v4-w640-ws-30k")
```
got
```
File "/home/ubuntu/miniconda3/lib/python3.10/site-packages/datasets/arrow_writer.py", line 626, in write_batch
arrays.append(pa.array(typed_sequence))
File "pyarro... | false |
3,035,568,851 | https://api.github.com/repos/huggingface/datasets/issues/7548 | https://github.com/huggingface/datasets/issues/7548 | 7,548 | Python 3.13t (free threads) Compat | open | 7 | 2025-05-02T09:20:09 | 2025-05-12T15:11:32 | null | Qubitium | [] | ### Describe the bug
Cannot install `datasets` under `python 3.13t` because its dependency `aiohttp` cannot be built for free-threading Python.
The `free threading` support issue in `aiohttp` has been active since August 2024! Ouch.
https://github.com/aio-libs/aiohttp/issues/8796#issue-2475941784
`pip install... | false |
3,034,830,291 | https://api.github.com/repos/huggingface/datasets/issues/7547 | https://github.com/huggingface/datasets/pull/7547 | 7,547 | Avoid global umask for setting file mode. | closed | 1 | 2025-05-01T22:24:24 | 2025-05-06T13:05:00 | 2025-05-06T13:05:00 | ryan-clancy | [] | This PR updates the method for setting the permissions on `cache_path` after calling `shutil.move`. The call to `shutil.move` may not preserve permissions if the source and destination are on different filesystems. Reading and resetting umask can cause race conditions, so directly read what permissions were set for the... | true |
3,034,018,298 | https://api.github.com/repos/huggingface/datasets/issues/7546 | https://github.com/huggingface/datasets/issues/7546 | 7,546 | Large memory use when loading large datasets to a ZFS pool | closed | 4 | 2025-05-01T14:43:47 | 2025-05-13T13:30:09 | 2025-05-13T13:29:53 | FredHaa | [] | ### Describe the bug
When I load large parquet based datasets from the hub like `MLCommons/peoples_speech` using `load_dataset`, all my memory (500GB) is used and isn't released after loading, meaning that the process is terminated by the kernel if I try to load an additional dataset. This makes it impossible to train... | false |
3,031,617,547 | https://api.github.com/repos/huggingface/datasets/issues/7545 | https://github.com/huggingface/datasets/issues/7545 | 7,545 | Networked Pull Through Cache | open | 0 | 2025-04-30T15:16:33 | 2025-04-30T15:16:33 | null | wrmedford | [
"enhancement"
] | ### Feature request
Introduce an HF_DATASET_CACHE_NETWORK_LOCATION configuration (e.g. an environment variable) together with a companion network cache service.
Enable a three-tier cache lookup for datasets:
1. Local on-disk cache
2. Configurable network cache proxy
3. Official Hugging Face Hub
### Motivation
- Dis... | false |
3,027,024,285 | https://api.github.com/repos/huggingface/datasets/issues/7544 | https://github.com/huggingface/datasets/pull/7544 | 7,544 | Add try_original_type to DatasetDict.map | closed | 3 | 2025-04-29T04:39:44 | 2025-05-05T14:42:49 | 2025-05-05T14:42:49 | yoshitomo-matsubara | [] | This PR resolves #7472 for DatasetDict
The previously merged PR #7483 added `try_original_type` to ArrowDataset, but DatasetDict misses `try_original_type`
Cc: @lhoestq | true |
3,026,867,706 | https://api.github.com/repos/huggingface/datasets/issues/7543 | https://github.com/huggingface/datasets/issues/7543 | 7,543 | The memory-disk mapping failure issue of the map function (resolved, but there are some suggestions) | closed | 0 | 2025-04-29T03:04:59 | 2025-04-30T02:22:17 | 2025-04-30T02:22:17 | jxma20 | [] | ### Describe the bug
## bug
When the map function processes a large dataset, it temporarily stores the data in a cache file on the disk. After the data is stored, the memory occupied by it is released. Therefore, when using the map function to process a large-scale dataset, only a dataset space of the size of `writer_... | false |
3,025,054,630 | https://api.github.com/repos/huggingface/datasets/issues/7542 | https://github.com/huggingface/datasets/pull/7542 | 7,542 | set dev version | closed | 1 | 2025-04-28T14:03:48 | 2025-04-28T14:08:37 | 2025-04-28T14:04:00 | lhoestq | [] | null | true |
3,025,045,919 | https://api.github.com/repos/huggingface/datasets/issues/7541 | https://github.com/huggingface/datasets/pull/7541 | 7,541 | release: 3.5.1 | closed | 1 | 2025-04-28T14:00:59 | 2025-04-28T14:03:38 | 2025-04-28T14:01:54 | lhoestq | [] | null | true |
3,024,862,966 | https://api.github.com/repos/huggingface/datasets/issues/7540 | https://github.com/huggingface/datasets/pull/7540 | 7,540 | support pyarrow 20 | closed | 1 | 2025-04-28T13:01:11 | 2025-04-28T13:23:53 | 2025-04-28T13:23:52 | lhoestq | [] | fix
```
TypeError: ArrayExtensionArray.to_pylist() got an unexpected keyword argument 'maps_as_pydicts'
``` | true |
3,023,311,163 | https://api.github.com/repos/huggingface/datasets/issues/7539 | https://github.com/huggingface/datasets/pull/7539 | 7,539 | Fix IterableDataset state_dict shard_example_idx counting | closed | 2 | 2025-04-27T20:41:18 | 2025-05-06T14:24:25 | 2025-05-06T14:24:24 | Harry-Yang0518 | [] | # Fix IterableDataset's state_dict shard_example_idx reporting
## Description
This PR fixes issue #7475 where the `shard_example_idx` value in `IterableDataset`'s `state_dict()` always equals the number of samples in a shard, even if only a few examples have been consumed.
The issue is in the `_iter_arrow` met... | true |
3,023,280,056 | https://api.github.com/repos/huggingface/datasets/issues/7538 | https://github.com/huggingface/datasets/issues/7538 | 7,538 | `IterableDataset` drops samples when resuming from a checkpoint | closed | 1 | 2025-04-27T19:34:49 | 2025-05-06T14:04:05 | 2025-05-06T14:03:42 | mariosasko | [
"bug"
] | When resuming from a checkpoint, `IterableDataset` will drop samples if `num_shards % world_size == 0` and the underlying example supports `iter_arrow` and needs to be formatted.
In that case, the `FormattedExamplesIterable` fetches a batch of samples from the child iterable's `iter_arrow` and yields them one by one ... | false |
3,018,792,966 | https://api.github.com/repos/huggingface/datasets/issues/7537 | https://github.com/huggingface/datasets/issues/7537 | 7,537 | `datasets.map(..., num_proc=4)` multi-processing fails | open | 1 | 2025-04-25T01:53:47 | 2025-05-06T13:12:08 | null | faaany | [] | The following code fails in python 3.11+
```python
tokenized_datasets = datasets.map(tokenize_function, batched=True, num_proc=4, remove_columns=["text"])
```
Error log:
```bash
Traceback (most recent call last):
File "/usr/local/lib/python3.12/dist-packages/multiprocess/process.py", line 315, in _bootstrap
self.ru... | false |
3,018,425,549 | https://api.github.com/repos/huggingface/datasets/issues/7536 | https://github.com/huggingface/datasets/issues/7536 | 7,536 | [Errno 13] Permission denied: on `.incomplete` file | closed | 4 | 2025-04-24T20:52:45 | 2025-05-06T13:05:01 | 2025-05-06T13:05:01 | ryan-clancy | [] | ### Describe the bug
When downloading a dataset, we frequently hit the below Permission Denied error. This looks to happen (at least) across datasets in HF, S3, and GCS.
It looks like the `temp_file` being passed [here](https://github.com/huggingface/datasets/blob/main/src/datasets/utils/file_utils.py#L412) can somet... | false |
3,018,289,872 | https://api.github.com/repos/huggingface/datasets/issues/7535 | https://github.com/huggingface/datasets/pull/7535 | 7,535 | Change dill version in requirements | open | 1 | 2025-04-24T19:44:28 | 2025-05-19T14:51:29 | null | JGrel | [] | Change dill version to >=0.3.9,<0.4.5 and check for errors | true |
3,017,259,407 | https://api.github.com/repos/huggingface/datasets/issues/7534 | https://github.com/huggingface/datasets/issues/7534 | 7,534 | TensorFlow RaggedTensor Support (batch-level) | open | 4 | 2025-04-24T13:14:52 | 2025-06-30T17:03:39 | null | Lundez | [
"enhancement"
] | ### Feature request
Hi,
Currently, `datasets` does not support RaggedTensor output at the batch level.
When building an Object Detection Dataset (with TensorFlow) I need to enable RaggedTensors, as that's how BBoxes & classes are expected from the Keras Model POV.
Currently there's an error thrown saying that "Nested Data is ... | false
3,015,075,086 | https://api.github.com/repos/huggingface/datasets/issues/7533 | https://github.com/huggingface/datasets/pull/7533 | 7,533 | Add custom fingerprint support to `from_generator` | open | 3 | 2025-04-23T19:31:35 | 2025-07-10T09:29:35 | null | simonreise | [] | This PR adds `dataset_id_suffix` parameter to 'Dataset.from_generator' function.
`Dataset.from_generator` function passes all of its arguments to `BuilderConfig.create_config_id`, including generator function itself. `BuilderConfig.create_config_id` function tries to hash all the args, which can take a large amount ... | true |
3,009,546,204 | https://api.github.com/repos/huggingface/datasets/issues/7532 | https://github.com/huggingface/datasets/pull/7532 | 7,532 | Document the HF_DATASETS_CACHE environment variable in the datasets cache documentation | closed | 3 | 2025-04-22T00:23:13 | 2025-05-06T15:54:38 | 2025-05-06T15:54:38 | Harry-Yang0518 | [] |
This pull request updates the Datasets documentation to include the `HF_DATASETS_CACHE` environment variable. While the current documentation only mentions `HF_HOME` for overriding the default cache directory, `HF_DATASETS_CACHE` is also a supported and useful option for specifying a custom cache location for dataset... | true |
3,008,914,887 | https://api.github.com/repos/huggingface/datasets/issues/7531 | https://github.com/huggingface/datasets/issues/7531 | 7,531 | Deepspeed reward training hangs at end of training with Dataset.from_list | open | 2 | 2025-04-21T17:29:20 | 2025-06-29T06:20:45 | null | Matt00n | [] | There seems to be a weird interaction between Deepspeed, the Dataset.from_list method and trl's RewardTrainer. On a multi-GPU setup (10 A100s), training always hangs at the very end of training until it times out. The training itself works fine until the end of training and running the same script with Deepspeed on a s... | false |
3,007,452,499 | https://api.github.com/repos/huggingface/datasets/issues/7530 | https://github.com/huggingface/datasets/issues/7530 | 7,530 | How to solve "Spaces stuck in Building" problems | closed | 3 | 2025-04-21T03:08:38 | 2025-04-22T07:49:52 | 2025-04-22T07:49:52 | ghost | [] | ### Describe the bug
Public spaces may get stuck in Building after restarting; the error log is as follows:
build error
Unexpected job error
ERROR: failed to push spaces-registry.huggingface.tech/spaces/*:cpu-*-*: unexpected status from HEAD request to https://spaces-registry.huggingface.tech/v2/spaces/*/manifests/cpu-*-*: 401... | false |
3,007,118,969 | https://api.github.com/repos/huggingface/datasets/issues/7529 | https://github.com/huggingface/datasets/issues/7529 | 7,529 | audio folder builder cannot detect custom split name | open | 0 | 2025-04-20T16:53:21 | 2025-04-20T16:53:21 | null | phineas-pta | [] | ### Describe the bug
when using the audio folder builder (`load_dataset("audiofolder", data_dir="/path/to/folder")`), it cannot detect custom split names other than train/validation/test
### Steps to reproduce the bug
i have the following folder structure
```
my_dataset/
├── train/
│ ├── lorem.wav
│ ├── …
│ └── met... | false |
3,006,433,485 | https://api.github.com/repos/huggingface/datasets/issues/7528 | https://github.com/huggingface/datasets/issues/7528 | 7,528 | Data Studio Error: Convert JSONL incorrectly | open | 1 | 2025-04-19T13:21:44 | 2025-05-06T13:18:38 | null | zxccade | [] | ### Describe the bug
Hi there,
I uploaded a dataset here https://huggingface.co/datasets/V-STaR-Bench/V-STaR, but I found that Data Studio incorrectly converts the "bboxes" value for the whole dataset. Therefore, anyone who downloaded the dataset via the API would get the wrong "bboxes" value in the data file.
Could ... | false |
3,005,242,422 | https://api.github.com/repos/huggingface/datasets/issues/7527 | https://github.com/huggingface/datasets/issues/7527 | 7,527 | Auto-merge option for `convert-to-parquet` | closed | 4 | 2025-04-18T16:03:22 | 2025-07-18T19:09:03 | 2025-07-18T19:09:03 | klamike | [
"enhancement"
] | ### Feature request
Add a command-line option, e.g. `--auto-merge-pull-request` that enables automatic merging of the commits created by the `convert-to-parquet` tool.
### Motivation
Large datasets may result in dozens of PRs due to the splitting mechanism. Each of these has to be manually accepted via the website.
... | false |
3,005,107,536 | https://api.github.com/repos/huggingface/datasets/issues/7526 | https://github.com/huggingface/datasets/issues/7526 | 7,526 | Faster downloads/uploads with Xet storage | open | 0 | 2025-04-18T14:46:42 | 2025-05-12T12:09:09 | null | lhoestq | [] | 
## Xet is out !
Over the past few weeks, Hugging Face’s [Xet Team](https://huggingface.co/xet-team) took a major step forward by [migrating the first Model and Dataset repositories off LFS and to Xet storage](https://huggingface... | false |
3,003,032,248 | https://api.github.com/repos/huggingface/datasets/issues/7525 | https://github.com/huggingface/datasets/pull/7525 | 7,525 | Fix indexing in split commit messages | closed | 1 | 2025-04-17T17:06:26 | 2025-04-28T14:26:27 | 2025-04-28T14:26:27 | klamike | [] | When a large commit is split up, it seems the commit index in the message is zero-based while the total number is one-based. I came across this running `convert-to-parquet` and was wondering why there was no `6-of-6` commit. This PR fixes that by adding one to the commit index, so both are one-based.
Current behavio... | true |
3,002,067,826 | https://api.github.com/repos/huggingface/datasets/issues/7524 | https://github.com/huggingface/datasets/pull/7524 | 7,524 | correct use with polars example | closed | 0 | 2025-04-17T10:19:19 | 2025-04-28T13:48:34 | 2025-04-28T13:48:33 | SiQube | [] | null | true |
2,999,616,692 | https://api.github.com/repos/huggingface/datasets/issues/7523 | https://github.com/huggingface/datasets/pull/7523 | 7,523 | mention av in video docs | closed | 1 | 2025-04-16T13:11:12 | 2025-04-16T13:13:45 | 2025-04-16T13:11:42 | lhoestq | [] | null | true |
2,998,169,017 | https://api.github.com/repos/huggingface/datasets/issues/7522 | https://github.com/huggingface/datasets/pull/7522 | 7,522 | Preserve formatting in concatenated IterableDataset | closed | 1 | 2025-04-16T02:37:33 | 2025-05-19T15:07:38 | 2025-05-19T15:07:37 | francescorubbo | [] | Fixes #7515 | true |