url string | repository_url string | labels_url string | comments_url string | events_url string | html_url string | id int64 | node_id string | number int64 | title string | user dict | labels list | state string | locked bool | assignees list | milestone null | comments list | created_at timestamp[s] | updated_at timestamp[s] | closed_at timestamp[s] | assignee dict | author_association string | type null | active_lock_reason null | draft bool | pull_request dict | body string | closed_by dict | reactions dict | timeline_url string | performed_via_github_app null | state_reason string | sub_issues_summary dict | issue_dependencies_summary dict | pinned_comment null | is_pull_request bool |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
https://api.github.com/repos/huggingface/datasets/issues/8049 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/8049/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/8049/comments | https://api.github.com/repos/huggingface/datasets/issues/8049/events | https://github.com/huggingface/datasets/pull/8049 | 4,032,135,159 | PR_kwDODunzps7IZR_m | 8,049 | Fix typos in iterable_dataset.py | {
"login": "omkar-334",
"id": 40126336,
"node_id": "MDQ6VXNlcjQwMTI2MzM2",
"avatar_url": "https://avatars.githubusercontent.com/u/40126336?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/omkar-334",
"html_url": "https://github.com/omkar-334",
"followers_url": "https://api.github.com/users/omkar-334/followers",
"following_url": "https://api.github.com/users/omkar-334/following{/other_user}",
"gists_url": "https://api.github.com/users/omkar-334/gists{/gist_id}",
"starred_url": "https://api.github.com/users/omkar-334/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/omkar-334/subscriptions",
"organizations_url": "https://api.github.com/users/omkar-334/orgs",
"repos_url": "https://api.github.com/users/omkar-334/repos",
"events_url": "https://api.github.com/users/omkar-334/events{/privacy}",
"received_events_url": "https://api.github.com/users/omkar-334/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | open | false | [] | null | [] | 2026-03-06T04:20:52 | 2026-03-06T04:21:10 | null | null | NONE | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/8049",
"html_url": "https://github.com/huggingface/datasets/pull/8049",
"diff_url": "https://github.com/huggingface/datasets/pull/8049.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/8049.patch",
"merged_at": null
} | Fixes #8038 | null | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/8049/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/8049/timeline | null | null | null | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/8048 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/8048/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/8048/comments | https://api.github.com/repos/huggingface/datasets/issues/8048/events | https://github.com/huggingface/datasets/issues/8048 | 4,031,613,102 | I_kwDODunzps7wTYiu | 8,048 | Support 3D Datasets | {
"login": "Vinay-Umrethe",
"id": 175500353,
"node_id": "U_kgDOCnXsQQ",
"avatar_url": "https://avatars.githubusercontent.com/u/175500353?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Vinay-Umrethe",
"html_url": "https://github.com/Vinay-Umrethe",
"followers_url": "https://api.github.com/users/Vinay-Umrethe/followers",
"following_url": "https://api.github.com/users/Vinay-Umrethe/following{/other_user}",
"gists_url": "https://api.github.com/users/Vinay-Umrethe/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Vinay-Umrethe/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Vinay-Umrethe/subscriptions",
"organizations_url": "https://api.github.com/users/Vinay-Umrethe/orgs",
"repos_url": "https://api.github.com/users/Vinay-Umrethe/repos",
"events_url": "https://api.github.com/users/Vinay-Umrethe/events{/privacy}",
"received_events_url": "https://api.github.com/users/Vinay-Umrethe/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [
{
"id": 1935892871,
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement",
"name": "enhancement",
"color": "a2eeef",
"default": true,
"description": "New feature or request"
}
] | open | false | [] | null | [] | 2026-03-06T01:23:08 | 2026-03-06T11:57:06 | null | null | NONE | null | null | null | null | ### Feature request
In the `datasets` library we have options to load `image` and `audio` datasets, which can be viewed in the Dataset Viewer, like:
```python
from datasets import load_dataset, Image
from datasets import load_dataset, Audio
```
> https://huggingface.co/docs/datasets/en/image_load & https://huggingface.co/docs/datasets/en/audio_process
---
I guess there's a need for supporting 3D datasets as well, for *storing* & *loading* 3D object files like `.glb`, `.ply`, `.obj`, etc., maybe with something like:
```python
from datasets import load_dataset, Mesh  # or `_3D` or whatever the naming convention is
```
### Motivation
Multi-modal datasets are rapidly increasing, with modalities like 3D, video, etc.; `datasets` already supports an experimental `Video` feature.
I have a dataset where I've stored everything as a dict so it at least looks balanced across all four columns and four modalities (Text, Image, Audio, 3D Mesh), but it doesn't look good since raw binary bytes are visible, and they need to be decoded for the specific field containing the binary data.
### Your contribution
Perhaps I could have made a PR for this, but since things might not get much attention here, I shouldn't...
Contributors can try to support it so that the Hugging Face server isn't under the heavy load of rendering a preview image for each mesh in the Dataset Viewer, which could crash it. A simple SVG icon saying "3D" or "Mesh" in place of a preview would be good, or they could render an image from the 3D data and use it as the preview, just like Image/Video.
Edit:
So I'm working on a PR for this, will push soon...
| null | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/8048/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/8048/timeline | null | null | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | {
"blocked_by": 0,
"total_blocked_by": 0,
"blocking": 0,
"total_blocking": 0
} | null | false |
https://api.github.com/repos/huggingface/datasets/issues/8047 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/8047/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/8047/comments | https://api.github.com/repos/huggingface/datasets/issues/8047/events | https://github.com/huggingface/datasets/pull/8047 | 4,030,434,134 | PR_kwDODunzps7IUKWg | 8,047 | fix: handle nested null types in feature alignment for multi-proc map | {
"login": "ain-soph",
"id": 13214530,
"node_id": "MDQ6VXNlcjEzMjE0NTMw",
"avatar_url": "https://avatars.githubusercontent.com/u/13214530?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ain-soph",
"html_url": "https://github.com/ain-soph",
"followers_url": "https://api.github.com/users/ain-soph/followers",
"following_url": "https://api.github.com/users/ain-soph/following{/other_user}",
"gists_url": "https://api.github.com/users/ain-soph/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ain-soph/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ain-soph/subscriptions",
"organizations_url": "https://api.github.com/users/ain-soph/orgs",
"repos_url": "https://api.github.com/users/ain-soph/repos",
"events_url": "https://api.github.com/users/ain-soph/events{/privacy}",
"received_events_url": "https://api.github.com/users/ain-soph/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | open | false | [] | null | [] | 2026-03-05T20:28:50 | 2026-03-05T20:29:41 | null | null | NONE | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/8047",
"html_url": "https://github.com/huggingface/datasets/pull/8047",
"diff_url": "https://github.com/huggingface/datasets/pull/8047.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/8047.patch",
"merged_at": null
} | When using `Dataset.map()` with `num_proc > 1`, shards containing only empty lists get their features inferred as `List(Value('null'))`, which is incompatible with `List(Struct({...}))` from shards with actual data.
The existing `_check_if_features_can_be_aligned` and `_align_features` only handled top-level `Value('null')` compatibility. This commit adds a `_is_null_feature()` helper that recursively checks for null types inside container features (`List, LargeList, Sequence`), and uses it in both alignment functions.
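The recursive check can be sketched with plain stand-in classes (the `Value` and `List` classes below are simplified stand-ins for the `datasets` feature types of the same names, not the PR's actual implementation, which also covers `LargeList` and `Sequence`):

```python
# Simplified sketch of a recursive null-type check over a feature tree.
class Value:
    def __init__(self, dtype):
        self.dtype = dtype

class List:
    def __init__(self, feature):
        self.feature = feature

def is_null_feature(feature):
    """True if the feature is Value('null') or a (nested) list of it."""
    if isinstance(feature, Value):
        return feature.dtype == "null"
    if isinstance(feature, List):  # the real helper also handles LargeList/Sequence
        return is_null_feature(feature.feature)
    return False
```

With this, `List(Value('null'))` from an empty-list shard can be treated as alignable with `List(Struct({...}))` from a shard with data, the same way a top-level `Value('null')` already is.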
Fixes #8046 | null | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/8047/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/8047/timeline | null | null | null | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/8046 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/8046/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/8046/comments | https://api.github.com/repos/huggingface/datasets/issues/8046/events | https://github.com/huggingface/datasets/issues/8046 | 4,030,348,328 | I_kwDODunzps7wOjwo | 8,046 | `Dataset.map()` with `num_proc > 1` fails when some shards produce `List(Value('null'))` (empty lists) | {
"login": "ain-soph",
"id": 13214530,
"node_id": "MDQ6VXNlcjEzMjE0NTMw",
"avatar_url": "https://avatars.githubusercontent.com/u/13214530?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ain-soph",
"html_url": "https://github.com/ain-soph",
"followers_url": "https://api.github.com/users/ain-soph/followers",
"following_url": "https://api.github.com/users/ain-soph/following{/other_user}",
"gists_url": "https://api.github.com/users/ain-soph/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ain-soph/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ain-soph/subscriptions",
"organizations_url": "https://api.github.com/users/ain-soph/orgs",
"repos_url": "https://api.github.com/users/ain-soph/repos",
"events_url": "https://api.github.com/users/ain-soph/events{/privacy}",
"received_events_url": "https://api.github.com/users/ain-soph/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | open | false | [] | null | [
"\n### Root cause\n\nIn `src/datasets/features/features.py`, both `_check_if_features_can_be_aligned` and `_align_features` use:\n\n```python\nisinstance(v, Value) and v.dtype == \"null\"\n```\n\nto check for null compatibility. However, `List(Value('null'))` is **not** an instance of `Value` — it's an instance of ... | 2026-03-05T20:12:13 | 2026-03-05T20:12:35 | null | null | NONE | null | null | null | null | ### Describe the bug
When using `Dataset.map()` with `num_proc > 1`, if the map function produces a list column where some shards contain only empty lists `[]` and other shards contain non-empty lists of structs, the feature type inference is inconsistent across shards:
- Shards with only empty lists → `List(Value('null'))`
- Shards with non-empty lists → `List(Struct({...}))`
During shard concatenation, `_check_if_features_can_be_aligned` raises a `ValueError` because it only handles top-level `Value("null")` compatibility but does **not** handle nested null types like `List(Value('null'))`.
Setting `num_proc=1` works correctly because all data is processed in a single shard.
### Steps to reproduce the bug
```python
import datasets
print(datasets.__version__) # 4.6.1
data = {
"id": [f"item_{i}" for i in range(8)],
"labels": [
[{"type": "A", "word": "x", "start": 0, "end": 5}], # only first item has data
[], [], [], [], [], [], [] # rest are empty
]
}
ds = datasets.Dataset.from_dict(data)
# Works fine with num_proc=1
result = ds.map(lambda x: {"label": x["labels"]}, num_proc=1)
print(result.features) # OK
# Fails with num_proc=2
result = ds.map(lambda x: {"label": x["labels"]}, num_proc=2)
# ValueError: The features can't be aligned because the key label of features
# ... has unexpected type - List(Value('null'))
# (expected either List({'end': Value('int64'), ...}) or Value("null").
```
### Expected behavior
`List(Value('null'))` should be treated as compatible with any `List(...)` type during feature alignment, similar to how `Value('null')` is already treated as compatible with any other type.
### Environment info
- `datasets` version: 4.6.1 (also reproduced on 4.1.1)
- Python version: 3.12.0
- OS: Linux | null | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/8046/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/8046/timeline | null | null | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | {
"blocked_by": 0,
"total_blocked_by": 0,
"blocking": 0,
"total_blocking": 0
} | null | false |
https://api.github.com/repos/huggingface/datasets/issues/8045 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/8045/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/8045/comments | https://api.github.com/repos/huggingface/datasets/issues/8045/events | https://github.com/huggingface/datasets/pull/8045 | 4,028,881,818 | PR_kwDODunzps7IPA1N | 8,045 | Write IterableDataset to parquet incrementally instead of materializing entire shard in memory | {
"login": "HaukurPall",
"id": 9451463,
"node_id": "MDQ6VXNlcjk0NTE0NjM=",
"avatar_url": "https://avatars.githubusercontent.com/u/9451463?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/HaukurPall",
"html_url": "https://github.com/HaukurPall",
"followers_url": "https://api.github.com/users/HaukurPall/followers",
"following_url": "https://api.github.com/users/HaukurPall/following{/other_user}",
"gists_url": "https://api.github.com/users/HaukurPall/gists{/gist_id}",
"starred_url": "https://api.github.com/users/HaukurPall/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/HaukurPall/subscriptions",
"organizations_url": "https://api.github.com/users/HaukurPall/orgs",
"repos_url": "https://api.github.com/users/HaukurPall/repos",
"events_url": "https://api.github.com/users/HaukurPall/events{/privacy}",
"received_events_url": "https://api.github.com/users/HaukurPall/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | open | false | [] | null | [
"I've tested this change along with the changes from my other PRs (https://github.com/HaukurPall/datasets/tree/all-fixes) and this change **dramatically** reduces the memory requirement per process.\r\n\r\nContext:\r\n\r\nI'm working with audio datasets, the audio is embedded in the parquet. When loading a 1GB shar... | 2026-03-05T15:13:53 | 2026-03-06T11:47:00 | null | null | NONE | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/8045",
"html_url": "https://github.com/huggingface/datasets/pull/8045",
"diff_url": "https://github.com/huggingface/datasets/pull/8045.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/8045.patch",
"merged_at": null
} | `IterableDataset.to_parquet()` materialized the entire shard into memory via
`pa.concat_tables(list(...))` before writing. For datasets with large shards and multiprocessing (push_to_hub with num_proc > 0) the memory requirements can be really large.
This PR writes parquet incrementally by iterating Arrow batches and writing each
directly to `pq.ParquetWriter`, reducing the memory requirement from shard size to something around row group size (with lots of caveats).
**Note**
A thing to note is that this code path is only executed when `features` are set on the `IterableDataset`, in f.ex. `map` or the dataset is `cast` before `push_to_hub`.
Changes:
- Extend `ParquetDatasetWriter` to accept both `Dataset` and `IterableDataset`.
- `IterableDataset.to_parquet()` now delegates directly to `ParquetDatasetWriter`
- Use `pq.ParquetWriter` as a context manager for proper cleanup on errors
- Falls back to materialization only when `features` is `None` (schema required upfront)
That being said, I'm not entirely convinced that extending the ParquetDatasetWriter to support IterableDataset is the correct approach or even if there are other footguns related to CDC or other stuff. | null | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/8045/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/8045/timeline | null | null | null | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/8044 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/8044/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/8044/comments | https://api.github.com/repos/huggingface/datasets/issues/8044/events | https://github.com/huggingface/datasets/pull/8044 | 4,028,042,275 | PR_kwDODunzps7IMNl4 | 8,044 | Fix silent data loss in push_to_hub when num_proc > num_shards | {
"login": "HaukurPall",
"id": 9451463,
"node_id": "MDQ6VXNlcjk0NTE0NjM=",
"avatar_url": "https://avatars.githubusercontent.com/u/9451463?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/HaukurPall",
"html_url": "https://github.com/HaukurPall",
"followers_url": "https://api.github.com/users/HaukurPall/followers",
"following_url": "https://api.github.com/users/HaukurPall/following{/other_user}",
"gists_url": "https://api.github.com/users/HaukurPall/gists{/gist_id}",
"starred_url": "https://api.github.com/users/HaukurPall/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/HaukurPall/subscriptions",
"organizations_url": "https://api.github.com/users/HaukurPall/orgs",
"repos_url": "https://api.github.com/users/HaukurPall/repos",
"events_url": "https://api.github.com/users/HaukurPall/events{/privacy}",
"received_events_url": "https://api.github.com/users/HaukurPall/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | open | false | [] | null | [] | 2026-03-05T12:52:39 | 2026-03-05T12:52:39 | null | null | NONE | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/8044",
"html_url": "https://github.com/huggingface/datasets/pull/8044",
"diff_url": "https://github.com/huggingface/datasets/pull/8044.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/8044.patch",
"merged_at": null
} | ## Summary
When `num_proc` exceeds the number of output shards in `Dataset.push_to_hub`, workers
without assigned output shards silently drop their data. The metadata still reports the
correct dataset length (using `len(self[split])`), masking the data loss.
This fix caps `num_jobs` at `num_shards` in `_push_parquet_shards_to_hub`, matching
what the streaming path already does.
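Conceptually the fix is a one-line cap on the number of jobs (a simplified illustration of the shard-assignment logic, not the PR's exact diff; `plan_jobs` is a hypothetical name):

```python
# Never spawn more jobs than shards: otherwise the extra workers receive no
# shard assignment and whatever data they were meant to write is dropped.
def plan_jobs(num_proc, num_shards):
    num_jobs = min(num_proc, num_shards)  # the cap the streaming path already applies
    # distribute shards round-robin across jobs
    return [list(range(job, num_shards, num_jobs)) for job in range(num_jobs)]
```

With 2 shards and `num_proc=6`, this yields 2 jobs covering both shards instead of 6 jobs where 4 have nothing assigned.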
## Reproduction
1. Have a dataset with a small number of shards (e.g. 2)
2. Call `push_to_hub` with `num_proc > num_shards` (e.g. 6)
3. Only a fraction of the samples are actually written to the hub | null | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/8044/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/8044/timeline | null | null | null | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/8043 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/8043/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/8043/comments | https://api.github.com/repos/huggingface/datasets/issues/8043/events | https://github.com/huggingface/datasets/pull/8043 | 4,027,936,170 | PR_kwDODunzps7IL3IJ | 8,043 | Add features parameter to IterableDatasetDict.map | {
"login": "HaukurPall",
"id": 9451463,
"node_id": "MDQ6VXNlcjk0NTE0NjM=",
"avatar_url": "https://avatars.githubusercontent.com/u/9451463?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/HaukurPall",
"html_url": "https://github.com/HaukurPall",
"followers_url": "https://api.github.com/users/HaukurPall/followers",
"following_url": "https://api.github.com/users/HaukurPall/following{/other_user}",
"gists_url": "https://api.github.com/users/HaukurPall/gists{/gist_id}",
"starred_url": "https://api.github.com/users/HaukurPall/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/HaukurPall/subscriptions",
"organizations_url": "https://api.github.com/users/HaukurPall/orgs",
"repos_url": "https://api.github.com/users/HaukurPall/repos",
"events_url": "https://api.github.com/users/HaukurPall/events{/privacy}",
"received_events_url": "https://api.github.com/users/HaukurPall/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | open | false | [] | null | [] | 2026-03-05T12:30:46 | 2026-03-05T12:30:46 | null | null | NONE | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/8043",
"html_url": "https://github.com/huggingface/datasets/pull/8043",
"diff_url": "https://github.com/huggingface/datasets/pull/8043.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/8043.patch",
"merged_at": null
} | IterableDataset.map accepts a features parameter to declare the output schema, but IterableDatasetDict.map did not expose it. This meant users of IterableDatasetDict had no way to preserve feature metadata through map operations. | null | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/8043/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/8043/timeline | null | null | null | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/8042 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/8042/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/8042/comments | https://api.github.com/repos/huggingface/datasets/issues/8042/events | https://github.com/huggingface/datasets/pull/8042 | 4,027,903,841 | PR_kwDODunzps7ILwT5 | 8,042 | Fix schema enforcement in streaming _convert_to_arrow | {
"login": "HaukurPall",
"id": 9451463,
"node_id": "MDQ6VXNlcjk0NTE0NjM=",
"avatar_url": "https://avatars.githubusercontent.com/u/9451463?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/HaukurPall",
"html_url": "https://github.com/HaukurPall",
"followers_url": "https://api.github.com/users/HaukurPall/followers",
"following_url": "https://api.github.com/users/HaukurPall/following{/other_user}",
"gists_url": "https://api.github.com/users/HaukurPall/gists{/gist_id}",
"starred_url": "https://api.github.com/users/HaukurPall/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/HaukurPall/subscriptions",
"organizations_url": "https://api.github.com/users/HaukurPall/orgs",
"repos_url": "https://api.github.com/users/HaukurPall/repos",
"events_url": "https://api.github.com/users/HaukurPall/events{/privacy}",
"received_events_url": "https://api.github.com/users/HaukurPall/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | open | false | [] | null | [] | 2026-03-05T12:23:47 | 2026-03-05T12:23:47 | null | null | NONE | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/8042",
"html_url": "https://github.com/huggingface/datasets/pull/8042",
"diff_url": "https://github.com/huggingface/datasets/pull/8042.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/8042.patch",
"merged_at": null
} | When the iter_arrow chain is broken (e.g. after .map() without with_format("arrow")), _convert_to_arrow falls back to pa.Table.from_pylist() which infers types from values. If early batches contain None or empty lists, Arrow infers null/list<null> types that conflict with later batches containing real data, causing ArrowInvalid schema mismatch errors.
Add an optional features parameter to _convert_to_arrow that applies cast_table_to_features to each produced table, mirroring what ArrowWriter.write_table does in the map-style path. | null | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/8042/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/8042/timeline | null | null | null | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/8041 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/8041/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/8041/comments | https://api.github.com/repos/huggingface/datasets/issues/8041/events | https://github.com/huggingface/datasets/pull/8041 | 4,027,152,089 | PR_kwDODunzps7IJPKo | 8,041 | Use num_examples instead of len(self) for iterable_dataset's SplitInfo | {
"login": "HaukurPall",
"id": 9451463,
"node_id": "MDQ6VXNlcjk0NTE0NjM=",
"avatar_url": "https://avatars.githubusercontent.com/u/9451463?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/HaukurPall",
"html_url": "https://github.com/HaukurPall",
"followers_url": "https://api.github.com/users/HaukurPall/followers",
"following_url": "https://api.github.com/users/HaukurPall/following{/other_user}",
"gists_url": "https://api.github.com/users/HaukurPall/gists{/gist_id}",
"starred_url": "https://api.github.com/users/HaukurPall/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/HaukurPall/subscriptions",
"organizations_url": "https://api.github.com/users/HaukurPall/orgs",
"repos_url": "https://api.github.com/users/HaukurPall/repos",
"events_url": "https://api.github.com/users/HaukurPall/events{/privacy}",
"received_events_url": "https://api.github.com/users/HaukurPall/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | open | false | [] | null | [] | 2026-03-05T09:54:26 | 2026-03-05T09:54:26 | null | null | NONE | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/8041",
"html_url": "https://github.com/huggingface/datasets/pull/8041",
"diff_url": "https://github.com/huggingface/datasets/pull/8041.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/8041.patch",
"merged_at": null
} | Fixes a bug (crash) when pushing individual splits to a repo which has pre-existing data. | null | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/8041/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/8041/timeline | null | null | null | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/8040 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/8040/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/8040/comments | https://api.github.com/repos/huggingface/datasets/issues/8040/events | https://github.com/huggingface/datasets/pull/8040 | 4,025,912,223 | PR_kwDODunzps7IFFnG | 8,040 | Fix the logic for allowed extensions when creating build configurations (#8034) | {
"login": "Nexround",
"id": 12998729,
"node_id": "MDQ6VXNlcjEyOTk4NzI5",
"avatar_url": "https://avatars.githubusercontent.com/u/12998729?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Nexround",
"html_url": "https://github.com/Nexround",
"followers_url": "https://api.github.com/users/Nexround/followers",
"following_url": "https://api.github.com/users/Nexround/following{/other_user}",
"gists_url": "https://api.github.com/users/Nexround/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Nexround/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Nexround/subscriptions",
"organizations_url": "https://api.github.com/users/Nexround/orgs",
"repos_url": "https://api.github.com/users/Nexround/repos",
"events_url": "https://api.github.com/users/Nexround/events{/privacy}",
"received_events_url": "https://api.github.com/users/Nexround/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | open | false | [] | null | [] | 2026-03-05T05:32:45 | 2026-03-05T05:32:45 | null | null | NONE | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/8040",
"html_url": "https://github.com/huggingface/datasets/pull/8040",
"diff_url": "https://github.com/huggingface/datasets/pull/8040.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/8040.patch",
"merged_at": null
} | see #8034 | null | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/8040/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/8040/timeline | null | null | null | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/8039 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/8039/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/8039/comments | https://api.github.com/repos/huggingface/datasets/issues/8039/events | https://github.com/huggingface/datasets/pull/8039 | 4,025,811,123 | PR_kwDODunzps7IEv9Z | 8,039 | Fix non-deterministic by sorting metadata extensions (#8034) | {
"login": "Nexround",
"id": 12998729,
"node_id": "MDQ6VXNlcjEyOTk4NzI5",
"avatar_url": "https://avatars.githubusercontent.com/u/12998729?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Nexround",
"html_url": "https://github.com/Nexround",
"followers_url": "https://api.github.com/users/Nexround/followers",
"following_url": "https://api.github.com/users/Nexround/following{/other_user}",
"gists_url": "https://api.github.com/users/Nexround/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Nexround/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Nexround/subscriptions",
"organizations_url": "https://api.github.com/users/Nexround/orgs",
"repos_url": "https://api.github.com/users/Nexround/repos",
"events_url": "https://api.github.com/users/Nexround/events{/privacy}",
"received_events_url": "https://api.github.com/users/Nexround/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | open | false | [] | null | [] | 2026-03-05T05:07:50 | 2026-03-05T05:07:50 | null | null | NONE | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/8039",
"html_url": "https://github.com/huggingface/datasets/pull/8039",
"diff_url": "https://github.com/huggingface/datasets/pull/8039.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/8039.patch",
"merged_at": null
} | see #8034 | null | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/8039/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/8039/timeline | null | null | null | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/8038 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/8038/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/8038/comments | https://api.github.com/repos/huggingface/datasets/issues/8038/events | https://github.com/huggingface/datasets/issues/8038 | 4,020,184,168 | I_kwDODunzps7vnyRo | 8,038 | Typo in iterable_dataset.py | {
"login": "sybaik1",
"id": 28946888,
"node_id": "MDQ6VXNlcjI4OTQ2ODg4",
"avatar_url": "https://avatars.githubusercontent.com/u/28946888?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sybaik1",
"html_url": "https://github.com/sybaik1",
"followers_url": "https://api.github.com/users/sybaik1/followers",
"following_url": "https://api.github.com/users/sybaik1/following{/other_user}",
"gists_url": "https://api.github.com/users/sybaik1/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sybaik1/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sybaik1/subscriptions",
"organizations_url": "https://api.github.com/users/sybaik1/orgs",
"repos_url": "https://api.github.com/users/sybaik1/repos",
"events_url": "https://api.github.com/users/sybaik1/events{/privacy}",
"received_events_url": "https://api.github.com/users/sybaik1/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | open | false | [] | null | [] | 2026-03-04T05:37:15 | 2026-03-04T05:37:15 | null | null | NONE | null | null | null | null | ### Describe the bug
`streaming=True**kwargs,` should be `streaming=True, **kwargs,`
https://github.com/huggingface/datasets/blob/81027be09d5cd9f06a6d64ef1a8a3e9ebd0f86fd/src/datasets/iterable_dataset.py#L2950
### Steps to reproduce the bug
```
from datasets import IterableDataset
IterableDataset.from_csv("file.csv")
```
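For illustration only (a stub function, not the real `from_csv` signature): the missing comma is valid syntax, so it fails only at call time, because `True**kwargs` is evaluated as exponentiation of `True` by a dict:

```python
def from_csv_stub(path, streaming=False, **kwargs):
    # stand-in for the real method; only the argument handling matters here
    return streaming, kwargs

extra = {"sep": ","}
try:
    # missing comma: parsed as streaming=(True ** extra), not streaming=True, **extra
    from_csv_stub("file.csv", streaming=True**extra)
    raised = False
except TypeError:
    raised = True
assert raised
```

With the comma in place, `from_csv_stub("file.csv", streaming=True, **extra)` works as intended.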
### Expected behavior
Load the CSV file as an iterable dataset.
### Environment info
- `datasets` version: 4.6.1
- Platform: Linux-6.14.0-37-generic-x86_64-with-glibc2.39
- Python version: 3.13.11
- `huggingface_hub` version: 1.5.0
- PyArrow version: 23.0.1
- Pandas version: 3.0.1
- `fsspec` version: 2026.2.0 | null | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/8038/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/8038/timeline | null | null | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | {
"blocked_by": 0,
"total_blocked_by": 0,
"blocking": 0,
"total_blocking": 0
} | null | false |
https://api.github.com/repos/huggingface/datasets/issues/8037 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/8037/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/8037/comments | https://api.github.com/repos/huggingface/datasets/issues/8037/events | https://github.com/huggingface/datasets/issues/8037 | 4,008,928,790 | I_kwDODunzps7u82YW | 8,037 | IterableDataset.filter() chaining fails due to features being lost when is_typed=True | {
"login": "sh0416",
"id": 12251974,
"node_id": "MDQ6VXNlcjEyMjUxOTc0",
"avatar_url": "https://avatars.githubusercontent.com/u/12251974?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sh0416",
"html_url": "https://github.com/sh0416",
"followers_url": "https://api.github.com/users/sh0416/followers",
"following_url": "https://api.github.com/users/sh0416/following{/other_user}",
"gists_url": "https://api.github.com/users/sh0416/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sh0416/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sh0416/subscriptions",
"organizations_url": "https://api.github.com/users/sh0416/orgs",
"repos_url": "https://api.github.com/users/sh0416/repos",
"events_url": "https://api.github.com/users/sh0416/events{/privacy}",
"received_events_url": "https://api.github.com/users/sh0416/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | open | false | [] | null | [] | 2026-03-02T02:30:24 | 2026-03-02T02:30:24 | null | null | NONE | null | null | null | null | ### Describe the bug
Chaining multiple `.filter()` calls on an `IterableDataset` raises `TypeError: 'NoneType' object is not a mapping` on the second `.filter()` call.
In `IterableDataset.filter()`, the internal `ex_iterable` is wrapped with `FormattedExamplesIterable`:
```python
ex_iterable = FormattedExamplesIterable(
    ex_iterable,
    formatting=self._formatting,
    features=None if ex_iterable.is_typed else self._info.features,
    token_per_repo_id=self._token_per_repo_id,
)
```
After the first `.filter()`, the resulting `ex_iterable` becomes typed (`is_typed=True`) because a mask column is added. On the second `.filter()`, since `is_typed=True`, `features=None` is passed to `FormattedExamplesIterable`. This causes `.features` to return `None`, and `FilteredExamplesIterable.__init__` fails when trying to unpack it.
The fix should preserve the existing features when is_typed=True, e.g.:
```python
features=ex_iterable.features if ex_iterable.is_typed else self._info.features
```
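A minimal sketch of the failure mode, independent of `datasets`: unpacking `None` with `**` raises exactly this `TypeError`, which is why `features` must remain a mapping on chained calls:

```python
features = None  # what .features returns after the first filter makes the iterable typed
try:
    # mirrors the shape of Features({**ex_iterable.features, "mask": Value("bool")})
    merged = {**features, "mask": True}
    message = ""
except TypeError as e:
    message = str(e)
assert "not a mapping" in message
```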
### Steps to reproduce the bug
```python
from datasets import load_dataset
ds = load_dataset("bigcode/the-stack-v2", "Scala", split="train", streaming=True)
ds = ds.filter(lambda row: row["src_encoding"] == "UTF-8")
ds = ds.filter(lambda row: row["branch_name"] == "refs/heads/master") # TypeError here
```
## Error traceback
```
File ".../datasets/iterable_dataset.py", line 2975, in filter
ex_iterable = FilteredExamplesIterable(
File ".../datasets/iterable_dataset.py", line 1606, in __init__
features = Features({**ex_iterable.features, self.mask_column_name: Value("bool")})
TypeError: 'NoneType' object is not a mapping
```
### Expected behavior
The error should not be raised in this context.
### Environment info
datasets: 4.5.0 | null | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/8037/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/8037/timeline | null | null | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | {
"blocked_by": 0,
"total_blocked_by": 0,
"blocking": 0,
"total_blocking": 0
} | null | false |
https://api.github.com/repos/huggingface/datasets/issues/8036 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/8036/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/8036/comments | https://api.github.com/repos/huggingface/datasets/issues/8036/events | https://github.com/huggingface/datasets/pull/8036 | 4,006,914,206 | PR_kwDODunzps7HHMKn | 8,036 | follow `cache_dir` in download option when loading datasets | {
"login": "TsXor",
"id": 74352334,
"node_id": "MDQ6VXNlcjc0MzUyMzM0",
"avatar_url": "https://avatars.githubusercontent.com/u/74352334?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/TsXor",
"html_url": "https://github.com/TsXor",
"followers_url": "https://api.github.com/users/TsXor/followers",
"following_url": "https://api.github.com/users/TsXor/following{/other_user}",
"gists_url": "https://api.github.com/users/TsXor/gists{/gist_id}",
"starred_url": "https://api.github.com/users/TsXor/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/TsXor/subscriptions",
"organizations_url": "https://api.github.com/users/TsXor/orgs",
"repos_url": "https://api.github.com/users/TsXor/repos",
"events_url": "https://api.github.com/users/TsXor/events{/privacy}",
"received_events_url": "https://api.github.com/users/TsXor/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | open | false | [] | null | [] | 2026-03-01T10:56:18 | 2026-03-01T10:56:18 | null | null | NONE | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/8036",
"html_url": "https://github.com/huggingface/datasets/pull/8036",
"diff_url": "https://github.com/huggingface/datasets/pull/8036.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/8036.patch",
"merged_at": null
} | fixes #8029 | null | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/8036/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/8036/timeline | null | null | null | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/8035 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/8035/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/8035/comments | https://api.github.com/repos/huggingface/datasets/issues/8035/events | https://github.com/huggingface/datasets/pull/8035 | 4,005,673,675 | PR_kwDODunzps7HDPtk | 8,035 | Update links in README to use markdown format | {
"login": "lukasb1b",
"id": 142339568,
"node_id": "U_kgDOCHvt8A",
"avatar_url": "https://avatars.githubusercontent.com/u/142339568?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lukasb1b",
"html_url": "https://github.com/lukasb1b",
"followers_url": "https://api.github.com/users/lukasb1b/followers",
"following_url": "https://api.github.com/users/lukasb1b/following{/other_user}",
"gists_url": "https://api.github.com/users/lukasb1b/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lukasb1b/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lukasb1b/subscriptions",
"organizations_url": "https://api.github.com/users/lukasb1b/orgs",
"repos_url": "https://api.github.com/users/lukasb1b/repos",
"events_url": "https://api.github.com/users/lukasb1b/events{/privacy}",
"received_events_url": "https://api.github.com/users/lukasb1b/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | open | false | [] | null | [] | 2026-02-28T23:16:20 | 2026-02-28T23:16:20 | null | null | NONE | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/8035",
"html_url": "https://github.com/huggingface/datasets/pull/8035",
"diff_url": "https://github.com/huggingface/datasets/pull/8035.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/8035.patch",
"merged_at": null
} | null | null | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/8035/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/8035/timeline | null | null | null | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/8034 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/8034/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/8034/comments | https://api.github.com/repos/huggingface/datasets/issues/8034/events | https://github.com/huggingface/datasets/issues/8034 | 4,004,188,523 | I_kwDODunzps7uqxFr | 8,034 | [BUG] load_datasets() cannot use the generated Arrow cache correctly | {
"login": "Nexround",
"id": 12998729,
"node_id": "MDQ6VXNlcjEyOTk4NzI5",
"avatar_url": "https://avatars.githubusercontent.com/u/12998729?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Nexround",
"html_url": "https://github.com/Nexround",
"followers_url": "https://api.github.com/users/Nexround/followers",
"following_url": "https://api.github.com/users/Nexround/following{/other_user}",
"gists_url": "https://api.github.com/users/Nexround/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Nexround/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Nexround/subscriptions",
"organizations_url": "https://api.github.com/users/Nexround/orgs",
"repos_url": "https://api.github.com/users/Nexround/repos",
"events_url": "https://api.github.com/users/Nexround/events{/privacy}",
"received_events_url": "https://api.github.com/users/Nexround/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | open | false | [] | null | [
"PYTHONHASHSEED=1 uv run python -c\n \"\nfrom datasets.load import LocalDatasetModuleFactory\nmod = LocalDatasetModuleFactory('/cache/datasets/imagenet-1k')\ndm = mod.get_module()\nprint('Hash:', dm.hash)\n\"\nResolving data files: 100%|█████████| 294/294 [00:00<00:00, 20584.68it/s]\nResolving data files: 100%|████... | 2026-02-28T08:11:23 | 2026-03-04T13:50:08 | null | null | NONE | null | null | null | null | ### Describe the bug
The `datasets` library cannot reuse the generated Arrow cache, seemingly due to a flaw in the internal hash calculation logic.
The following code provides verification.
I am trying to locate the specific code position, and if there are further developments, I will update here.
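A hedged illustration of the suspected class of bug (not the actual `datasets` code): if a hash is computed from a set of metadata extensions in iteration order, it can vary across processes with hash randomization; sorting first makes it deterministic:

```python
import hashlib

def deterministic_hash(extensions):
    # sorted() removes the dependency on set iteration order
    payload = "|".join(sorted(extensions)).encode("utf-8")
    return hashlib.sha256(payload).hexdigest()[:16]

a = deterministic_hash({".jpg", ".png", ".webp"})
b = deterministic_hash({".webp", ".png", ".jpg"})
assert a == b
```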
### Steps to reproduce the bug
python -c "
from datasets.load import LocalDatasetModuleFactory
mod = LocalDatasetModuleFactory('/cache/datasets/imagenet-1k')
dm = mod.get_module()
print('Hash:', dm.hash)
"
Resolving data files: 100%|█████████| 294/294 [00:00<00:00, 23895.46it/s]
Resolving data files: 100%|██████████| 28/28 [00:00<00:00, 221168.57it/s]
Hash: 9e9925e0f7d48775
python -c "
from datasets.load import LocalDatasetModuleFactory
mod = LocalDatasetModuleFactory('/cache/datasets/imagenet-1k')
dm = mod.get_module()
print('Hash:', dm.hash)
"
Resolving data files: 100%|█████████| 294/294 [00:00<00:00, 26252.91it/s]
Resolving data files: 100%|██████████| 28/28 [00:00<00:00, 188205.95it/s]
Hash: 9af23f3d5d488660
### Expected behavior
The expected behavior is that the above code should give the same hash value.
### Environment info
- `datasets` version: 4.6.1
- Platform: Linux-6.8.0-100-generic-x86_64-with-glibc2.39
- Python version: 3.12.12
- `huggingface_hub` version: 1.5.0
- PyArrow version: 23.0.1
- Pandas version: 3.0.1
- `fsspec` version: 2026.2.0 | null | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/8034/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/8034/timeline | null | null | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | {
"blocked_by": 0,
"total_blocked_by": 0,
"blocking": 0,
"total_blocking": 0
} | null | false |
https://api.github.com/repos/huggingface/datasets/issues/8033 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/8033/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/8033/comments | https://api.github.com/repos/huggingface/datasets/issues/8033/events | https://github.com/huggingface/datasets/pull/8033 | 4,003,917,950 | PR_kwDODunzps7G-QP1 | 8,033 | perf: use deque for async map task/index queues | {
"login": "giulio-leone",
"id": 6887247,
"node_id": "MDQ6VXNlcjY4ODcyNDc=",
"avatar_url": "https://avatars.githubusercontent.com/u/6887247?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/giulio-leone",
"html_url": "https://github.com/giulio-leone",
"followers_url": "https://api.github.com/users/giulio-leone/followers",
"following_url": "https://api.github.com/users/giulio-leone/following{/other_user}",
"gists_url": "https://api.github.com/users/giulio-leone/gists{/gist_id}",
"starred_url": "https://api.github.com/users/giulio-leone/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/giulio-leone/subscriptions",
"organizations_url": "https://api.github.com/users/giulio-leone/orgs",
"repos_url": "https://api.github.com/users/giulio-leone/repos",
"events_url": "https://api.github.com/users/giulio-leone/events{/privacy}",
"received_events_url": "https://api.github.com/users/giulio-leone/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | open | false | [] | null | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_8033). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.",
"Friendly ping — CI is green and this is ready for review. Happy to address any feedback... | 2026-02-28T05:07:00 | 2026-02-28T21:32:44 | null | null | NONE | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/8033",
"html_url": "https://github.com/huggingface/datasets/pull/8033",
"diff_url": "https://github.com/huggingface/datasets/pull/8033.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/8033.patch",
"merged_at": null
} | ## Problem
`arrow_dataset.py` and `iterable_dataset.py` drain `indices` and `tasks` lists front-to-back via `.pop(0)` during parallel async map operations. With `MAX_NUM_RUNNING_ASYNC_MAP_FUNCTIONS_IN_PARALLEL` items in flight, each `.pop(0)` is **O(n)** making the drain loop **O(n²)**.
## Solution
Switch both `indices` and `tasks` to `collections.deque` with `.popleft()` for **O(1)** front removal.
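A minimal sketch of the pattern (simplified; the real loops also await the tasks being drained):

```python
from collections import deque

tasks = deque(f"task-{i}" for i in range(5))
drained = []
while tasks:
    drained.append(tasks.popleft())  # O(1), unlike list.pop(0) which is O(n)
assert drained == ["task-0", "task-1", "task-2", "task-3", "task-4"]
```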
## Changes
- `src/datasets/arrow_dataset.py`: Import `deque`, type `tasks` and `indices` as deque, use `.popleft()`
- `src/datasets/iterable_dataset.py`: Import `deque`, type `tasks`, `indices`, and `_owned_loops_and_tasks` entry as deque, use `.popleft()`
## Testing
- Syntax verified via `ast.parse()` on both files | null | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/8033/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/8033/timeline | null | null | null | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/8032 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/8032/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/8032/comments | https://api.github.com/repos/huggingface/datasets/issues/8032/events | https://github.com/huggingface/datasets/pull/8032 | 4,003,285,460 | PR_kwDODunzps7G8PQ8 | 8,032 | set dev version | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | closed | false | [] | null | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_8032). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update."
] | 2026-02-27T23:28:21 | 2026-02-27T23:31:20 | 2026-02-27T23:28:27 | null | MEMBER | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/8032",
"html_url": "https://github.com/huggingface/datasets/pull/8032",
"diff_url": "https://github.com/huggingface/datasets/pull/8032.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/8032.patch",
"merged_at": "2026-02-27T23:28:27"
} | null | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/8032/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/8032/timeline | null | null | null | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/8031 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/8031/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/8031/comments | https://api.github.com/repos/huggingface/datasets/issues/8031/events | https://github.com/huggingface/datasets/pull/8031 | 4,003,278,631 | PR_kwDODunzps7G8N4w | 8,031 | release: 4.6.1 | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | closed | false | [] | null | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_8031). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update."
] | 2026-02-27T23:25:42 | 2026-02-27T23:29:21 | 2026-02-27T23:26:09 | null | MEMBER | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/8031",
"html_url": "https://github.com/huggingface/datasets/pull/8031",
"diff_url": "https://github.com/huggingface/datasets/pull/8031.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/8031.patch",
"merged_at": "2026-02-27T23:26:09"
} | null | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/8031/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/8031/timeline | null | null | null | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/8030 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/8030/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/8030/comments | https://api.github.com/repos/huggingface/datasets/issues/8030/events | https://github.com/huggingface/datasets/pull/8030 | 4,003,238,887 | PR_kwDODunzps7G8F4x | 8,030 | Remove tmp file in push to hub | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | closed | false | [] | null | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_8030). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update."
] | 2026-02-27T23:07:52 | 2026-03-02T09:23:47 | 2026-02-27T23:21:02 | null | MEMBER | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/8030",
"html_url": "https://github.com/huggingface/datasets/pull/8030",
"diff_url": "https://github.com/huggingface/datasets/pull/8030.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/8030.patch",
"merged_at": "2026-02-27T23:21:02"
} | reported in https://github.com/huggingface/datasets/pull/7979#issuecomment-3973374484 | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/8030/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/8030/timeline | null | null | null | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/8029 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/8029/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/8029/comments | https://api.github.com/repos/huggingface/datasets/issues/8029/events | https://github.com/huggingface/datasets/issues/8029 | 4,001,742,455 | I_kwDODunzps7uhb53 | 8,029 | `cache_dir` option in `download_config` in `load_dataset` is not respected | {
"login": "TsXor",
"id": 74352334,
"node_id": "MDQ6VXNlcjc0MzUyMzM0",
"avatar_url": "https://avatars.githubusercontent.com/u/74352334?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/TsXor",
"html_url": "https://github.com/TsXor",
"followers_url": "https://api.github.com/users/TsXor/followers",
"following_url": "https://api.github.com/users/TsXor/following{/other_user}",
"gists_url": "https://api.github.com/users/TsXor/gists{/gist_id}",
"starred_url": "https://api.github.com/users/TsXor/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/TsXor/subscriptions",
"organizations_url": "https://api.github.com/users/TsXor/orgs",
"repos_url": "https://api.github.com/users/TsXor/repos",
"events_url": "https://api.github.com/users/TsXor/events{/privacy}",
"received_events_url": "https://api.github.com/users/TsXor/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | open | false | [] | null | [
"Actually, I used debugger to step into the code of `datasets`, and found `load_dataset` calls into `cached_path`, where it calls into `hf_hub_download` method for `hf://` links:\nhttps://github.com/huggingface/datasets/blob/6e8eaeed4d202b72eff44ba1a9ec5b7d81d2d3e6/src/datasets/utils/file_utils.py#L182-L195\n\nSome... | 2026-02-27T16:05:30 | 2026-02-27T23:34:28 | null | null | NONE | null | null | null | null | ### Describe the bug
Downloaded files still go to `~/.cache/huggingface/hub/` even though I specified the `cache_dir` option in `download_config` in `load_dataset`.
### Steps to reproduce the bug
Run the script below and observe that the downloaded files do not go to the specified directory.
```python
'''
Download the OpenWebText dataset; using a proxy is allowed.
'''
if __name__ == '__main__':
    import argparse
    parser = argparse.ArgumentParser(description='Download TikToken Files')
    parser.add_argument('--output-path', required=True, metavar='PATH', help='output directory')
    parser.add_argument('--mirror', required=False, metavar='URL', help='HF mirror URL, e.g. https://hf-mirror.com')
    parser.add_argument('--proxy', required=False, metavar='URL', help='proxy URL')
    args = parser.parse_args()
else: args = None

import os
import shutil
from pathlib import Path
from typing import cast

if __name__ == '__main__':
    assert args is not None
    output_path = Path(args.output_path).resolve()
    proxy_url = None if args.proxy is None else str(args.proxy)
    mirror_url = None if args.mirror is None else str(args.mirror)

    output_path.mkdir(parents=True, exist_ok=True)
    download_cache_dir = output_path / 'download_cache'
    read_cache_dir = output_path / 'read_cache'
    save_dir = output_path / 'saved'
    complete_mark = output_path / 'completed'

    def clear_cache():
        shutil.rmtree(download_cache_dir)
        shutil.rmtree(read_cache_dir)

    def download_and_save():
        if mirror_url is not None:
            os.environ["HF_ENDPOINT"] = mirror_url
        from datasets import DownloadConfig, load_dataset
        if proxy_url is not None: proxy_dict = { "http": proxy_url, "https": proxy_url }
        else: proxy_dict = None
        dataset = load_dataset(
            'openwebtext',
            cache_dir=str(read_cache_dir),
            download_config=DownloadConfig(cache_dir=download_cache_dir, proxies=proxy_dict)
        )
        dataset.save_to_disk(save_dir)

    if complete_mark.is_file():
        print('OpenWebText is already downloaded')
        clear_cache()
    else:
        download_and_save()
        complete_mark.touch(exist_ok=True)
        clear_cache()
```
### Expected behavior
Downloaded files go to the directory I specified in `download_config`.
### Environment info
```
> uv run datasets-cli env
Copy-and-paste the text below in your GitHub issue.
- `datasets` version: 4.6.0
- Platform: Windows-11-10.0.26200-SP0
- Python version: 3.14.3
- `huggingface_hub` version: 1.5.0
- PyArrow version: 23.0.1
- Pandas version: 3.0.1
- `fsspec` version: 2026.2.0
``` | null | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/8029/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/8029/timeline | null | null | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | {
"blocked_by": 0,
"total_blocked_by": 0,
"blocking": 0,
"total_blocking": 0
} | null | false |
https://api.github.com/repos/huggingface/datasets/issues/8028 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/8028/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/8028/comments | https://api.github.com/repos/huggingface/datasets/issues/8028/events | https://github.com/huggingface/datasets/pull/8028 | 4,000,313,779 | PR_kwDODunzps7GybfA | 8,028 | Fix torchcodec audio decoding to respect 'num_channels' | {
"login": "AsymptotaX",
"id": 2968413,
"node_id": "MDQ6VXNlcjI5Njg0MTM=",
"avatar_url": "https://avatars.githubusercontent.com/u/2968413?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/AsymptotaX",
"html_url": "https://github.com/AsymptotaX",
"followers_url": "https://api.github.com/users/AsymptotaX/followers",
"following_url": "https://api.github.com/users/AsymptotaX/following{/other_user}",
"gists_url": "https://api.github.com/users/AsymptotaX/gists{/gist_id}",
"starred_url": "https://api.github.com/users/AsymptotaX/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/AsymptotaX/subscriptions",
"organizations_url": "https://api.github.com/users/AsymptotaX/orgs",
"repos_url": "https://api.github.com/users/AsymptotaX/repos",
"events_url": "https://api.github.com/users/AsymptotaX/events{/privacy}",
"received_events_url": "https://api.github.com/users/AsymptotaX/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | open | false | [] | null | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_8028). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update."
] | 2026-02-27T10:28:46 | 2026-02-28T10:46:48 | null | null | NONE | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/8028",
"html_url": "https://github.com/huggingface/datasets/pull/8028",
"diff_url": "https://github.com/huggingface/datasets/pull/8028.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/8028.patch",
"merged_at": null
} | Fixes torchcodec audio decoding when `num_channels` is set on `Audio`.
Before this change, `AudioDecoder["array"]` reduced multi-channel audio to mono by averaging channels, so the requested channel behavior was not respected.
With this PR:
- multi-channel decoded arrays are preserved by default;
- mono output is returned only when `num_channels == 1` is explicitly requested.
#### Previous behavior
`ds.cast_column("audio", Audio(num_channels=...))` with `num_channels` set to `None`, `1`, or `2`:
Original file: `(16000, 2)` - stereo ✓
`Audio(num_channels=None)`: `(16000,)` - mono ✗
`Audio(num_channels=2)`: `(16000,)` - mono ✗
`Audio(num_channels=1)`: `(16000,)` - mono ✓
#### New behavior
`num_channels=None` preserves the original number of channels from the source file.
`num_channels=2` preserves/converts to stereo output with shape `(2, num_samples)`.
`num_channels=1` downmixes to mono with shape `(num_samples,)`.
#### Results
**Original file shape** (via soundfile): `(16000, 2)`
HF datasets shape with `num_channels=None`: `(2, 16000)`
HF datasets shape with `num_channels=1`: `(16000,)`
HF datasets shape with `num_channels=2`: `(2, 16000)`
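The channel-handling rule above can be sketched independently of torchcodec (a minimal illustration of the intended behavior, not this PR's actual implementation — the stereo-upmix path for mono sources is omitted):

```python
import numpy as np

def apply_num_channels(decoded: np.ndarray, num_channels=None) -> np.ndarray:
    """decoded has shape (channels, num_samples), as the decoder returns it.
    Downmix to mono only when num_channels == 1 is explicitly requested;
    otherwise keep the channel axis intact."""
    if num_channels == 1:
        return decoded.mean(axis=0)  # average channels -> (num_samples,)
    return decoded

stereo = np.stack([np.ones(16000), np.zeros(16000)])  # fake (2, 16000) stereo
print(apply_num_channels(stereo).shape)      # (2, 16000) preserved
print(apply_num_channels(stereo, 2).shape)   # (2, 16000) preserved
print(apply_num_channels(stereo, 1).shape)   # (16000,) mono
```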
Fixes #8005. | null | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/8028/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/8028/timeline | null | null | null | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/8027 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/8027/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/8027/comments | https://api.github.com/repos/huggingface/datasets/issues/8027/events | https://github.com/huggingface/datasets/pull/8027 | 3,997,153,282 | PR_kwDODunzps7GoGjP | 8,027 | Add `Json()` type | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | open | false | [] | null | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_8027). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update."
] | 2026-02-26T18:38:45 | 2026-02-28T15:34:54 | null | null | MEMBER | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/8027",
"html_url": "https://github.com/huggingface/datasets/pull/8027",
"diff_url": "https://github.com/huggingface/datasets/pull/8027.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/8027.patch",
"merged_at": null
} | A `Json()` type is needed when some fields don't have fixed types, e.g. the "tools" and "tool_calls" fields in conversation + tool-calling datasets like [dataclaw](https://huggingface.co/datasets?other=dataclaw) datasets. Cc @peteromallet, @woctordho, and @Nanbeige for viz
Examples of supported tool-calling / dataclaw datasets:
```python
ds = load_dataset("peteromallet/dataclaw-peteromallet") # happens to not need Json() since tool types are fixed
ds = load_dataset("woctordho/dataclaw")
ds = load_dataset("Nanbeige/ToolMind")
```
The `Json()` type is auto-applied when loading JSON files if mixed types are found. It is applied to the list containing objects with mixed types, so the "tools" and "tool_calls" columns end up with a `List(Json())` type.
It is also possible to define a Dataset with a `Json()` type like this:
```python
>>> from datasets import load_dataset, Dataset, Features, Json, List
>>> example = {"a": [{"key": 0}, {"another-key": "another-type"}]}
>>> features = Features({"a": List(Json())})
>>> ds = Dataset.from_list([example], features=features)
>>> ds[0]
{'a': [{'key': 0}, {'another-key': 'another-type'}]}
```
close https://github.com/huggingface/datasets/issues/7869
close https://github.com/huggingface/datasets/issues/4120
close https://github.com/huggingface/datasets/issues/5827
related to https://github.com/huggingface/datasets/issues/7092
related to https://github.com/huggingface/datasets/issues/6162
related to https://huggingface.co/datasets/PatoFlamejanteTV/LocalLLaMA/discussions/1 | null | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/8027/reactions",
"total_count": 1,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 1,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/8027/timeline | null | null | null | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/8026 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/8026/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/8026/comments | https://api.github.com/repos/huggingface/datasets/issues/8026/events | https://github.com/huggingface/datasets/pull/8026 | 3,989,139,119 | PR_kwDODunzps7GNnFC | 8,026 | set dev version | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | closed | false | [] | null | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_8026). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update."
] | 2026-02-25T12:18:15 | 2026-02-25T12:35:43 | 2026-02-25T12:18:34 | null | MEMBER | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/8026",
"html_url": "https://github.com/huggingface/datasets/pull/8026",
"diff_url": "https://github.com/huggingface/datasets/pull/8026.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/8026.patch",
"merged_at": "2026-02-25T12:18:34"
} | null | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/8026/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/8026/timeline | null | null | null | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/8025 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/8025/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/8025/comments | https://api.github.com/repos/huggingface/datasets/issues/8025/events | https://github.com/huggingface/datasets/pull/8025 | 3,989,040,394 | PR_kwDODunzps7GNSUF | 8,025 | release: 4.6.0 | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | closed | false | [] | null | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_8025). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update."
] | 2026-02-25T11:57:08 | 2026-02-25T11:59:59 | 2026-02-25T11:58:17 | null | MEMBER | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/8025",
"html_url": "https://github.com/huggingface/datasets/pull/8025",
"diff_url": "https://github.com/huggingface/datasets/pull/8025.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/8025.patch",
"merged_at": "2026-02-25T11:58:17"
} | null | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/8025/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/8025/timeline | null | null | null | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/8024 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/8024/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/8024/comments | https://api.github.com/repos/huggingface/datasets/issues/8024/events | https://github.com/huggingface/datasets/pull/8024 | 3,983,877,502 | PR_kwDODunzps7F8QeB | 8,024 | Allow import polars in map() | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | closed | false | [] | null | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_8024). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update."
] | 2026-02-24T13:57:57 | 2026-02-24T14:08:17 | 2026-02-24T14:08:15 | null | MEMBER | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/8024",
"html_url": "https://github.com/huggingface/datasets/pull/8024",
"diff_url": "https://github.com/huggingface/datasets/pull/8024.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/8024.patch",
"merged_at": "2026-02-24T14:08:15"
} | close https://github.com/huggingface/datasets/issues/7988 | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/8024/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/8024/timeline | null | null | null | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/8023 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/8023/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/8023/comments | https://api.github.com/repos/huggingface/datasets/issues/8023/events | https://github.com/huggingface/datasets/pull/8023 | 3,983,829,205 | PR_kwDODunzps7F8GUT | 8,023 | Support empty shard in from_generator | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | closed | false | [] | null | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_8023). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update."
] | 2026-02-24T13:47:07 | 2026-02-24T13:58:13 | 2026-02-24T13:58:11 | null | MEMBER | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/8023",
"html_url": "https://github.com/huggingface/datasets/pull/8023",
"diff_url": "https://github.com/huggingface/datasets/pull/8023.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/8023.patch",
"merged_at": "2026-02-24T13:58:11"
} | close https://github.com/huggingface/datasets/issues/8006 | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/8023/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/8023/timeline | null | null | null | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/8022 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/8022/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/8022/comments | https://api.github.com/repos/huggingface/datasets/issues/8022/events | https://github.com/huggingface/datasets/pull/8022 | 3,983,057,516 | PR_kwDODunzps7F5eYs | 8,022 | fix: resolve base_path conflict between builder_kwargs and config_kwargs | {
"login": "Ryan-J-MAX",
"id": 79621677,
"node_id": "MDQ6VXNlcjc5NjIxNjc3",
"avatar_url": "https://avatars.githubusercontent.com/u/79621677?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Ryan-J-MAX",
"html_url": "https://github.com/Ryan-J-MAX",
"followers_url": "https://api.github.com/users/Ryan-J-MAX/followers",
"following_url": "https://api.github.com/users/Ryan-J-MAX/following{/other_user}",
"gists_url": "https://api.github.com/users/Ryan-J-MAX/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Ryan-J-MAX/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Ryan-J-MAX/subscriptions",
"organizations_url": "https://api.github.com/users/Ryan-J-MAX/orgs",
"repos_url": "https://api.github.com/users/Ryan-J-MAX/repos",
"events_url": "https://api.github.com/users/Ryan-J-MAX/events{/privacy}",
"received_events_url": "https://api.github.com/users/Ryan-J-MAX/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | closed | false | [] | null | [] | 2026-02-24T11:13:06 | 2026-02-24T14:01:23 | 2026-02-24T14:01:23 | null | NONE | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/8022",
"html_url": "https://github.com/huggingface/datasets/pull/8022",
"diff_url": "https://github.com/huggingface/datasets/pull/8022.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/8022.patch",
"merged_at": null
} | ## What does this PR fix?
This PR fixes a bug where passing `base_path` to `load_dataset_builder()` or `load_dataset()` causes a TypeError because both `builder_kwargs` and `config_kwargs` can contain the same key.
## Problem
When users call:
```python
from datasets import load_dataset
load_dataset("rotten_tomatoes", base_path="./sample_data")
```
They get:
```
TypeError: type object got multiple values for keyword argument "base_path"
```
This happens because `base_path` can exist in both `builder_kwargs` (from the dataset module) and `config_kwargs` (from user input), causing a conflict when both are unpacked.
## Solution
- Pop `base_path` from `builder_kwargs` before merging with `config_kwargs`
- Explicitly pass `base_path` to the builder constructor
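The shape of the fix can be sketched in isolation (illustrative only — `load_dataset_builder`'s real merge logic is more involved, and `DummyBuilder` / `make_builder` below are stand-ins, not the library's actual names):

```python
class DummyBuilder:
    """Stand-in for a dataset builder; only stores its kwargs."""
    def __init__(self, base_path=None, name=None):
        self.base_path = base_path
        self.name = name

def make_builder(cls, builder_kwargs, config_kwargs):
    # base_path may appear in BOTH dicts; unpacking both would raise
    # "got multiple values for keyword argument 'base_path'".
    # Pop it from builder_kwargs first, letting the user-supplied
    # value in config_kwargs take precedence.
    base_path = builder_kwargs.pop("base_path", None)
    base_path = config_kwargs.pop("base_path", base_path)
    return cls(base_path=base_path, **builder_kwargs, **config_kwargs)

builder = make_builder(
    DummyBuilder,
    {"base_path": "hf://module-default", "name": "demo"},  # from the dataset module
    {"base_path": "./sample_data"},                        # from the user
)
print(builder.base_path)  # ./sample_data
```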
## Testing
The fix should resolve the issue demonstrated in #4910:
```python
from datasets import load_dataset
load_dataset("rotten_tomatoes", base_path="./sample_data") # Now works!
```
Fixes #4910 | {
"login": "Ryan-J-MAX",
"id": 79621677,
"node_id": "MDQ6VXNlcjc5NjIxNjc3",
"avatar_url": "https://avatars.githubusercontent.com/u/79621677?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Ryan-J-MAX",
"html_url": "https://github.com/Ryan-J-MAX",
"followers_url": "https://api.github.com/users/Ryan-J-MAX/followers",
"following_url": "https://api.github.com/users/Ryan-J-MAX/following{/other_user}",
"gists_url": "https://api.github.com/users/Ryan-J-MAX/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Ryan-J-MAX/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Ryan-J-MAX/subscriptions",
"organizations_url": "https://api.github.com/users/Ryan-J-MAX/orgs",
"repos_url": "https://api.github.com/users/Ryan-J-MAX/repos",
"events_url": "https://api.github.com/users/Ryan-J-MAX/events{/privacy}",
"received_events_url": "https://api.github.com/users/Ryan-J-MAX/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/8022/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/8022/timeline | null | null | null | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/8021 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/8021/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/8021/comments | https://api.github.com/repos/huggingface/datasets/issues/8021/events | https://github.com/huggingface/datasets/pull/8021 | 3,981,426,772 | PR_kwDODunzps7F0FmV | 8,021 | progress bar optional manual control | {
"login": "AnkitAhlawat7742",
"id": 199906670,
"node_id": "U_kgDOC-pVbg",
"avatar_url": "https://avatars.githubusercontent.com/u/199906670?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/AnkitAhlawat7742",
"html_url": "https://github.com/AnkitAhlawat7742",
"followers_url": "https://api.github.com/users/AnkitAhlawat7742/followers",
"following_url": "https://api.github.com/users/AnkitAhlawat7742/following{/other_user}",
"gists_url": "https://api.github.com/users/AnkitAhlawat7742/gists{/gist_id}",
"starred_url": "https://api.github.com/users/AnkitAhlawat7742/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/AnkitAhlawat7742/subscriptions",
"organizations_url": "https://api.github.com/users/AnkitAhlawat7742/orgs",
"repos_url": "https://api.github.com/users/AnkitAhlawat7742/repos",
"events_url": "https://api.github.com/users/AnkitAhlawat7742/events{/privacy}",
"received_events_url": "https://api.github.com/users/AnkitAhlawat7742/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | open | false | [] | null | [
"Hi @lhoestq ,\r\nCan you please trigger the CI CD pipeline ?",
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_8021). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.",
"@lhoestq I have appli... | 2026-02-24T04:22:17 | 2026-03-02T05:08:37 | null | null | CONTRIBUTOR | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/8021",
"html_url": "https://github.com/huggingface/datasets/pull/8021",
"diff_url": "https://github.com/huggingface/datasets/pull/8021.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/8021.patch",
"merged_at": null
} | Fix: https://github.com/huggingface/datasets/issues/7939
## Summary
The progress bar's default behavior is now adjusted based on dataset size: it remains enabled for small datasets and is automatically disabled for larger ones to improve performance. Users retain full control and can explicitly enable or disable the progress bar via the `progress_bar` parameter.
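The decision rule can be sketched as follows (a minimal illustration; the function name and the threshold constant are assumptions based on this PR's description):

```python
def should_enable_progress_bar(num_files, progress_bar=None, threshold=16):
    # An explicit user choice always wins; otherwise auto-disable
    # once the file count exceeds the threshold (16 per this PR).
    if progress_bar is not None:
        return progress_bar
    return num_files <= threshold

print(should_enable_progress_bar(4))          # True  (small dataset)
print(should_enable_progress_bar(100))        # False (auto-disabled)
print(should_enable_progress_bar(100, True))  # True  (explicit override)
```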
## Changes
Modified the logic to disable the progress bar when the number of files exceeds 16. | null | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/8021/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/8021/timeline | null | null | null | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/8020 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/8020/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/8020/comments | https://api.github.com/repos/huggingface/datasets/issues/8020/events | https://github.com/huggingface/datasets/pull/8020 | 3,977,114,662 | PR_kwDODunzps7Fl8mj | 8,020 | feat: add return_file_name support to Parquet packaged builder | {
"login": "dhruvildarji",
"id": 25696982,
"node_id": "MDQ6VXNlcjI1Njk2OTgy",
"avatar_url": "https://avatars.githubusercontent.com/u/25696982?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhruvildarji",
"html_url": "https://github.com/dhruvildarji",
"followers_url": "https://api.github.com/users/dhruvildarji/followers",
"following_url": "https://api.github.com/users/dhruvildarji/following{/other_user}",
"gists_url": "https://api.github.com/users/dhruvildarji/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhruvildarji/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhruvildarji/subscriptions",
"organizations_url": "https://api.github.com/users/dhruvildarji/orgs",
"repos_url": "https://api.github.com/users/dhruvildarji/repos",
"events_url": "https://api.github.com/users/dhruvildarji/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhruvildarji/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | open | false | [] | null | [] | 2026-02-23T09:13:54 | 2026-02-23T09:13:54 | null | null | NONE | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/8020",
"html_url": "https://github.com/huggingface/datasets/pull/8020",
"diff_url": "https://github.com/huggingface/datasets/pull/8020.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/8020.patch",
"merged_at": null
} | Part of #5806. Extends the `return_file_name` feature (implemented for JSON in #7948 and CSV in #8019) to the Parquet packaged builder. When `return_file_name=True`, a `file_name` column containing the source file basename is appended to each batch. | null | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/8020/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/8020/timeline | null | null | null | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/8019 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/8019/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/8019/comments | https://api.github.com/repos/huggingface/datasets/issues/8019/events | https://github.com/huggingface/datasets/pull/8019 | 3,977,073,546 | PR_kwDODunzps7Flz15 | 8,019 | feat: add return_file_name support to CSV packaged builder | {
"login": "dhruvildarji",
"id": 25696982,
"node_id": "MDQ6VXNlcjI1Njk2OTgy",
"avatar_url": "https://avatars.githubusercontent.com/u/25696982?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dhruvildarji",
"html_url": "https://github.com/dhruvildarji",
"followers_url": "https://api.github.com/users/dhruvildarji/followers",
"following_url": "https://api.github.com/users/dhruvildarji/following{/other_user}",
"gists_url": "https://api.github.com/users/dhruvildarji/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dhruvildarji/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhruvildarji/subscriptions",
"organizations_url": "https://api.github.com/users/dhruvildarji/orgs",
"repos_url": "https://api.github.com/users/dhruvildarji/repos",
"events_url": "https://api.github.com/users/dhruvildarji/events{/privacy}",
"received_events_url": "https://api.github.com/users/dhruvildarji/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | open | false | [] | null | [] | 2026-02-23T09:02:31 | 2026-02-23T09:02:31 | null | null | NONE | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/8019",
"html_url": "https://github.com/huggingface/datasets/pull/8019",
"diff_url": "https://github.com/huggingface/datasets/pull/8019.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/8019.patch",
"merged_at": null
} | ## Summary
- Adds an optional `return_file_name: bool = False` parameter to `CsvConfig`
- When `return_file_name=True`, a `file_name` column is appended to every batch in `_generate_tables`, containing the basename of the source CSV file for each row
- Default is `False`, preserving full backward compatibility
## Motivation
Part of #5806. Extends the `file_name` feature (already implemented for the JSON packaged builder in #7948) to the CSV packaged builder. This enables use cases such as resuming training from checkpoints by identifying which data shards have already been consumed.
## Changes
- `src/datasets/packaged_modules/csv/csv.py`: Add `return_file_name: bool = False` field to `CsvConfig`; in `_generate_tables`, append a `file_name` column when the flag is `True`
- `tests/packaged_modules/test_csv.py`: Add three tests covering default behavior (no column), enabled behavior (column present), and correct column values
## Test plan
- [ ] `test_csv_no_file_name_by_default` — verifies `file_name` column is absent by default
- [ ] `test_csv_return_file_name_enabled` — verifies `file_name` column is present when `return_file_name=True`
- [ ] `test_csv_file_name_values` — verifies each row's `file_name` value equals the source file basename
All three tests pass locally.
🤖 Generated with [Claude Code](https://claude.com/claude-code) | null | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/8019/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/8019/timeline | null | null | null | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/8017 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/8017/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/8017/comments | https://api.github.com/repos/huggingface/datasets/issues/8017/events | https://github.com/huggingface/datasets/pull/8017 | 3,975,274,092 | PR_kwDODunzps7FgERR | 8,017 | Improve error message for deprecated dataset scripts with migration guidance | {
"login": "suryanshbt211",
"id": 218385148,
"node_id": "U_kgDODQRK_A",
"avatar_url": "https://avatars.githubusercontent.com/u/218385148?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/suryanshbt211",
"html_url": "https://github.com/suryanshbt211",
"followers_url": "https://api.github.com/users/suryanshbt211/followers",
"following_url": "https://api.github.com/users/suryanshbt211/following{/other_user}",
"gists_url": "https://api.github.com/users/suryanshbt211/gists{/gist_id}",
"starred_url": "https://api.github.com/users/suryanshbt211/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/suryanshbt211/subscriptions",
"organizations_url": "https://api.github.com/users/suryanshbt211/orgs",
"repos_url": "https://api.github.com/users/suryanshbt211/repos",
"events_url": "https://api.github.com/users/suryanshbt211/events{/privacy}",
"received_events_url": "https://api.github.com/users/suryanshbt211/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | open | false | [] | null | [
"Hi team! I’m just checking in on this PR to see if there is any additional information or documentation I can provide to help with the review process. Thanks for your time!"
] | 2026-02-22T19:31:34 | 2026-02-28T14:24:11 | null | null | NONE | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/8017",
"html_url": "https://github.com/huggingface/datasets/pull/8017",
"diff_url": "https://github.com/huggingface/datasets/pull/8017.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/8017.patch",
"merged_at": null
} | Summary
This PR improves the RuntimeError message raised when deprecated local dataset scripts are detected.
Previously, the error message:
"Dataset scripts are no longer supported"
did not provide actionable migration guidance.
Improvements
- Clarifies the new architecture
- Provides explicit migration steps
- Includes link to official documentation
- Adds a focused unit test to validate error clarity
This should reduce confusion for contributors migrating legacy dataset scripts. | null | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/8017/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/8017/timeline | null | null | null | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/8016 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/8016/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/8016/comments | https://api.github.com/repos/huggingface/datasets/issues/8016/events | https://github.com/huggingface/datasets/pull/8016 | 3,974,759,572 | PR_kwDODunzps7FecuC | 8,016 | Remove legacy Sphinx roles from docstrings | {
"login": "Pchambet",
"id": 119671552,
"node_id": "U_kgDOByILAA",
"avatar_url": "https://avatars.githubusercontent.com/u/119671552?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Pchambet",
"html_url": "https://github.com/Pchambet",
"followers_url": "https://api.github.com/users/Pchambet/followers",
"following_url": "https://api.github.com/users/Pchambet/following{/other_user}",
"gists_url": "https://api.github.com/users/Pchambet/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Pchambet/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Pchambet/subscriptions",
"organizations_url": "https://api.github.com/users/Pchambet/orgs",
"repos_url": "https://api.github.com/users/Pchambet/repos",
"events_url": "https://api.github.com/users/Pchambet/events{/privacy}",
"received_events_url": "https://api.github.com/users/Pchambet/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | open | false | [] | null | [] | 2026-02-22T15:06:33 | 2026-02-23T16:10:21 | null | null | NONE | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/8016",
"html_url": "https://github.com/huggingface/datasets/pull/8016",
"diff_url": "https://github.com/huggingface/datasets/pull/8016.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/8016.patch",
"merged_at": null
} | ## Summary
Replace all remaining `:obj:`, `:class:`, `:func:`, and `:meth:` Sphinx cross-reference roles with plain backtick references, aligning with the project's current documentation conventions.
**14 files, 71 replacements. No logic changes.**
## Context
As noted in #5324, the codebase had a mix of old Sphinx syntax and the modern backtick-only format used by HuggingFace's doc builder. @stevhliu [mentioned](https://github.com/huggingface/datasets/issues/5324#issuecomment-1525157369) that the user-facing APIs had been cleaned up, but old syntax was still lingering in non-public APIs. This PR catches the remaining occurrences.
## What changed
| Old syntax | New syntax | Count |
|---|---|---|
| `:obj:\`bool\`` | `` `bool` `` | 18 |
| `:class:\`DatasetInfo\`` | `` `DatasetInfo` `` | 15 |
| `:func:\`fsspec.open\`` | `` `fsspec.open` `` | 9 |
| `:meth:\`...\`` | `` `...` `` | 2 |
| `:obj:\`~pathlib.Path\`` | `` `Path` `` | 8 |
| `:class:\`~download.DownloadConfig\`` | `` `DownloadConfig` `` | 3 |
| *other* | — | 16 |
Tilde references (`~module.Class`) are resolved to their short name (`Class`), matching the existing convention in the codebase.
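A replacement of this kind can be done mechanically. A minimal sketch of the substitution (not the exact script used for this PR):

```python
import re


def strip_sphinx_roles(text: str) -> str:
    """Replace legacy Sphinx roles with plain backtick references."""
    # tilde refs resolve to the short name: :class:`~datasets.DatasetInfo` -> `DatasetInfo`
    text = re.sub(r":(?:obj|class|func|meth):`~(?:[\w.]+\.)?(\w+)`", r"`\1`", text)
    # plain refs keep the full target: :func:`fsspec.open` -> `fsspec.open`
    text = re.sub(r":(?:obj|class|func|meth):`([\w.]+)`", r"`\1`", text)
    return text


print(strip_sphinx_roles("Returns :obj:`bool` or :class:`~datasets.DatasetInfo`."))
# -> Returns `bool` or `DatasetInfo`.
```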
## Files touched
```
src/datasets/arrow_dataset.py
src/datasets/arrow_writer.py
src/datasets/dataset_dict.py
src/datasets/features/features.py
src/datasets/filesystems/compression.py
src/datasets/fingerprint.py
src/datasets/formatting/formatting.py
src/datasets/inspect.py
src/datasets/iterable_dataset.py
src/datasets/load.py
src/datasets/streaming.py
src/datasets/table.py
src/datasets/utils/deprecation_utils.py
src/datasets/utils/file_utils.py
```
Relates to #5324 | null | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/8016/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/8016/timeline | null | null | null | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/8015 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/8015/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/8015/comments | https://api.github.com/repos/huggingface/datasets/issues/8015/events | https://github.com/huggingface/datasets/issues/8015 | 3,974,281,900 | I_kwDODunzps7s4rqs | 8,015 | Iterating a streaming dataset correctly in random order | {
"login": "unrealwill",
"id": 11304248,
"node_id": "MDQ6VXNlcjExMzA0MjQ4",
"avatar_url": "https://avatars.githubusercontent.com/u/11304248?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/unrealwill",
"html_url": "https://github.com/unrealwill",
"followers_url": "https://api.github.com/users/unrealwill/followers",
"following_url": "https://api.github.com/users/unrealwill/following{/other_user}",
"gists_url": "https://api.github.com/users/unrealwill/gists{/gist_id}",
"starred_url": "https://api.github.com/users/unrealwill/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/unrealwill/subscriptions",
"organizations_url": "https://api.github.com/users/unrealwill/orgs",
"repos_url": "https://api.github.com/users/unrealwill/repos",
"events_url": "https://api.github.com/users/unrealwill/events{/privacy}",
"received_events_url": "https://api.github.com/users/unrealwill/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | open | false | [] | null | [
"Hi, reshuffling is actually pretty easy:\n\n```python\n\"\"\" shuffle.py \"\"\"\nfrom datasets import load_dataset\n\nnum_proc = 8 # tweak it here depending on your machine\nseed = 42\nds = load_dataset(\"jackyhate/text-to-image-2M\", num_proc=num_proc)\nds = ds.shuffle(seed=seed)\nds.push_to_hub(\"my-username/te... | 2026-02-22T11:34:06 | 2026-02-26T18:09:27 | null | null | NONE | null | null | null | null | ### Describe the bug
For training to work properly, it is considered good practice in machine learning to sample the dataset randomly (uniformly). I am not sure how the user is expected to do this correctly when the creator of the dataset did a bad job.
https://huggingface.co/docs/hub/datasets-webdataset
```
**Shuffle**
Generally, datasets in WebDataset formats are already shuffled and ready to feed to a DataLoader. But you can still reshuffle the data with WebDataset’s approximate shuffling.
In addition to shuffling the list of shards, WebDataset uses a buffer to shuffle a dataset without any cost to speed:
```
Let me expose the general problem with this specific dataset.
https://huggingface.co/datasets/jackyhate/text-to-image-2M
I'll only consider the single node, single worker case to highlight the problem, which is somewhat hidden and attenuated when training using multiple machines.
Here is the recommended way to load it on the main page
```
# copy pasted from https://huggingface.co/datasets/jackyhate/text-to-image-2M
from datasets import load_dataset

base_url = "..."  # shard URL template from the dataset card, e.g. ".../data_{i:06d}.tar"
num_shards = 46  # Number of webdataset tar files
urls = [base_url.format(i=i) for i in range(num_shards)]
dataset = load_dataset("webdataset", data_files={"train": urls}, split="train", streaming=True)
# Example of iterating through the dataset
for image in dataset:
print(image) # single image in row with associated columns
break
```
The dataset is composed of 46 shards from multiple sources: some shards contain data generated with flux, and others contain data generated with dall-e. There are usually 50000 images per file, except for data_000034.tar and data_000046.tar, which have fewer images because they are the end files of some version of the dataset.
Initially I thought you were using the [webdataset](https://github.com/webdataset/webdataset) library which has at least tried to think about the problem of shuffling the dataset (although poorly documented and with bad default behavior : See https://github.com/webdataset/webdataset/blob/e0953f9bba17b416d5792d5a263b171c266e78be/src/webdataset/mix.py )
But it seems that you are just iterating the shards in order and then iterating the examples of each shard in order.
https://github.com/huggingface/datasets/blob/fe7353a478c1bd25873fad210ac4bdd4bb0c63cc/src/datasets/packaged_modules/webdataset/webdataset.py#L111-L130
If I understand correctly the recommended way to shuffle the data as described in https://huggingface.co/docs/datasets/loading
`ds = ds.shuffle(seed=42, buffer_size=10_000) # shuffles the shards order + uses a shuffle buffer`
or shuffle with a buffer in the dataloader.
This is problematic because the dataset **has not been initially shuffled** by the creator of the dataset, which means that samples are highly correlated. For example, samples coming from the flux shards have a lot more detail than those coming from the dall-e shards, and they are not mixed in the shuffle buffer, which results in the loss function having a periodic pattern per epoch, typical of bad shuffling behavior.
Of course, we could just reshuffle the datasets beforehand, but they are huge and we would need to do it for every version of the dataset.
The random mixing and round-robin mixing available in the webdataset library can also be problematic when the shards have various sizes: if we stop when the first shard is exhausted, we never visit the examples in the later portions of the other shards. Also, if the shards' source datasets are not uniformly balanced, there are periods during which a class is not sampled; for example, once the smallest shard has been exhausted, samples from its class won't be seen until the next epoch.
The proper way is to sample from shards so that they are all exhausted at roughly the same time, i.e. using a smaller sampling probability for smaller shards. But that requires knowing the shard sizes beforehand, and if you also do per-shard shuffling as the webdataset library does, you need to open all shards simultaneously and therefore hold `nshards * buffer_size` examples in memory.
Hopefully I have been clear enough. This problem occurs with webdataset, and probably with every other streamable format you handle, and it could be addressed in a standardized, robust, and bug-free way. The bugs it generates are usually of the silent kind: training appears to work better, but the model generalizes less well because training exploited correlations in the sequence order of the dataset.
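The size-proportional sampling described above can be sketched with a toy generator (a minimal illustration under stated assumptions, not part of the `datasets` API; `shards` is assumed to be a list of iterables with known sizes):

```python
import random


def sample_proportionally(shards, sizes, seed=42):
    """Yield examples from several shard iterators, picking each shard with
    probability proportional to its remaining size, so that all shards are
    exhausted at roughly the same time."""
    rng = random.Random(seed)
    iterators = [iter(s) for s in shards]
    remaining = list(sizes)
    while any(remaining):
        # draw a shard index weighted by how many examples it still holds
        idx = rng.choices(range(len(shards)), weights=remaining, k=1)[0]
        remaining[idx] -= 1
        yield next(iterators[idx])


# toy shards of unequal size: samples interleave instead of the smaller
# shard being exhausted long before the end of the epoch
shards = [["a1", "a2", "a3", "a4"], ["b1", "b2"]]
out = list(sample_proportionally(shards, [4, 2]))
```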
### Steps to reproduce the bug
Observe periodic behavior of loss function when training a simple neural network on the dataset
### Expected behavior
A behavior somewhat similar to a standard uniform shuffle
### Environment info
not relevant | null | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/8015/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/8015/timeline | null | null | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | {
"blocked_by": 0,
"total_blocked_by": 0,
"blocking": 0,
"total_blocking": 0
} | null | false |
https://api.github.com/repos/huggingface/datasets/issues/8014 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/8014/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/8014/comments | https://api.github.com/repos/huggingface/datasets/issues/8014/events | https://github.com/huggingface/datasets/pull/8014 | 3,972,099,737 | PR_kwDODunzps7FV93G | 8,014 | Speed up local 'get_data_patterns' by avoiding repeated recursive scans | {
"login": "AsymptotaX",
"id": 2968413,
"node_id": "MDQ6VXNlcjI5Njg0MTM=",
"avatar_url": "https://avatars.githubusercontent.com/u/2968413?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/AsymptotaX",
"html_url": "https://github.com/AsymptotaX",
"followers_url": "https://api.github.com/users/AsymptotaX/followers",
"following_url": "https://api.github.com/users/AsymptotaX/following{/other_user}",
"gists_url": "https://api.github.com/users/AsymptotaX/gists{/gist_id}",
"starred_url": "https://api.github.com/users/AsymptotaX/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/AsymptotaX/subscriptions",
"organizations_url": "https://api.github.com/users/AsymptotaX/orgs",
"repos_url": "https://api.github.com/users/AsymptotaX/repos",
"events_url": "https://api.github.com/users/AsymptotaX/events{/privacy}",
"received_events_url": "https://api.github.com/users/AsymptotaX/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | open | false | [] | null | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_8014). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.",
"> lgtm ! can you run `make style` to fix the CI before we merge ?\r\n\r\ndone. All good... | 2026-02-21T14:11:40 | 2026-02-25T20:07:50 | null | null | NONE | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/8014",
"html_url": "https://github.com/huggingface/datasets/pull/8014",
"diff_url": "https://github.com/huggingface/datasets/pull/8014.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/8014.patch",
"merged_at": null
} | This PR speeds up `get_data_patterns` for local paths.
### Problem
For `load_dataset("imagefolder", data_dir=...)`, `get_data_patterns` was repeatedly scanning the same local directory tree for many split patterns (`train`, `test`, etc.).
With large folders, this became very slow. This has also been reported in earlier performance discussions and issues.
### Change
In `get_data_patterns` (local paths only):
- scan files once (`resolve_pattern("**", ...)`)
- then match split patterns in memory
Remote paths keep the old behavior.
No API changes.
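The scan-once approach can be illustrated with a small sketch (hypothetical helper name; the real implementation lives in `resolve_pattern` and `get_data_patterns`):

```python
import fnmatch


def match_split_patterns(all_files, split_patterns):
    """Match a pre-listed set of files against split patterns in memory,
    instead of re-scanning the directory tree once per pattern."""
    matches = {}
    for split, pattern in split_patterns.items():
        hits = [f for f in all_files if fnmatch.fnmatch(f, pattern)]
        if hits:
            matches[split] = hits
    return matches


# one (conceptual) recursive listing, reused for every split pattern
all_files = ["data/train/a.jpg", "data/train/b.jpg", "data/test/c.jpg"]
patterns = {"train": "*train*", "test": "*test*"}
print(match_split_patterns(all_files, patterns))
```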
### My Env
- Mac mini (M4), 16 GB RAM
- Python 3.13
- Datasets version: 4.5.0
### Benchmarks
`imagefolder` with local `.jpg` files
`data_dir_only`:
- 10k: `4.35s -> 1.40s` (3.10x)
- 100k: `33.48s -> 9.77s` (3.43x)
- 300k: `160.20s -> 35.81s` (4.47x)
- 1M: `1877.70s -> 164.17s` (11.44x) :fire:
`explicit_data_files`:
- 10k: `0.75s -> 0.79s`
- 100k: `7.23s -> 7.29s`
- 300k: `25.44s -> 24.26s`
- 1M: `115.85s -> 112.66s`
As expected, the improvement is on `data_dir_only` (auto pattern detection path).
Memory usage did not show a consistent regression in these runs and stayed within normal run-to-run variance for this benchmark setup. | null | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/8014/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/8014/timeline | null | null | null | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/8013 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/8013/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/8013/comments | https://api.github.com/repos/huggingface/datasets/issues/8013/events | https://github.com/huggingface/datasets/pull/8013 | 3,964,156,070 | PR_kwDODunzps7E8Ovb | 8,013 | Add space between sentences 'operation.To debug' | {
"login": "valencik",
"id": 5440389,
"node_id": "MDQ6VXNlcjU0NDAzODk=",
"avatar_url": "https://avatars.githubusercontent.com/u/5440389?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/valencik",
"html_url": "https://github.com/valencik",
"followers_url": "https://api.github.com/users/valencik/followers",
"following_url": "https://api.github.com/users/valencik/following{/other_user}",
"gists_url": "https://api.github.com/users/valencik/gists{/gist_id}",
"starred_url": "https://api.github.com/users/valencik/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/valencik/subscriptions",
"organizations_url": "https://api.github.com/users/valencik/orgs",
"repos_url": "https://api.github.com/users/valencik/repos",
"events_url": "https://api.github.com/users/valencik/events{/privacy}",
"received_events_url": "https://api.github.com/users/valencik/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | open | false | [] | null | [] | 2026-02-19T17:31:56 | 2026-02-19T17:31:56 | null | null | NONE | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/8013",
"html_url": "https://github.com/huggingface/datasets/pull/8013",
"diff_url": "https://github.com/huggingface/datasets/pull/8013.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/8013.patch",
"merged_at": null
Previously, the exception message would read "...abruptly died during map _operation.To_ debug the error...".
The `operation.To` part would actually render as a hyperlink when printed to Slack. | null | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/8013/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/8013/timeline | null | null | null | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/8012 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/8012/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/8012/comments | https://api.github.com/repos/huggingface/datasets/issues/8012/events | https://github.com/huggingface/datasets/issues/8012 | 3,963,855,875 | I_kwDODunzps7sQ6QD | 8,012 | HC3 dataset fails to load with datasets>=3 due to legacy script (HC3.py) | {
"login": "navan0",
"id": 15803320,
"node_id": "MDQ6VXNlcjE1ODAzMzIw",
"avatar_url": "https://avatars.githubusercontent.com/u/15803320?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/navan0",
"html_url": "https://github.com/navan0",
"followers_url": "https://api.github.com/users/navan0/followers",
"following_url": "https://api.github.com/users/navan0/following{/other_user}",
"gists_url": "https://api.github.com/users/navan0/gists{/gist_id}",
"starred_url": "https://api.github.com/users/navan0/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/navan0/subscriptions",
"organizations_url": "https://api.github.com/users/navan0/orgs",
"repos_url": "https://api.github.com/users/navan0/repos",
"events_url": "https://api.github.com/users/navan0/events{/privacy}",
"received_events_url": "https://api.github.com/users/navan0/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | closed | false | [] | null | [
"This is not a bug. This is intended behaviour. The only ways to resolve this are\n1. use the old `datasets` versions\n2. load the dataset and upload it to your own repo without a script\n3. raise an issue in the dataset and wait for the owner to update it."
] | 2026-02-19T16:27:51 | 2026-02-20T11:58:20 | 2026-02-20T11:58:20 | null | NONE | null | null | null | null | ### Describe the bug
It looks like the current documentation suggests loading the dataset like this:
```python
load_dataset("Hello-SimpleAI/HC3", split="train")
```
But with `datasets>=3.x`, this raises:
```
RuntimeError: Dataset scripts are no longer supported, but found HC3.py
```
The dataset loads correctly if we specify the parquet export branch:
```
load_dataset(
"Hello-SimpleAI/HC3",
revision="refs/convert/parquet",
split="train"
)
```
Since dataset scripts are no longer supported in newer versions of `datasets`, it might be better to update the documentation to reflect the parquet-based loading method.
Just wanted to flag this so others don’t run into the same confusion.
### Steps to reproduce the bug
Using `datasets>=3.x`:
```python
from datasets import load_dataset
ds = load_dataset("Hello-SimpleAI/HC3", split="train")
```
### Error
```
RuntimeError: Dataset scripts are no longer supported, but found HC3.py
```
This appears to happen because the repository still contains a legacy loading script (`HC3.py`), and recent versions of `datasets` no longer support dataset scripts.
### Working alternative
The dataset loads correctly when specifying the parquet export revision:
```python
from datasets import load_dataset
ds = load_dataset(
"Hello-SimpleAI/HC3",
revision="refs/convert/parquet",
split="train"
)
print(ds)
```
This successfully downloads and loads the dataset.
### Expected behavior
The dataset should load successfully using the example shown in the documentation:
```python
load_dataset("Hello-SimpleAI/HC3", split="train")
```
without requiring a specific `revision` parameter.
### Environment info
- `datasets` version: 4.0.0
- Platform: Linux-6.6.105+-x86_64-with-glibc2.35
- Python version: 3.12.12
- `huggingface_hub` version: 1.4.1
- PyArrow version: 18.1.0
- Pandas version: 2.2.2
- `fsspec` version: 2025.3.0 | {
"login": "navan0",
"id": 15803320,
"node_id": "MDQ6VXNlcjE1ODAzMzIw",
"avatar_url": "https://avatars.githubusercontent.com/u/15803320?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/navan0",
"html_url": "https://github.com/navan0",
"followers_url": "https://api.github.com/users/navan0/followers",
"following_url": "https://api.github.com/users/navan0/following{/other_user}",
"gists_url": "https://api.github.com/users/navan0/gists{/gist_id}",
"starred_url": "https://api.github.com/users/navan0/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/navan0/subscriptions",
"organizations_url": "https://api.github.com/users/navan0/orgs",
"repos_url": "https://api.github.com/users/navan0/repos",
"events_url": "https://api.github.com/users/navan0/events{/privacy}",
"received_events_url": "https://api.github.com/users/navan0/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/8012/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/8012/timeline | null | completed | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | {
"blocked_by": 0,
"total_blocked_by": 0,
"blocking": 0,
"total_blocking": 0
} | null | false |
https://api.github.com/repos/huggingface/datasets/issues/8011 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/8011/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/8011/comments | https://api.github.com/repos/huggingface/datasets/issues/8011/events | https://github.com/huggingface/datasets/pull/8011 | 3,963,479,811 | PR_kwDODunzps7E59Pn | 8,011 | fix: condition for file_name feature check | {
"login": "Koosh0610",
"id": 119339573,
"node_id": "U_kgDOBxz6NQ",
"avatar_url": "https://avatars.githubusercontent.com/u/119339573?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Koosh0610",
"html_url": "https://github.com/Koosh0610",
"followers_url": "https://api.github.com/users/Koosh0610/followers",
"following_url": "https://api.github.com/users/Koosh0610/following{/other_user}",
"gists_url": "https://api.github.com/users/Koosh0610/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Koosh0610/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Koosh0610/subscriptions",
"organizations_url": "https://api.github.com/users/Koosh0610/orgs",
"repos_url": "https://api.github.com/users/Koosh0610/repos",
"events_url": "https://api.github.com/users/Koosh0610/events{/privacy}",
"received_events_url": "https://api.github.com/users/Koosh0610/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | closed | false | [] | null | [
"Hi @lhoestq, have proposed a fix for [#8010](https://github.com/huggingface/datasets/issues/8010)"
] | 2026-02-19T15:12:31 | 2026-02-19T15:29:29 | 2026-02-19T15:29:29 | null | NONE | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/8011",
"html_url": "https://github.com/huggingface/datasets/pull/8011",
"diff_url": "https://github.com/huggingface/datasets/pull/8011.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/8011.patch",
"merged_at": null
} | ## Description
Accept `Value("large_string")` in addition to `Value("string")` for the `file_name` (or `*_file_name`) metadata key in folder-based builders (e.g. `audiofolder`).
## Problem
When loading a dataset with `load_dataset("audiofolder", data_dir=...)` and a `metadata.csv` file, the metadata schema is inferred via pandas to Arrow. For string columns, Arrow often uses the `large_string` type, so `Features.from_arrow_schema()` yields `Value("large_string")` for the `file_name` column. The builder only checks for `Value("string")`, so validation fails with:
`ValueError: file_name or *_file_name must be present as dictionary key (with type string) in metadata files`
Users then have to convert metadata to JSONL (which produces Arrow `string`) or work around the loader.
## Solution
Treat `Value("large_string")` as valid for the `file_name` / `*_file_name` key, since it is still a string type and behaves the same for resolution of audio/image paths. This matches the fact that when metadata is loaded from CSV (via pandas and then converted to Arrow), string columns are often inferred as Arrow `large_string`, so accepting it keeps CSV metadata working without format changes. | {
"login": "Koosh0610",
"id": 119339573,
"node_id": "U_kgDOBxz6NQ",
"avatar_url": "https://avatars.githubusercontent.com/u/119339573?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Koosh0610",
"html_url": "https://github.com/Koosh0610",
"followers_url": "https://api.github.com/users/Koosh0610/followers",
"following_url": "https://api.github.com/users/Koosh0610/following{/other_user}",
"gists_url": "https://api.github.com/users/Koosh0610/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Koosh0610/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Koosh0610/subscriptions",
"organizations_url": "https://api.github.com/users/Koosh0610/orgs",
"repos_url": "https://api.github.com/users/Koosh0610/repos",
"events_url": "https://api.github.com/users/Koosh0610/events{/privacy}",
"received_events_url": "https://api.github.com/users/Koosh0610/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/8011/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/8011/timeline | null | null | null | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/8010 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/8010/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/8010/comments | https://api.github.com/repos/huggingface/datasets/issues/8010/events | https://github.com/huggingface/datasets/issues/8010 | 3,963,448,558 | I_kwDODunzps7sPWzu | 8,010 | audiofolder / folder-based loader rejects metadata.csv when file_name column is inferred as large_string | {
"login": "Koosh0610",
"id": 119339573,
"node_id": "U_kgDOBxz6NQ",
"avatar_url": "https://avatars.githubusercontent.com/u/119339573?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Koosh0610",
"html_url": "https://github.com/Koosh0610",
"followers_url": "https://api.github.com/users/Koosh0610/followers",
"following_url": "https://api.github.com/users/Koosh0610/following{/other_user}",
"gists_url": "https://api.github.com/users/Koosh0610/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Koosh0610/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Koosh0610/subscriptions",
"organizations_url": "https://api.github.com/users/Koosh0610/orgs",
"repos_url": "https://api.github.com/users/Koosh0610/repos",
"events_url": "https://api.github.com/users/Koosh0610/events{/privacy}",
"received_events_url": "https://api.github.com/users/Koosh0610/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | closed | false | [] | null | [
"Proposing a PR for this [here](https://github.com/huggingface/datasets/pull/8011)\n\nPS: I didn't find any contributing documentation. Please recommend any changes in the issue or PR if needed. Thank you!"
] | 2026-02-19T15:06:09 | 2026-02-19T15:29:41 | 2026-02-19T15:29:41 | null | NONE | null | null | null | null | ### Describe the bug
When loading a dataset with `load_dataset("audiofolder", data_dir=...)` and a `metadata.csv` (with a `file_name` column), loading can fail with:
`ValueError: file_name or *_file_name must be present as dictionary key (with type string) in metadata files`
even when the CSV clearly has a `file_name` column with string values.
**Cause**
In `folder_based_builder.py`, the metadata schema is inferred from the CSV via pandas to Arrow. For string columns, Arrow often uses the `large_string` type. The check only accepts `Value("string")` and does not accept `Value("large_string")`, so validation fails even though the column is a valid string column.
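A minimal sketch of the kind of relaxed check that would accept both Arrow string flavors (the names below are illustrative, not the actual `folder_based_builder` internals, which compare `datasets` feature types rather than plain strings):

```python
# Illustrative sketch only: the real validation lives in
# folder_based_builder.py and inspects Value feature types.
VALID_FILE_NAME_DTYPES = {"string", "large_string"}

def is_valid_file_name_dtype(dtype: str) -> bool:
    # pandas -> Arrow inference often yields "large_string" for text
    # columns, so both Arrow string flavors should pass validation.
    return dtype in VALID_FILE_NAME_DTYPES

print(is_valid_file_name_dtype("large_string"))  # True
print(is_valid_file_name_dtype("int64"))         # False
```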
### Steps to reproduce the bug
1. Create a folder with:
- `metadata.csv` with columns: `file_name`, `transcript`, `duration` (and audio `.wav` files referenced by `file_name`).
2. Run:
```
from datasets import load_dataset
ds = load_dataset("audiofolder", data_dir="/path/to/folder")
```
### Expected behavior
Metadata CSV with a `file_name` column (inferred as either `string` or `large_string`) should be accepted, since both represent string data.
### Environment info
- `datasets` version: 4.5.0
- Platform: Linux-6.8.0-100-generic-x86_64-with-glibc2.39
- Python version: 3.12.3
- `huggingface_hub` version: 0.36.0
- PyArrow version: 23.0.0
- Pandas version: 3.0.0
- `fsspec` version: 2025.10.0 | {
"login": "Koosh0610",
"id": 119339573,
"node_id": "U_kgDOBxz6NQ",
"avatar_url": "https://avatars.githubusercontent.com/u/119339573?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Koosh0610",
"html_url": "https://github.com/Koosh0610",
"followers_url": "https://api.github.com/users/Koosh0610/followers",
"following_url": "https://api.github.com/users/Koosh0610/following{/other_user}",
"gists_url": "https://api.github.com/users/Koosh0610/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Koosh0610/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Koosh0610/subscriptions",
"organizations_url": "https://api.github.com/users/Koosh0610/orgs",
"repos_url": "https://api.github.com/users/Koosh0610/repos",
"events_url": "https://api.github.com/users/Koosh0610/events{/privacy}",
"received_events_url": "https://api.github.com/users/Koosh0610/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/8010/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/8010/timeline | null | completed | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | {
"blocked_by": 0,
"total_blocked_by": 0,
"blocking": 0,
"total_blocking": 0
} | null | false |
https://api.github.com/repos/huggingface/datasets/issues/8009 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/8009/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/8009/comments | https://api.github.com/repos/huggingface/datasets/issues/8009/events | https://github.com/huggingface/datasets/pull/8009 | 3,958,228,652 | PR_kwDODunzps7Eo7KV | 8,009 | More IterableDataset.from_x methods and docs and polars.Lazyframe support | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | closed | false | [] | null | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_8009). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update."
] | 2026-02-18T14:48:39 | 2026-02-18T15:18:17 | 2026-02-18T15:18:14 | null | MEMBER | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/8009",
"html_url": "https://github.com/huggingface/datasets/pull/8009",
"diff_url": "https://github.com/huggingface/datasets/pull/8009.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/8009.patch",
"merged_at": "2026-02-18T15:18:14"
} | null | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/8009/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/8009/timeline | null | null | null | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/8008 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/8008/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/8008/comments | https://api.github.com/repos/huggingface/datasets/issues/8008/events | https://github.com/huggingface/datasets/pull/8008 | 3,948,857,401 | PR_kwDODunzps7EJ_oM | 8,008 | fix: prevent duplicate keywords in load_dataset_builder (#4910) | {
"login": "DhyeyTeraiya",
"id": 190333801,
"node_id": "U_kgDOC1hDaQ",
"avatar_url": "https://avatars.githubusercontent.com/u/190333801?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/DhyeyTeraiya",
"html_url": "https://github.com/DhyeyTeraiya",
"followers_url": "https://api.github.com/users/DhyeyTeraiya/followers",
"following_url": "https://api.github.com/users/DhyeyTeraiya/following{/other_user}",
"gists_url": "https://api.github.com/users/DhyeyTeraiya/gists{/gist_id}",
"starred_url": "https://api.github.com/users/DhyeyTeraiya/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/DhyeyTeraiya/subscriptions",
"organizations_url": "https://api.github.com/users/DhyeyTeraiya/orgs",
"repos_url": "https://api.github.com/users/DhyeyTeraiya/repos",
"events_url": "https://api.github.com/users/DhyeyTeraiya/events{/privacy}",
"received_events_url": "https://api.github.com/users/DhyeyTeraiya/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | open | false | [] | null | [] | 2026-02-16T17:42:30 | 2026-02-16T17:42:30 | null | null | NONE | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/8008",
"html_url": "https://github.com/huggingface/datasets/pull/8008",
"diff_url": "https://github.com/huggingface/datasets/pull/8008.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/8008.patch",
"merged_at": null
} | null | null | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/8008/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/8008/timeline | null | null | null | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/8007 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/8007/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/8007/comments | https://api.github.com/repos/huggingface/datasets/issues/8007/events | https://github.com/huggingface/datasets/issues/8007 | 3,946,695,329 | I_kwDODunzps7rPcqh | 8,007 | Add option for loading audio with video | {
"login": "Samoed",
"id": 36135455,
"node_id": "MDQ6VXNlcjM2MTM1NDU1",
"avatar_url": "https://avatars.githubusercontent.com/u/36135455?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Samoed",
"html_url": "https://github.com/Samoed",
"followers_url": "https://api.github.com/users/Samoed/followers",
"following_url": "https://api.github.com/users/Samoed/following{/other_user}",
"gists_url": "https://api.github.com/users/Samoed/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Samoed/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Samoed/subscriptions",
"organizations_url": "https://api.github.com/users/Samoed/orgs",
"repos_url": "https://api.github.com/users/Samoed/repos",
"events_url": "https://api.github.com/users/Samoed/events{/privacy}",
"received_events_url": "https://api.github.com/users/Samoed/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | open | false | [] | null | [
"Hi,\nI’m interested in working on this issue.\n\nFrom what I understand, audio decoding would involve an AudioDecoder, and previous discussions have suggested handling it separately. I’d like to explore whether integrating audio extraction in relation to the current video decoding workflow would be appropriate, or... | 2026-02-16T09:11:02 | 2026-02-16T15:56:21 | null | null | NONE | null | null | null | null | ### Describe the bug
Currently, `torchcodec` doesn't allow extracting `Audio` from `Video` (https://github.com/meta-pytorch/torchcodec/issues/1158), so when I upload videos with audio to the Hub using `videofolder`, it is not possible to retrieve the audio from them. Perhaps `VideoDecoder` could be extended with an `audio` parameter to expose this information.
### Steps to reproduce the bug
```python
from datasets import load_dataset
test_ds = load_dataset("videofolder", data_dir="/path/to/video")
# uploaded version
# test_ds = load_dataset("Samoed/testds")
test_ds["train"][1]["video"]
# <torchcodec.decoders._video_decoder.VideoDecoder
```
### Expected behavior
Some option to retrieve audio information.
### Environment info
```
datasets==4.5.0
``` | null | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/8007/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/8007/timeline | null | null | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | {
"blocked_by": 0,
"total_blocked_by": 0,
"blocking": 0,
"total_blocking": 0
} | null | false |
https://api.github.com/repos/huggingface/datasets/issues/8006 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/8006/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/8006/comments | https://api.github.com/repos/huggingface/datasets/issues/8006/events | https://github.com/huggingface/datasets/issues/8006 | 3,944,394,074 | I_kwDODunzps7rGq1a | 8,006 | Regression: from_generator crashes if one generator call returns no results | {
"login": "hartmans",
"id": 53510,
"node_id": "MDQ6VXNlcjUzNTEw",
"avatar_url": "https://avatars.githubusercontent.com/u/53510?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/hartmans",
"html_url": "https://github.com/hartmans",
"followers_url": "https://api.github.com/users/hartmans/followers",
"following_url": "https://api.github.com/users/hartmans/following{/other_user}",
"gists_url": "https://api.github.com/users/hartmans/gists{/gist_id}",
"starred_url": "https://api.github.com/users/hartmans/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/hartmans/subscriptions",
"organizations_url": "https://api.github.com/users/hartmans/orgs",
"repos_url": "https://api.github.com/users/hartmans/repos",
"events_url": "https://api.github.com/users/hartmans/events{/privacy}",
"received_events_url": "https://api.github.com/users/hartmans/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | closed | false | [] | null | [] | 2026-02-15T16:16:48 | 2026-02-24T13:58:12 | 2026-02-24T13:58:12 | null | CONTRIBUTOR | null | null | null | null | ### Describe the bug
`Dataset.from_generator` splits any list kwarg to the generator and makes a separate call to the generator for each member of the list, even in the `num_proc=1` case.
I have a generator that processes a number of files, filtering them and producing examples. It used to be the case that things worked fine if one of the files produced no examples.
It doesn't work any more.
I believe that commit 2ed6f72d88c0f37b75751cd0cd41a485439e16c9 is responsible.
What ends up happening is that the code tries to update input shard lengths but the list of shard lengths is still empty.
### Steps to reproduce the bug
```python
import datasets
def gen_examples(num_examples, fingerprint):
assert len(num_examples) == 1
for x in range(num_examples[0]):
yield dict(feature=x)
ds = datasets.Dataset.from_generator(gen_examples, gen_kwargs=dict(num_examples=[0,1],
fingerprint=str(id([]))))
```
Traceback looks like
```
Generating train split: 0 examples [00:00, ? examples/s]
Generating train split: 0 examples [00:00, ? examples/s]
Traceback (most recent call last):
File "/usr/local/lib/python3.13/dist-packages/datasets/builder.py", line 1598, in _prepare_split_single
original_shard_lengths[original_shard_id] += 1
~~~~~~~~~~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^
IndexError: list index out of range
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "/srv/models/zendegi_ai/sexpositive-sft/ai-tools/training_tools/foo.py", line 10, in <module>
ds = datasets.Dataset.from_generator(gen_examples, gen_kwargs=dict(num_examples=[0,1],
fingerprint=str(id([]))))
File "/usr/local/lib/python3.13/dist-packages/datasets/arrow_dataset.py", line 1204, in from_generator
).read()
~~~~^^
File "/usr/local/lib/python3.13/dist-packages/datasets/io/generator.py", line 52, in read
self.builder.download_and_prepare(
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~^
download_config=download_config,
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
...<3 lines>...
num_proc=self.num_proc,
^^^^^^^^^^^^^^^^^^^^^^^
)
^
File "/usr/local/lib/python3.13/dist-packages/datasets/builder.py", line 884, in download_and_prepare
self._download_and_prepare(
~~~~~~~~~~~~~~~~~~~~~~~~~~^
dl_manager=dl_manager,
^^^^^^^^^^^^^^^^^^^^^^
...<2 lines>...
**download_and_prepare_kwargs,
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
)
^
File "/usr/local/lib/python3.13/dist-packages/datasets/builder.py", line 1634, in _download_and_prepare
super()._download_and_prepare(
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~^
dl_manager,
^^^^^^^^^^^
verification_mode,
^^^^^^^^^^^^^^^^^^
**prepare_splits_kwargs,
^^^^^^^^^^^^^^^^^^^^^^^^
)
^
File "/usr/local/lib/python3.13/dist-packages/datasets/builder.py", line 947, in _download_and_prepare
self._prepare_split(split_generator, **prepare_split_kwargs)
~~~~~~~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.13/dist-packages/datasets/builder.py", line 1438, in _prepare_split
for job_id, done, content in self._prepare_split_single(
~~~~~~~~~~~~~~~~~~~~~~~~~~^
gen_kwargs=gen_kwargs, job_id=job_id, **_prepare_split_args
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
):
^
File "/usr/local/lib/python3.13/dist-packages/datasets/builder.py", line 1617, in _prepare_split_single
raise DatasetGenerationError("An error occurred while generating the dataset") from e
datasets.exceptions.DatasetGenerationError: An error occurred while generating the dataset
```
### Expected behavior
I expect a dataset with one example.
In my case it's fairly easy to work around this. Ideally this behavior would be supported.
If it's not going to be supported, catching the situation and returning a "generators must always return at least one example" error would have saved me hours of debugging.
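One hedged workaround sketch along those lines: filter out shards that are known to produce no examples before building `gen_kwargs` (purely illustrative; in practice the per-shard example counts may not be known up front, in which case a pre-scan of the files would be needed):

```python
# Illustrative only: drop inputs that would yield zero examples so that
# every generator call produces at least one row.
def drop_empty_shards(shard_example_counts):
    return [n for n in shard_example_counts if n > 0]

print(drop_empty_shards([0, 1]))  # [1]
```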
### Environment info
- `datasets` version: 4.5.0
- Platform: Linux-6.18.5+deb14-amd64-x86_64-with-glibc2.41
- Python version: 3.13.5
- `huggingface_hub` version: 1.4.1
- PyArrow version: 23.0.0
- Pandas version: 3.0.0
- `fsspec` version: 2025.10.0
| {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/8006/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/8006/timeline | null | completed | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | {
"blocked_by": 0,
"total_blocked_by": 0,
"blocking": 0,
"total_blocking": 0
} | null | false |
https://api.github.com/repos/huggingface/datasets/issues/8005 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/8005/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/8005/comments | https://api.github.com/repos/huggingface/datasets/issues/8005/events | https://github.com/huggingface/datasets/issues/8005 | 3,941,908,297 | I_kwDODunzps7q9L9J | 8,005 | Multi-channel audio is automatically cast to mono, num_channels is ignored | {
"login": "ZackHodari",
"id": 9717211,
"node_id": "MDQ6VXNlcjk3MTcyMTE=",
"avatar_url": "https://avatars.githubusercontent.com/u/9717211?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ZackHodari",
"html_url": "https://github.com/ZackHodari",
"followers_url": "https://api.github.com/users/ZackHodari/followers",
"following_url": "https://api.github.com/users/ZackHodari/following{/other_user}",
"gists_url": "https://api.github.com/users/ZackHodari/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ZackHodari/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ZackHodari/subscriptions",
"organizations_url": "https://api.github.com/users/ZackHodari/orgs",
"repos_url": "https://api.github.com/users/ZackHodari/repos",
"events_url": "https://api.github.com/users/ZackHodari/events{/privacy}",
"received_events_url": "https://api.github.com/users/ZackHodari/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | open | false | [] | null | [
"**Workaround**\nDirectly load audio using torchcodec, this is what datasets does under the hood (but doesn't maintain multi-channel)\n\n```python\nimport torchcodec\n\ndecoder = torchcodec.decoders.AudioDecoder(audio[\"bytes\"])\naudio_samples = decoder.get_all_samples()\n\naudio = audio_samples.data.numpy()\nsamp... | 2026-02-14T17:28:03 | 2026-02-25T15:17:18 | null | null | NONE | null | null | null | null | ### Describe the bug
The `num_channels` parameter in `datasets.Audio()` is documented to preserve stereo channels when set to `None` (preserve original) or `2` (explicit stereo), but it currently downmixes all audio to mono regardless of this setting.
### Steps to reproduce the bug
```python
import numpy as np
import soundfile as sf
import tempfile
from datasets import Dataset, Audio
# Create a stereo audio file
sample_rate = 16000
duration = 1.0
num_samples = int(sample_rate * duration)
left_channel = np.sin(2 * np.pi * 440 * np.linspace(0, duration, num_samples))
right_channel = np.sin(2 * np.pi * 880 * np.linspace(0, duration, num_samples))
stereo_audio = np.stack([left_channel, right_channel], axis=1).astype(np.float32)
temp_file = tempfile.NamedTemporaryFile(suffix=".wav", delete=False)
sf.write(temp_file.name, stereo_audio, sample_rate)
# Create HuggingFace dataset
dataset_dict = {"audio": [temp_file.name]}
ds = Dataset.from_dict(dataset_dict)
# Test with num_channels=2
ds_stereo = ds.cast_column("audio", Audio(num_channels=2))
audio_data = ds_stereo[0]["audio"]
print(f"Original file shape (via soundfile): {sf.read(temp_file.name)[0].shape}")
# Output: (16000, 2) ✓ Stereo
print(f"HF datasets shape with num_channels=2: {audio_data['array'].shape}")
# Output: (16000,) ✗ Mono (should be (2, 16000))
```
**Result:**
- Original file: `(16000, 2)` - stereo ✓
- `Audio(num_channels=None)`: `(16000,)` - mono ✗
- `Audio(num_channels=2)`: `(16000,)` - mono ✗
- `Audio(num_channels=1)`: `(16000,)` - mono ✓
### Expected behavior
According to the documentation, `Audio` decoding should return samples with shape `(num_channels, num_samples)`:
- `num_channels=None` should preserve the original number of channels from the source file
- `num_channels=2` should preserve/convert to stereo output with shape `(2, num_samples)`
- `num_channels=1` should downmix to mono with shape `(num_samples,)`
**Actual Behavior**
All `num_channels` settings produce mono output with shape `(num_samples,)`, even when the source audio file is stereo.
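To make the shape mismatch concrete, here is a small sketch of the layouts involved, using plain Python lists in place of the decoded arrays (the averaging downmix is an assumption about how mono conversion works, not verified against the implementation):

```python
# Illustrative shapes only: a 4-sample stereo file in (num_samples, 2) layout.
num_samples = 4
stereo = [[0.1, 0.2] for _ in range(num_samples)]

# Expected for num_channels=2: channels-first layout, shape (2, num_samples).
channels_first = [list(col) for col in zip(*stereo)]
# Observed behavior: downmix to mono, shape (num_samples,).
mono = [(left + right) / 2 for left, right in stereo]

print(len(channels_first), len(channels_first[0]))  # 2 4
print(len(mono))                                    # 4
```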
### Environment info
OS: macOS / Linux
Python 3.10.19
```
datasets 4.4.2
torchcodec 0.10.0
``` | null | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/8005/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/8005/timeline | null | null | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | {
"blocked_by": 0,
"total_blocked_by": 0,
"blocking": 0,
"total_blocking": 0
} | null | false |
https://api.github.com/repos/huggingface/datasets/issues/8004 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/8004/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/8004/comments | https://api.github.com/repos/huggingface/datasets/issues/8004/events | https://github.com/huggingface/datasets/pull/8004 | 3,939,675,475 | PR_kwDODunzps7DsCBM | 8,004 | fix save_to_disk/load_from_disk with pathlib.Path input | {
"login": "Mr-Neutr0n",
"id": 64578610,
"node_id": "MDQ6VXNlcjY0NTc4NjEw",
"avatar_url": "https://avatars.githubusercontent.com/u/64578610?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Mr-Neutr0n",
"html_url": "https://github.com/Mr-Neutr0n",
"followers_url": "https://api.github.com/users/Mr-Neutr0n/followers",
"following_url": "https://api.github.com/users/Mr-Neutr0n/following{/other_user}",
"gists_url": "https://api.github.com/users/Mr-Neutr0n/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Mr-Neutr0n/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Mr-Neutr0n/subscriptions",
"organizations_url": "https://api.github.com/users/Mr-Neutr0n/orgs",
"repos_url": "https://api.github.com/users/Mr-Neutr0n/repos",
"events_url": "https://api.github.com/users/Mr-Neutr0n/events{/privacy}",
"received_events_url": "https://api.github.com/users/Mr-Neutr0n/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | open | false | [] | null | [] | 2026-02-13T23:03:55 | 2026-02-13T23:03:55 | null | null | NONE | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/8004",
"html_url": "https://github.com/huggingface/datasets/pull/8004",
"diff_url": "https://github.com/huggingface/datasets/pull/8004.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/8004.patch",
"merged_at": null
} | Since #6704, `save_to_disk` and `load_from_disk` use `fsspec.core.url_to_fs` which expects a `str`, but both methods accept `PathLike` (which includes `pathlib.Path`). Passing a `Path` object raises a `TypeError` because `url_to_fs` can't handle it.
Fixed by converting the path argument with `os.fspath()` before handing it off to `url_to_fs`. This affects all five call sites across `Dataset`, `DatasetDict`, and the standalone `load_from_disk` function.
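The conversion can be sketched as follows (the helper name is illustrative; the actual fix applies `os.fspath()` inline at each call site before `url_to_fs`):

```python
import os
import pathlib

def to_fsspec_path(path):
    # os.fspath is a no-op for str and unwraps os.PathLike objects
    # (including pathlib.Path), which is what url_to_fs needs.
    return os.fspath(path)

print(to_fsspec_path(pathlib.Path("some") / "dataset_dir"))
print(to_fsspec_path("some/dataset_dir"))
```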
Fixes #6829 | null | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/8004/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/8004/timeline | null | null | null | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/8003 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/8003/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/8003/comments | https://api.github.com/repos/huggingface/datasets/issues/8003/events | https://github.com/huggingface/datasets/pull/8003 | 3,938,501,557 | PR_kwDODunzps7DoBmp | 8,003 | very basic support for more hf urls | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | closed | false | [] | null | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_8003). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update."
] | 2026-02-13T18:30:56 | 2026-02-13T18:47:58 | 2026-02-13T18:47:57 | null | MEMBER | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/8003",
"html_url": "https://github.com/huggingface/datasets/pull/8003",
"diff_url": "https://github.com/huggingface/datasets/pull/8003.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/8003.patch",
"merged_at": "2026-02-13T18:47:57"
} | null | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/8003/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/8003/timeline | null | null | null | null | null | true |