| column | type |
|---|---|
| url | string |
| repository_url | string |
| labels_url | string |
| comments_url | string |
| events_url | string |
| html_url | string |
| id | int64 |
| node_id | string |
| number | int64 |
| title | string |
| user | dict |
| labels | list |
| state | string |
| locked | bool |
| assignee | dict |
| assignees | list |
| milestone | null |
| comments | list |
| created_at | timestamp[s] |
| updated_at | timestamp[s] |
| closed_at | timestamp[s] |
| author_association | string |
| type | null |
| active_lock_reason | null |
| sub_issues_summary | dict |
| issue_dependencies_summary | dict |
| body | string |
| closed_by | dict |
| reactions | dict |
| timeline_url | string |
| performed_via_github_app | null |
| state_reason | string |
| draft | bool |
| pull_request | dict |
| is_pull_request | bool |
https://api.github.com/repos/huggingface/datasets/issues/7999
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/7999/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/7999/comments
|
https://api.github.com/repos/huggingface/datasets/issues/7999/events
|
https://github.com/huggingface/datasets/issues/7999
| 3,915,367,642
|
I_kwDODunzps7pX8Ta
| 7,999
|
Too many dataloader workers: 4 (max is dataset.num_shards=3). Stopping 1 dataloader workers.
|
{
"login": "D222097",
"id": 50061868,
"node_id": "MDQ6VXNlcjUwMDYxODY4",
"avatar_url": "https://avatars.githubusercontent.com/u/50061868?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/D222097",
"html_url": "https://github.com/D222097",
"followers_url": "https://api.github.com/users/D222097/followers",
"following_url": "https://api.github.com/users/D222097/following{/other_user}",
"gists_url": "https://api.github.com/users/D222097/gists{/gist_id}",
"starred_url": "https://api.github.com/users/D222097/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/D222097/subscriptions",
"organizations_url": "https://api.github.com/users/D222097/orgs",
"repos_url": "https://api.github.com/users/D222097/repos",
"events_url": "https://api.github.com/users/D222097/events{/privacy}",
"received_events_url": "https://api.github.com/users/D222097/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
open
| false
| null |
[] | null |
[] | 2026-02-09T09:26:37
| 2026-02-09T10:13:25
| null |
NONE
| null | null |
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
|
{
"blocked_by": 0,
"total_blocked_by": 0,
"blocking": 0,
"total_blocking": 0
}
|
Hi!
I'm working on training with a large-scale dataset (100+ Parquet files) using lazy loading, and I'm struggling to understand and optimize the num_shards setting in the lerobot repo (streaming_datasets.py):
```python
from datasets import load_dataset
self.hf_dataset: datasets.IterableDataset = load_dataset(
self.repo_id if not self.streaming_from_local else str(self.root),
split="train",
streaming=self.streaming,
data_files="data/*/*.parquet",
revision=self.revision,
)
self.num_shards = min(self.hf_dataset.num_shards, max_num_shards)
```
```python
dataloader = torch.utils.data.DataLoader(
datasets[sub_idx],
num_workers=datasets[sub_idx].num_shards, #cfg.num_workers,
batch_size=cfg.batch_size,
shuffle=shuffle and not cfg.dataset.streaming,
sampler=sampler,
collate_fn=FlowerDataCollator(),
pin_memory=device.type == "cuda",
drop_last=True,
prefetch_factor=2 if cfg.num_workers > 0 else None,
)
```
What exactly does `hf_dataset.num_shards` represent? Is it safe to manually override or edit num_shards?
My batch loading is slower than expected (2-3 s per batch), yet num_workers cannot be raised without the warning: `Too many dataloader workers: 4 (max is dataset.num_shards=3). Stopping 1 dataloader workers.`
Even with num_workers=datasets[sub_idx].num_shards, the warning still appears! (My num_workers is 4 and hf_dataset.num_shards is 100+, so datasets[sub_idx].num_shards=4.)
Why does the "too many workers" warning persist even when num_workers equals dataset.num_shards, and how do I fix this?
Thanks so much for any insights or help with this! Really appreciate your time and expertise 😊
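For context, the warning comes from how an `IterableDataset` distributes its shards across DataLoader workers: each worker receives a disjoint subset of shards, so any worker beyond `num_shards` would have nothing to read. A minimal plain-Python sketch of that assignment logic (an illustration, not the actual `datasets` implementation):

```python
def assign_shards(num_shards: int, num_workers: int) -> list[list[int]]:
    """Distribute shard indices across workers round-robin.

    Workers whose list comes back empty would sit idle, which is why
    the library warns and stops them.
    """
    assignments = [[] for _ in range(num_workers)]
    for shard in range(num_shards):
        assignments[shard % num_workers].append(shard)
    return assignments

# 3 shards, 4 workers: the 4th worker gets no shards
print(assign_shards(3, 4))  # [[0], [1], [2], []]
```

This is why capping num_workers at the dataset's num_shards (or increasing the number of shards) removes the warning.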
| null |
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/7999/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/7999/timeline
| null | null | null | null | false
|
https://api.github.com/repos/huggingface/datasets/issues/7998
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/7998/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/7998/comments
|
https://api.github.com/repos/huggingface/datasets/issues/7998/events
|
https://github.com/huggingface/datasets/issues/7998
| 3,912,624,238
|
I_kwDODunzps7pNehu
| 7,998
|
[doc] Inconsistent ENV VAR Name for Progress Bar Toggle
|
{
"login": "Moenupa",
"id": 49304833,
"node_id": "MDQ6VXNlcjQ5MzA0ODMz",
"avatar_url": "https://avatars.githubusercontent.com/u/49304833?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Moenupa",
"html_url": "https://github.com/Moenupa",
"followers_url": "https://api.github.com/users/Moenupa/followers",
"following_url": "https://api.github.com/users/Moenupa/following{/other_user}",
"gists_url": "https://api.github.com/users/Moenupa/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Moenupa/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Moenupa/subscriptions",
"organizations_url": "https://api.github.com/users/Moenupa/orgs",
"repos_url": "https://api.github.com/users/Moenupa/repos",
"events_url": "https://api.github.com/users/Moenupa/events{/privacy}",
"received_events_url": "https://api.github.com/users/Moenupa/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
open
| false
| null |
[] | null |
[] | 2026-02-08T12:16:44
| 2026-02-08T12:16:44
| null |
NONE
| null | null |
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
|
{
"blocked_by": 0,
"total_blocked_by": 0,
"blocking": 0,
"total_blocking": 0
}
|
Code uses env var name `HF_DATASETS_DISABLE_PROGRESS_BARS`.
https://github.com/huggingface/datasets/blob/025593f2f0722f31fc136e0ae45da4ff44d4416a/src/datasets/config.py#L221-L226
Docstrings and warnings report env var name `HF_DATASETS_DISABLE_PROGRESS_BAR` without the ending `S`.
https://github.com/huggingface/datasets/blob/025593f2f0722f31fc136e0ae45da4ff44d4416a/src/datasets/utils/tqdm.py#L61-L73
https://github.com/huggingface/datasets/blob/025593f2f0722f31fc136e0ae45da4ff44d4416a/src/datasets/utils/tqdm.py#L78-L90
https://github.com/huggingface/datasets/blob/025593f2f0722f31fc136e0ae45da4ff44d4416a/src/datasets/utils/tqdm.py#L95-L100
This affects doc webpages as well, e.g., see https://huggingface.co/docs/datasets/en/package_reference/utilities#datasets.enable_progress_bars.
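A quick way to see why the trailing `S` matters: the library only reads the exact name defined in `config.py`, so setting the singular variant has no effect. A minimal sketch of the boolean env-var parsing pattern (the helper name here is illustrative, not the library's actual function):

```python
import os

ENV_VAR = "HF_DATASETS_DISABLE_PROGRESS_BARS"  # the name config.py actually reads

def progress_bars_disabled(environ=os.environ) -> bool:
    # Common truthy spellings for boolean environment variables
    return environ.get(ENV_VAR, "").upper() in {"1", "ON", "YES", "TRUE"}

# The misspelled singular name from the docstrings is silently ignored:
print(progress_bars_disabled({"HF_DATASETS_DISABLE_PROGRESS_BAR": "1"}))   # False
print(progress_bars_disabled({"HF_DATASETS_DISABLE_PROGRESS_BARS": "1"}))  # True
```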
| null |
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/7998/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/7998/timeline
| null | null | null | null | false
|
https://api.github.com/repos/huggingface/datasets/issues/7997
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/7997/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/7997/comments
|
https://api.github.com/repos/huggingface/datasets/issues/7997/events
|
https://github.com/huggingface/datasets/pull/7997
| 3,912,160,109
|
PR_kwDODunzps7CRNIl
| 7,997
|
fix: Dataset.map writer initialization when first examples return None
|
{
"login": "veeceey",
"id": 34209028,
"node_id": "MDQ6VXNlcjM0MjA5MDI4",
"avatar_url": "https://avatars.githubusercontent.com/u/34209028?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/veeceey",
"html_url": "https://github.com/veeceey",
"followers_url": "https://api.github.com/users/veeceey/followers",
"following_url": "https://api.github.com/users/veeceey/following{/other_user}",
"gists_url": "https://api.github.com/users/veeceey/gists{/gist_id}",
"starred_url": "https://api.github.com/users/veeceey/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/veeceey/subscriptions",
"organizations_url": "https://api.github.com/users/veeceey/orgs",
"repos_url": "https://api.github.com/users/veeceey/repos",
"events_url": "https://api.github.com/users/veeceey/events{/privacy}",
"received_events_url": "https://api.github.com/users/veeceey/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
open
| false
| null |
[] | null |
[] | 2026-02-08T07:02:00
| 2026-02-08T07:02:00
| null |
NONE
| null | null | null | null |
Fixes #7990
## Summary
When Dataset.map is called and the first examples processed return None, the writer is never properly initialized, causing a ValueError.
## Changes
- Modified _map_single to initialize the writer early if the first batch returns empty results
- Ensures writer is set before the first call to writer.write_batch
## Test Plan
- Added test case that reproduces the bug
- Verified the fix resolves the issue
- Existing tests still pass
| null |
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/7997/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/7997/timeline
| null | null | false
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/7997",
"html_url": "https://github.com/huggingface/datasets/pull/7997",
"diff_url": "https://github.com/huggingface/datasets/pull/7997.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/7997.patch",
"merged_at": null
}
| true
|
https://api.github.com/repos/huggingface/datasets/issues/7996
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/7996/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/7996/comments
|
https://api.github.com/repos/huggingface/datasets/issues/7996/events
|
https://github.com/huggingface/datasets/pull/7996
| 3,912,066,322
|
PR_kwDODunzps7CQ6GC
| 7,996
|
Fix Dataset.map writer initialization when early examples return None
|
{
"login": "veeceey",
"id": 34209028,
"node_id": "MDQ6VXNlcjM0MjA5MDI4",
"avatar_url": "https://avatars.githubusercontent.com/u/34209028?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/veeceey",
"html_url": "https://github.com/veeceey",
"followers_url": "https://api.github.com/users/veeceey/followers",
"following_url": "https://api.github.com/users/veeceey/following{/other_user}",
"gists_url": "https://api.github.com/users/veeceey/gists{/gist_id}",
"starred_url": "https://api.github.com/users/veeceey/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/veeceey/subscriptions",
"organizations_url": "https://api.github.com/users/veeceey/orgs",
"repos_url": "https://api.github.com/users/veeceey/repos",
"events_url": "https://api.github.com/users/veeceey/events{/privacy}",
"received_events_url": "https://api.github.com/users/veeceey/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
open
| false
| null |
[] | null |
[] | 2026-02-08T05:52:45
| 2026-02-08T05:52:45
| null |
NONE
| null | null | null | null |
## Summary
Fixes #7990
This PR fixes a bug in `Dataset.map()` where the writer initialization was incorrectly tied to the index being 0, causing crashes when the map function returns `None` for the first few examples and later returns a dict.
### Changes
- **Non-batched mode** (line 3676): Changed from `if i == 0:` to `if writer is None:`
- **Batched mode** (line 3701): Changed from `if i and i[0] == 0:` to `if writer is None:`
### Why This Fix Works
The original code assumed that `update_data` would always be determined by the time the first example (i=0) was processed. However, `update_data` is set lazily after processing each example - it becomes `True` when the function first returns a non-None value.
If a function returns `None` for early examples and a dict for later ones:
1. At i=0, the function returns `None`, so `update_data` remains `None`
2. Writer is NOT initialized (because we're not updating data)
3. At i=2, the function returns a dict, so `update_data` becomes `True`
4. **Old code**: Tries to use `writer` (still None) because i != 0 → crash
5. **New code**: Checks `if writer is None` and initializes it → works correctly
### Test Plan
The fix can be verified with this minimal test case from the issue:
```python
from datasets import Dataset
ds = Dataset.from_dict({"x": [1, 2, 3]})
def fn(example, idx):
if idx < 2:
return None
return {"x": [example["x"] * 10]}
# Should work without errors
result = list(ds.map(fn, with_indices=True))
print(result) # [{'x': 1}, {'x': 2}, {'x': [30]}]
```
**Before this fix**: Crashes with `AttributeError: 'NoneType' object has no attribute 'write'`
**After this fix**: Works correctly
### Related
This fix ensures the writer is initialized the first time a non-None value is returned, regardless of which example index that occurs at. This makes the code more robust to different map function behaviors.
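The lazy-initialization pattern the PR describes can be illustrated with a toy writer loop (a simulation of the control flow only, not the actual `_map_single` code):

```python
class ToyWriter:
    """Stand-in for the Arrow writer: collects written rows."""
    def __init__(self):
        self.rows = []

    def write(self, row):
        self.rows.append(row)

def map_rows(rows, fn):
    writer = None
    for i, row in enumerate(rows):
        result = fn(row, i)
        if result is None:
            continue  # nothing to write for this example
        # Old code: `if i == 0: writer = ToyWriter()` -> writer stays None
        # if the first non-None result arrives at i > 0, then crashes here.
        if writer is None:  # the fix: initialize on first non-None result
            writer = ToyWriter()
        writer.write(result)
    # If the function never returned data, keep the input unchanged.
    return writer.rows if writer is not None else rows

rows = [{"x": 1}, {"x": 2}, {"x": 3}]
fn = lambda row, i: None if i < 2 else {"x": row["x"] * 10}
print(map_rows(rows, fn))  # [{'x': 30}]
```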
| null |
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/7996/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/7996/timeline
| null | null | false
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/7996",
"html_url": "https://github.com/huggingface/datasets/pull/7996",
"diff_url": "https://github.com/huggingface/datasets/pull/7996.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/7996.patch",
"merged_at": null
}
| true
|
https://api.github.com/repos/huggingface/datasets/issues/7995
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/7995/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/7995/comments
|
https://api.github.com/repos/huggingface/datasets/issues/7995/events
|
https://github.com/huggingface/datasets/pull/7995
| 3,912,055,867
|
PR_kwDODunzps7CQ4C9
| 7,995
|
Bump fsspec upper bound to 2026.2.0 (fixes #7994)
|
{
"login": "jayzuccarelli",
"id": 11176606,
"node_id": "MDQ6VXNlcjExMTc2NjA2",
"avatar_url": "https://avatars.githubusercontent.com/u/11176606?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jayzuccarelli",
"html_url": "https://github.com/jayzuccarelli",
"followers_url": "https://api.github.com/users/jayzuccarelli/followers",
"following_url": "https://api.github.com/users/jayzuccarelli/following{/other_user}",
"gists_url": "https://api.github.com/users/jayzuccarelli/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jayzuccarelli/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jayzuccarelli/subscriptions",
"organizations_url": "https://api.github.com/users/jayzuccarelli/orgs",
"repos_url": "https://api.github.com/users/jayzuccarelli/repos",
"events_url": "https://api.github.com/users/jayzuccarelli/events{/privacy}",
"received_events_url": "https://api.github.com/users/jayzuccarelli/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
open
| false
| null |
[] | null |
[] | 2026-02-08T05:43:15
| 2026-02-08T05:43:15
| null |
NONE
| null | null | null | null |
Fixes #7994. Bumps the fsspec upper bound so the latest version can be used; CI will validate compatibility.
| null |
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/7995/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/7995/timeline
| null | null | false
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/7995",
"html_url": "https://github.com/huggingface/datasets/pull/7995",
"diff_url": "https://github.com/huggingface/datasets/pull/7995.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/7995.patch",
"merged_at": null
}
| true
|
https://api.github.com/repos/huggingface/datasets/issues/7994
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/7994/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/7994/comments
|
https://api.github.com/repos/huggingface/datasets/issues/7994/events
|
https://github.com/huggingface/datasets/issues/7994
| 3,906,330,806
|
I_kwDODunzps7o1eC2
| 7,994
|
Bump fsspec upper bound constraint
|
{
"login": "hadim",
"id": 528003,
"node_id": "MDQ6VXNlcjUyODAwMw==",
"avatar_url": "https://avatars.githubusercontent.com/u/528003?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/hadim",
"html_url": "https://github.com/hadim",
"followers_url": "https://api.github.com/users/hadim/followers",
"following_url": "https://api.github.com/users/hadim/following{/other_user}",
"gists_url": "https://api.github.com/users/hadim/gists{/gist_id}",
"starred_url": "https://api.github.com/users/hadim/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/hadim/subscriptions",
"organizations_url": "https://api.github.com/users/hadim/orgs",
"repos_url": "https://api.github.com/users/hadim/repos",
"events_url": "https://api.github.com/users/hadim/events{/privacy}",
"received_events_url": "https://api.github.com/users/hadim/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
open
| false
| null |
[] | null |
[] | 2026-02-06T11:37:54
| 2026-02-06T11:37:54
| null |
NONE
| null | null |
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
|
{
"blocked_by": 0,
"total_blocked_by": 0,
"blocking": 0,
"total_blocking": 0
}
|
Would it be possible to bump fsspec upper bound to the latest (2026.2.0)?
I saw you had some API compat issues in the past (https://github.com/huggingface/datasets/issues/7326) and I understand the need for an upper bound.
But I wonder whether your CI and tests are a good proxy for catching fsspec API breakage. If so, triggering the datasets CI with the latest version of fsspec should tell us whether it's all good, right?
Happy to open a PR if needed.
| null |
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/7994/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/7994/timeline
| null | null | null | null | false
|
https://api.github.com/repos/huggingface/datasets/issues/7993
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/7993/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/7993/comments
|
https://api.github.com/repos/huggingface/datasets/issues/7993/events
|
https://github.com/huggingface/datasets/pull/7993
| 3,898,606,021
|
PR_kwDODunzps7Bkr3F
| 7,993
|
:sparkles: Add 'SparseCsv' builder and 'sparse_collate_fn' for efficient high-dimensional sparse data loading
|
{
"login": "Ebraheem1",
"id": 22107086,
"node_id": "MDQ6VXNlcjIyMTA3MDg2",
"avatar_url": "https://avatars.githubusercontent.com/u/22107086?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Ebraheem1",
"html_url": "https://github.com/Ebraheem1",
"followers_url": "https://api.github.com/users/Ebraheem1/followers",
"following_url": "https://api.github.com/users/Ebraheem1/following{/other_user}",
"gists_url": "https://api.github.com/users/Ebraheem1/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Ebraheem1/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Ebraheem1/subscriptions",
"organizations_url": "https://api.github.com/users/Ebraheem1/orgs",
"repos_url": "https://api.github.com/users/Ebraheem1/repos",
"events_url": "https://api.github.com/users/Ebraheem1/events{/privacy}",
"received_events_url": "https://api.github.com/users/Ebraheem1/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
open
| false
| null |
[] | null |
[] | 2026-02-04T21:59:39
| 2026-02-04T22:00:48
| null |
NONE
| null | null | null | null |
This PR introduces a new dataset builder, SparseCsv, designed to handle "wide" tabular datasets (e.g., 100k+ columns common in transcriptomics, sparse NLP features, or recommender systems) that are typically too large to load into memory as dense Arrow tables.
It also adds a utility function, `sparse_collate_fn`, to seamlessly convert these sparse examples into `torch.sparse` or `scipy.sparse` matrices during training.
This PR should fix #7377
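As an illustration of the kind of conversion such a collate function performs, here is a plain-Python sketch of CSR assembly, assuming each example carries `indices` (column ids) and `values` fields; the PR's actual field names and signature may differ:

```python
def collate_to_csr(batch, num_cols):
    """Stack sparse examples into CSR arrays (indptr, indices, data).

    The three lists can be handed to scipy.sparse.csr_matrix or
    torch.sparse_csr_tensor to build a (len(batch), num_cols) matrix.
    """
    indptr, indices, data = [0], [], []
    for example in batch:
        assert all(i < num_cols for i in example["indices"]), "column id out of range"
        indices.extend(example["indices"])
        data.extend(example["values"])
        indptr.append(len(indices))
    return indptr, indices, data

batch = [
    {"indices": [0, 5], "values": [1.0, 2.0]},
    {"indices": [3], "values": [4.0]},
]
print(collate_to_csr(batch, num_cols=100_000))
# ([0, 2, 3], [0, 5, 3], [1.0, 2.0, 4.0])
```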
| null |
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/7993/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/7993/timeline
| null | null | false
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/7993",
"html_url": "https://github.com/huggingface/datasets/pull/7993",
"diff_url": "https://github.com/huggingface/datasets/pull/7993.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/7993.patch",
"merged_at": null
}
| true
|
https://api.github.com/repos/huggingface/datasets/issues/7992
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/7992/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/7992/comments
|
https://api.github.com/repos/huggingface/datasets/issues/7992/events
|
https://github.com/huggingface/datasets/pull/7992
| 3,897,848,157
|
PR_kwDODunzps7BiJps
| 7,992
|
Add `IterableDataset.reshard()`
|
{
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null |
[
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_7992). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update."
] | 2026-02-04T18:24:41
| 2026-02-04T18:55:38
| 2026-02-04T18:55:35
|
MEMBER
| null | null | null | null |
To increase the number of shards of a dataset, you can use [`IterableDataset.reshard`]:
```py
>>> dataset
IterableDataset({
features: ['label', 'title', 'content'],
num_shards: 4
})
>>> dataset.reshard()
IterableDataset({
features: ['label', 'title', 'content'],
num_shards: 3600
})
```
The resharding mechanism depends on the dataset file format.
For example for Parquet, it reshards using row groups instead of having one file per shard.
We can implement other formats later (e.g., JSON Lines and CSV can be split by recovering line boundaries from arbitrary locations).
Other details:
* fixed concatenate after shuffling, now it correctly shuffles the shards: close https://github.com/huggingface/datasets/issues/7196
* fixed interleave after split_by_node: close https://github.com/huggingface/datasets/issues/7868
* changed a bit how the effective seed works in multi node/worker/epoch situations
related to https://github.com/huggingface/datasets/issues/7917
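The row-group idea can be sketched as a simple partitioning problem: instead of one shard per file, every (file, row_group) pair becomes a unit that is dealt out to the target number of shards (a conceptual sketch, not the PR's implementation):

```python
def reshard_row_groups(row_group_counts, target_shards):
    """Deal (file_idx, row_group_idx) units round-robin into shards.

    row_group_counts[i] is the number of row groups in Parquet file i.
    The shard count is capped at the number of available units.
    """
    units = [
        (file_idx, rg)
        for file_idx, n in enumerate(row_group_counts)
        for rg in range(n)
    ]
    shards = [[] for _ in range(min(target_shards, len(units)))]
    for i, unit in enumerate(units):
        shards[i % len(shards)].append(unit)
    return shards

# 4 files with 3 row groups each -> up to 12 shards instead of 4
print(len(reshard_row_groups([3, 3, 3, 3], target_shards=12)))  # 12
```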
|
{
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/7992/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/7992/timeline
| null | null | false
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/7992",
"html_url": "https://github.com/huggingface/datasets/pull/7992",
"diff_url": "https://github.com/huggingface/datasets/pull/7992.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/7992.patch",
"merged_at": "2026-02-04T18:55:35"
}
| true
|
https://api.github.com/repos/huggingface/datasets/issues/7991
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/7991/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/7991/comments
|
https://api.github.com/repos/huggingface/datasets/issues/7991/events
|
https://github.com/huggingface/datasets/issues/7991
| 3,896,884,513
|
I_kwDODunzps7oRb0h
| 7,991
|
list(api.list_datasets()) giving jsondecode error
|
{
"login": "Moll-j",
"id": 199609168,
"node_id": "U_kgDOC-XLUA",
"avatar_url": "https://avatars.githubusercontent.com/u/199609168?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Moll-j",
"html_url": "https://github.com/Moll-j",
"followers_url": "https://api.github.com/users/Moll-j/followers",
"following_url": "https://api.github.com/users/Moll-j/following{/other_user}",
"gists_url": "https://api.github.com/users/Moll-j/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Moll-j/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Moll-j/subscriptions",
"organizations_url": "https://api.github.com/users/Moll-j/orgs",
"repos_url": "https://api.github.com/users/Moll-j/repos",
"events_url": "https://api.github.com/users/Moll-j/events{/privacy}",
"received_events_url": "https://api.github.com/users/Moll-j/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null |
[] | 2026-02-04T14:39:46
| 2026-02-05T10:30:09
| 2026-02-05T10:30:09
|
NONE
| null | null |
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
|
{
"blocked_by": 0,
"total_blocked_by": 0,
"blocking": 0,
"total_blocking": 0
}
|
I am using the Python API wrapper to list all datasets available on Hugging Face. This is for research: I need the full list to determine what percentage of datasets have language tags, among other questions requiring the total list. However, the following code, which worked a few months ago:
```python
from huggingface_hub import HfApi

api = HfApi(token=token)
datasets = list(api.list_datasets())
```
now raises a JSONDecodeError after reaching 2000 results. My understanding is that this is a pagination issue: no cursor is exposed to the API, so it doesn't know where to read from after the limit.
Is there any way to list all available datasets and combine them into one list?
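For reference, the underlying pattern is cursor-style pagination; here is a generic sketch with an injected `fetch` callable (the real `list_datasets` paginates internally against the Hub API, so these names are purely illustrative):

```python
def paginate(fetch, cursor=None):
    """Yield items page by page until fetch reports no next cursor.

    `fetch(cursor)` must return (items, next_cursor), where
    next_cursor is None on the last page.
    """
    while True:
        items, cursor = fetch(cursor)
        yield from items
        if cursor is None:
            return

# Fake two-page API for demonstration
pages = {None: ([1, 2], "p2"), "p2": ([3], None)}
print(list(paginate(pages.__getitem__)))  # [1, 2, 3]
```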
|
{
"login": "Moll-j",
"id": 199609168,
"node_id": "U_kgDOC-XLUA",
"avatar_url": "https://avatars.githubusercontent.com/u/199609168?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Moll-j",
"html_url": "https://github.com/Moll-j",
"followers_url": "https://api.github.com/users/Moll-j/followers",
"following_url": "https://api.github.com/users/Moll-j/following{/other_user}",
"gists_url": "https://api.github.com/users/Moll-j/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Moll-j/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Moll-j/subscriptions",
"organizations_url": "https://api.github.com/users/Moll-j/orgs",
"repos_url": "https://api.github.com/users/Moll-j/repos",
"events_url": "https://api.github.com/users/Moll-j/events{/privacy}",
"received_events_url": "https://api.github.com/users/Moll-j/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/7991/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/7991/timeline
| null |
completed
| null | null | false
|
https://api.github.com/repos/huggingface/datasets/issues/7990
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/7990/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/7990/comments
|
https://api.github.com/repos/huggingface/datasets/issues/7990/events
|
https://github.com/huggingface/datasets/issues/7990
| 3,895,870,826
|
I_kwDODunzps7oNkVq
| 7,990
|
Dataset.map crashes when first examples return None and later examples return dict — writer not initialized
|
{
"login": "meta-program",
"id": 30819640,
"node_id": "MDQ6VXNlcjMwODE5NjQw",
"avatar_url": "https://avatars.githubusercontent.com/u/30819640?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/meta-program",
"html_url": "https://github.com/meta-program",
"followers_url": "https://api.github.com/users/meta-program/followers",
"following_url": "https://api.github.com/users/meta-program/following{/other_user}",
"gists_url": "https://api.github.com/users/meta-program/gists{/gist_id}",
"starred_url": "https://api.github.com/users/meta-program/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/meta-program/subscriptions",
"organizations_url": "https://api.github.com/users/meta-program/orgs",
"repos_url": "https://api.github.com/users/meta-program/repos",
"events_url": "https://api.github.com/users/meta-program/events{/privacy}",
"received_events_url": "https://api.github.com/users/meta-program/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
open
| false
| null |
[] | null |
[] | 2026-02-04T10:43:20
| 2026-02-04T10:43:20
| null |
NONE
| null | null |
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
|
{
"blocked_by": 0,
"total_blocked_by": 0,
"blocking": 0,
"total_blocking": 0
}
|
### Describe the bug
I detected a serious [bug from datasets/arrow_dataset.py](https://github.com/huggingface/datasets/blob/main/src/datasets/arrow_dataset.py#L3676)
`Dataset.map` crashes with `writer is None` when the map function returns `None` for the first few examples and a dictionary (or `pa.Table` / DataFrame) for later ones. This happens because the internal writer is initialized only when `i == 0` (or `i[0] == 0` in batched mode), while `update_data` is determined lazily after processing the first example/batch, so an early `None` return leaves the writer uninitialized.
**Suggested fix**
Replace `if i == 0` / `if i[0] == 0` checks with `if writer is None` when initializing the writer.
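As an illustration of the suggested guard, here is a minimal stand-in sketch (plain Python, not the actual `datasets` source) where writer initialization is keyed on the first non-`None` result rather than on index 0:

```python
# Hypothetical stand-in for the writer loop in _map_single, illustrating
# the suggested `if writer is None` guard (not the real datasets code).
results = [None, None, {"x": 30}]  # map fn returned None for early examples

writer = None
written = []

for i, example in enumerate(results):
    if example is None:
        continue  # nothing to write for this example
    if writer is None:  # was `if i == 0`, which is never true here
        writer = written.append  # stand-in for ArrowWriter initialization
    writer(example)

print(written)  # → [{'x': 30}]
```

With the index-based check, `writer` would still be `None` at the first write and the loop would crash, which is exactly the reported failure.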
### Steps to reproduce the bug
```python
from datasets import Dataset
# Create a minimal dataset
ds = Dataset.from_dict({"x": [1, 2, 3]})
# Define a map function that returns None for first examples, dict later
def fn(example, idx):
if idx < 2:
return None
return {"x": [example["x"] * 10]}
# Apply map with indices
list(ds.map(fn, with_indices=True))
```
**Expected:** function executes without errors.
**Observed:** crashes with `AttributeError: 'NoneType' object has no attribute 'write'` because the internal writer is not initialized when the first non-None return happens after i > 0.
### Expected behavior
The `Dataset.map` function should handle map functions that return `None` for some examples and a dictionary (or `pa.Table` / DataFrame) for later ones: the internal writer should be initialized when the first non-`None` value is returned, so the dataset can be updated without crashing, and the call should succeed for all examples and return the updated dataset.
### Environment info
- python3.12
- datasets==3.6.0 (the latest version still has this problem)
- transformers==4.55.2
| null |
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/7990/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/7990/timeline
| null | null | null | null | false
|
https://api.github.com/repos/huggingface/datasets/issues/7989
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/7989/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/7989/comments
|
https://api.github.com/repos/huggingface/datasets/issues/7989/events
|
https://github.com/huggingface/datasets/pull/7989
| 3,895,613,949
|
PR_kwDODunzps7BaxDx
| 7,989
|
Remove pre-release workaround in CI for `transformers v5` and `huggingface_hub v1`
|
{
"login": "hanouticelina",
"id": 36770234,
"node_id": "MDQ6VXNlcjM2NzcwMjM0",
"avatar_url": "https://avatars.githubusercontent.com/u/36770234?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/hanouticelina",
"html_url": "https://github.com/hanouticelina",
"followers_url": "https://api.github.com/users/hanouticelina/followers",
"following_url": "https://api.github.com/users/hanouticelina/following{/other_user}",
"gists_url": "https://api.github.com/users/hanouticelina/gists{/gist_id}",
"starred_url": "https://api.github.com/users/hanouticelina/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/hanouticelina/subscriptions",
"organizations_url": "https://api.github.com/users/hanouticelina/orgs",
"repos_url": "https://api.github.com/users/hanouticelina/repos",
"events_url": "https://api.github.com/users/hanouticelina/events{/privacy}",
"received_events_url": "https://api.github.com/users/hanouticelina/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null |
[
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_7989). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update."
] | 2026-02-04T09:42:49
| 2026-02-04T15:20:04
| 2026-02-04T15:20:02
|
CONTRIBUTOR
| null | null | null | null |
This PR removes the workaround for pre-release `transformers v5.*` / `huggingface_hub v1.*` in the `test_py314_future` job now that they are officially released.
cc @Wauplin just for viz since you introduced this in https://github.com/huggingface/datasets/pull/7783.
|
{
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/7989/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/7989/timeline
| null | null | false
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/7989",
"html_url": "https://github.com/huggingface/datasets/pull/7989",
"diff_url": "https://github.com/huggingface/datasets/pull/7989.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/7989.patch",
"merged_at": "2026-02-04T15:20:02"
}
| true
|
https://api.github.com/repos/huggingface/datasets/issues/7988
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/7988/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/7988/comments
|
https://api.github.com/repos/huggingface/datasets/issues/7988/events
|
https://github.com/huggingface/datasets/issues/7988
| 3,895,353,435
|
I_kwDODunzps7oLmBb
| 7,988
|
`Dataset.map()` breaks when `function` calls `import polars as pl` and `num_proc`>1: "UnboundLocalError: cannot access local variable 'pl' where it is not associated with a value"
|
{
"login": "ligz08",
"id": 7464471,
"node_id": "MDQ6VXNlcjc0NjQ0NzE=",
"avatar_url": "https://avatars.githubusercontent.com/u/7464471?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ligz08",
"html_url": "https://github.com/ligz08",
"followers_url": "https://api.github.com/users/ligz08/followers",
"following_url": "https://api.github.com/users/ligz08/following{/other_user}",
"gists_url": "https://api.github.com/users/ligz08/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ligz08/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ligz08/subscriptions",
"organizations_url": "https://api.github.com/users/ligz08/orgs",
"repos_url": "https://api.github.com/users/ligz08/repos",
"events_url": "https://api.github.com/users/ligz08/events{/privacy}",
"received_events_url": "https://api.github.com/users/ligz08/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
open
| false
| null |
[] | null |
[] | 2026-02-04T08:42:23
| 2026-02-04T08:42:23
| null |
NONE
| null | null |
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
|
{
"blocked_by": 0,
"total_blocked_by": 0,
"blocking": 0,
"total_blocking": 0
}
|
### Describe the bug
# Repro
These two conditions seem to consistently reproduce the issue:
- function passed to `Dataset.map()` explicitly or implicitly calls `import polars as pl`
- `num_proc` > 1
# Trace
```
RemoteTraceback Traceback (most recent call last)
RemoteTraceback:
"""
Traceback (most recent call last):
File "c:\Users\{redacted}\.venv\Lib\site-packages\multiprocess\pool.py", line 125, in worker
result = (True, func(*args, **kwds))
^^^^^^^^^^^^^^^^^^^
File "c:\Users\{redacted}\.venv\Lib\site-packages\datasets\utils\py_utils.py", line 586, in _write_generator_to_queue
for i, result in enumerate(func(**kwargs)):
^^^^^^^^^^^^^^^^^^^^^^^^^
File "c:\Users\{redacted}\.venv\Lib\site-packages\datasets\arrow_dataset.py", line 3687, in _map_single
and isinstance(example, pl.DataFrame)
^^
UnboundLocalError: cannot access local variable 'pl' where it is not associated with a value
"""
The above exception was the direct cause of the following exception:
UnboundLocalError Traceback (most recent call last)
Cell In[2], [line 9](vscode-notebook-cell:?execution_count=2&line=9)
6 import polars as pl
7 return {'squared': sample['n'] ** 2}
----> [9](vscode-notebook-cell:?execution_count=2&line=9) ds.map(square, num_proc=2)
File c:\Users\{redacted}\.venv\Lib\site-packages\datasets\arrow_dataset.py:562, in transmit_format.<locals>.wrapper(*args, **kwargs)
555 self_format = {
556 "type": self._format_type,
557 "format_kwargs": self._format_kwargs,
558 "columns": self._format_columns,
559 "output_all_columns": self._output_all_columns,
560 }
561 # apply actual function
--> [562](file:///C:/Users/{redacted}/.venv/Lib/site-packages/datasets/arrow_dataset.py:562) out: Union["Dataset", "DatasetDict"] = func(self, *args, **kwargs)
563 datasets: list["Dataset"] = list(out.values()) if isinstance(out, dict) else [out]
564 # re-apply format to the output
File c:\Users\{redacted}\.venv\Lib\site-packages\datasets\arrow_dataset.py:3334, in Dataset.map(self, function, with_indices, with_rank, input_columns, batched, batch_size, drop_last_batch, remove_columns, keep_in_memory, load_from_cache_file, cache_file_name, writer_batch_size, features, disable_nullable, fn_kwargs, num_proc, suffix_template, new_fingerprint, desc, try_original_type)
3331 os.environ = prev_env
3332 logger.info(f"Spawning {num_proc} processes")
-> [3334](file:///C:/Users/{redacted}/.venv/Lib/site-packages/datasets/arrow_dataset.py:3334) for rank, done, content in iflatmap_unordered(
3335 pool, Dataset._map_single, kwargs_iterable=unprocessed_kwargs_per_job
3336 ):
3337 check_if_shard_done(rank, done, content)
3339 pool.close()
File c:\Users\{redacted}\.venv\Lib\site-packages\datasets\utils\py_utils.py:626, in iflatmap_unordered(pool, func, kwargs_iterable)
623 finally:
624 if not pool_changed:
625 # we get the result in case there's an error to raise
--> [626](file:///C:/Users/{redacted}/.venv/Lib/site-packages/datasets/utils/py_utils.py:626) [async_result.get(timeout=0.05) for async_result in async_results]
File c:\Users\{redacted}\.venv\Lib\site-packages\multiprocess\pool.py:774, in ApplyResult.get(self, timeout)
772 return self._value
773 else:
--> [774](file:///C:/Users/{redacted}/.venv/Lib/site-packages/multiprocess/pool.py:774) raise self._value
UnboundLocalError: cannot access local variable 'pl' where it is not associated with a value
```
# Why `import polars` in a worker function?
To my knowledge `Dataset.map()` doesn't support a worker init function, and objects useful inside a function aren't always picklable, so I commonly use this pattern to construct the unpicklable object in a worker process:
```python
def func(example, **kwargs):
if 'my_unpickable_object' not in globals():
from my_module import MyClass
my_unpickable_object = MyClass(**kwargs)
return {'newcol': my_unpickable_object.calculate_something(example['n'])}
ds = Dataset.load_from_disk(...)
ds.map(func, num_proc=2, ...)
```
and here `from my_module import MyClass` may implicitly call `import polars as pl` e.g. when `my_module.py` has that line, or when it imports some other module containing `import polars as pl`.
# A workaround
Things seem to work OK if I don't do any import inside `func`, but instead pass a class already imported outside `func` as a constructor, like below, although I'm unsure how many scenarios this workaround covers.
```python
from my_module import MyClass
def func(example, constructor=MyClass, **kwargs):
if 'my_unpickable_object' not in globals():
my_unpickable_object = constructor(**kwargs)
return {'newcol': my_unpickable_object.calculate_something(example['n'])}
ds = Dataset.load_from_disk(...)
ds.map(func, num_proc=2, ...)
```
# My speculations
Before the crash point, on line 3656 of `arrow_dataset.py` it reads:
```python
if config.POLARS_AVAILABLE and "polars" in sys.modules:
import polars as pl
```
My guess is that this condition somehow doesn't hold in a worker process, so the `if` block is never entered and `pl` is never bound.
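A minimal, polars-free sketch of the same Python scoping pitfall (function and parameter names here are hypothetical): because `import polars as pl` appears inside the function body, `pl` is compiled as a local variable for the whole function, and skipping the conditional import leaves it unbound when it is referenced later.

```python
def check(example, polars_already_imported=False):
    # Because of this conditional import, `pl` is a *local* name for the
    # whole function; if the branch is skipped, it stays unbound.
    if polars_already_imported:
        import polars as pl
    try:
        # the real code does `isinstance(example, pl.DataFrame)` unconditionally
        return isinstance(example, pl.DataFrame)
    except UnboundLocalError:
        return "pl is unbound"

# A fresh worker process has not imported polars yet, so the branch is skipped:
print(check({}))  # → pl is unbound
```

In the parent process the condition holds (polars is already in `sys.modules`), which would explain why the error only appears with `num_proc > 1`.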
### Steps to reproduce the bug
```python
from datasets import Dataset
ds = Dataset.from_dict({'n': list(range(10_000))})
def square(sample):
import polars as pl
return {'squared': sample['n'] ** 2}
ds.map(square, num_proc=2)
```
### Expected behavior
```
Dataset({
features: ['n', 'squared'],
num_rows: 10000
})
```
### Environment info
- `datasets` version: 4.5.0
- Platform: Windows-11-10.0.26200-SP0
- Python version: 3.12.10
- `huggingface_hub` version: 0.31.2
- PyArrow version: 21.0.0
- Pandas version: 2.2.3
- `fsspec` version: 2025.3.0
| null |
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/7988/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/7988/timeline
| null | null | null | null | false
|
https://api.github.com/repos/huggingface/datasets/issues/7987
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/7987/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/7987/comments
|
https://api.github.com/repos/huggingface/datasets/issues/7987/events
|
https://github.com/huggingface/datasets/pull/7987
| 3,894,713,494
|
PR_kwDODunzps7BX0pY
| 7,987
|
Fix index out of bound error with original_shard_lengths.
|
{
"login": "jonathanasdf",
"id": 511073,
"node_id": "MDQ6VXNlcjUxMTA3Mw==",
"avatar_url": "https://avatars.githubusercontent.com/u/511073?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jonathanasdf",
"html_url": "https://github.com/jonathanasdf",
"followers_url": "https://api.github.com/users/jonathanasdf/followers",
"following_url": "https://api.github.com/users/jonathanasdf/following{/other_user}",
"gists_url": "https://api.github.com/users/jonathanasdf/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jonathanasdf/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jonathanasdf/subscriptions",
"organizations_url": "https://api.github.com/users/jonathanasdf/orgs",
"repos_url": "https://api.github.com/users/jonathanasdf/repos",
"events_url": "https://api.github.com/users/jonathanasdf/events{/privacy}",
"received_events_url": "https://api.github.com/users/jonathanasdf/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
open
| false
| null |
[] | null |
[] | 2026-02-04T05:20:43
| 2026-02-04T05:20:43
| null |
NONE
| null | null | null | null |
I have gotten the following error
```
original_shard_lengths[original_shard_id] += 1
~~~~~~~~~~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^
IndexError: list index out of range
```
Not sure what causes it, but this fixes the error. This may not be the proper fix for the root cause though.
| null |
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/7987/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/7987/timeline
| null | null | false
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/7987",
"html_url": "https://github.com/huggingface/datasets/pull/7987",
"diff_url": "https://github.com/huggingface/datasets/pull/7987.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/7987.patch",
"merged_at": null
}
| true
|
https://api.github.com/repos/huggingface/datasets/issues/7986
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/7986/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/7986/comments
|
https://api.github.com/repos/huggingface/datasets/issues/7986/events
|
https://github.com/huggingface/datasets/issues/7986
| 3,892,776,651
|
I_kwDODunzps7oBw7L
| 7,986
|
`Dataset.map()` causes cache miss/fingerprint change when closure captures self containing non-deterministic state.
|
{
"login": "Cloud0310",
"id": 60375730,
"node_id": "MDQ6VXNlcjYwMzc1NzMw",
"avatar_url": "https://avatars.githubusercontent.com/u/60375730?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Cloud0310",
"html_url": "https://github.com/Cloud0310",
"followers_url": "https://api.github.com/users/Cloud0310/followers",
"following_url": "https://api.github.com/users/Cloud0310/following{/other_user}",
"gists_url": "https://api.github.com/users/Cloud0310/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Cloud0310/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Cloud0310/subscriptions",
"organizations_url": "https://api.github.com/users/Cloud0310/orgs",
"repos_url": "https://api.github.com/users/Cloud0310/repos",
"events_url": "https://api.github.com/users/Cloud0310/events{/privacy}",
"received_events_url": "https://api.github.com/users/Cloud0310/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
open
| false
| null |
[] | null |
[
"I suggest mentioning this explicitly in the docs: tell users to pass arguments via the `fn_kwargs` param or to use `functools.partial` to create a pure function."
] | 2026-02-03T19:16:49
| 2026-02-06T08:38:13
| null |
NONE
| null | null |
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
|
{
"blocked_by": 0,
"total_blocked_by": 0,
"blocking": 0,
"total_blocking": 0
}
|
### Describe the bug
When using `.map()` with a function defined inside a method of a **class that carries non-deterministic state** (a closure), if that function captures `self` to access a configuration variable (e.g. `self.foo`), the fingerprint mechanism serializes the entire class instance state.
If the class instance contains any non-deterministic state (such as random seeds, loggers, or distinct object IDs—in my case, PyTorch Lightning's `LightningDataModule`), the fingerprint changes on every run, rendering the cache useless.
While this may be intended behavior for `dill`, it is a significant "gotcha" for users migrating code into classes, as unrelated state changes cause massive re-processing overhead.
Real world "cache explosion" screenshot caused by the fingerprint mismatch:
<img width="942" height="382" alt="Image" src="https://github.com/user-attachments/assets/2fb0acba-ac07-4f00-bf30-c1ac932c9072" />
### Steps to reproduce the bug
Minimal reproduction code block:
```python3
import datasets
import uuid
# Prevent logging spam
datasets.logging.set_verbosity_error()
class ReproduceIssue:
def __init__(self):
# This is the variable we actually care about in the map function
self.foo = 32
# This simulates "dirty" internal state often found in framework classes
# (e.g., unique IDs, pointers to loggers, thread locks, or random seeds)
self.hidden_state = uuid.uuid4()
self.dataset = datasets.Dataset.from_dict({"strokes": [1, 2, 3]})
def setup(self):
# Closure captures 'self' to access 'self.foo'
def preprocess(batch):
# Accessing self binds the function to the specific instance state
_ = self.foo
return {"foo": batch["strokes"]}
return self.dataset.map(preprocess, batched=True)
print("--- Run 1 ---")
inst1 = ReproduceIssue()
ds1 = inst1.setup()
print(f"Fingerprint 1: {ds1._fingerprint}")
print("\n--- Run 2 (New Instance) ---")
inst2 = ReproduceIssue()
ds2 = inst2.setup()
print(f"Fingerprint 2: {ds2._fingerprint}")
if ds1._fingerprint != ds2._fingerprint:
print("\n❌ ISSUE REPRODUCED: Fingerprints differ (Cache Miss).")
else:
print("\n✅ Fingerprints match.")
```
Result:
```
--- Run 1 ---
Mapping: 100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 3/3 [00:00<00:00, 2025.26 examples/s]
Fingerprint 1: 1ce6104f9e97912a
--- Run 2 (New Instance) ---
Mapping: 100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 3/3 [00:00<00:00, 2300.77 examples/s]
Fingerprint 2: c0fc011ff86ea571
❌ ISSUE REPRODUCED: Fingerprints differ (Cache Miss).
```
### Expected behavior
The fingerprint should ideally depend **only on the bytecode of the function and the values of the variables actually accessed** (`self.foo`), rather than the state of the whole object self.
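A stdlib-only sketch of the workaround (names are hypothetical, and `pickle` stands in for the `dill`-based hashing): binding only the needed value to a pure function serializes stably across instances, while serializing the whole instance state does not.

```python
import functools
import hashlib
import pickle
import uuid

def preprocess(batch, foo):
    # pure function: depends only on its explicit arguments
    return {"foo": [x * foo for x in batch["strokes"]]}

class Holder:
    def __init__(self):
        self.foo = 32                     # the value we actually need
        self.hidden_state = uuid.uuid4()  # non-deterministic per instance

def digest(obj):
    return hashlib.sha256(pickle.dumps(obj)).hexdigest()

a, b = Holder(), Holder()

# Serializing whole instance state differs per instance (the cache-miss cause)
assert digest(a.__dict__) != digest(b.__dict__)

# Serializing a pure function bound only to the needed value is stable
assert digest(functools.partial(preprocess, foo=a.foo)) == \
       digest(functools.partial(preprocess, foo=b.foo))
```

With `datasets`, the equivalent pattern is a module-level function plus `ds.map(preprocess, batched=True, fn_kwargs={"foo": self.foo})`, so the fingerprint only sees the function and the values it actually uses.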
### Environment info
datasets version: 4.5.0, platform: any, python version: 3.13.
This was encountered while subclassing PyTorch Lightning's `LightningDataModule`. These objects inherently **contain internal state that differs per instance**.
| null |
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/7986/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/7986/timeline
| null | null | null | null | false
|
https://api.github.com/repos/huggingface/datasets/issues/7985
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/7985/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/7985/comments
|
https://api.github.com/repos/huggingface/datasets/issues/7985/events
|
https://github.com/huggingface/datasets/pull/7985
| 3,892,480,150
|
PR_kwDODunzps7BQaGn
| 7,985
|
Remove unused data files optims
|
{
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null |
[
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_7985). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update."
] | 2026-02-03T17:58:30
| 2026-02-03T18:30:30
| 2026-02-03T18:30:28
|
MEMBER
| null | null | null | null |
this fixes module inference when there are many metadata files
e.g. the lance dataset at https://huggingface.co/datasets/davanstrien/encyclopaedia-britannica-lance has > 200 metadata files
those optimizations are not used anymore; they come from a time when we were dealing with slow data-file iterators instead of lists
|
{
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/7985/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/7985/timeline
| null | null | false
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/7985",
"html_url": "https://github.com/huggingface/datasets/pull/7985",
"diff_url": "https://github.com/huggingface/datasets/pull/7985.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/7985.patch",
"merged_at": "2026-02-03T18:30:28"
}
| true
|
https://api.github.com/repos/huggingface/datasets/issues/7984
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/7984/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/7984/comments
|
https://api.github.com/repos/huggingface/datasets/issues/7984/events
|
https://github.com/huggingface/datasets/issues/7984
| 3,891,431,105
|
I_kwDODunzps7n8obB
| 7,984
|
Data
|
{
"login": "iLenceJhay",
"id": 228845628,
"node_id": "U_kgDODaPoPA",
"avatar_url": "https://avatars.githubusercontent.com/u/228845628?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/iLenceJhay",
"html_url": "https://github.com/iLenceJhay",
"followers_url": "https://api.github.com/users/iLenceJhay/followers",
"following_url": "https://api.github.com/users/iLenceJhay/following{/other_user}",
"gists_url": "https://api.github.com/users/iLenceJhay/gists{/gist_id}",
"starred_url": "https://api.github.com/users/iLenceJhay/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/iLenceJhay/subscriptions",
"organizations_url": "https://api.github.com/users/iLenceJhay/orgs",
"repos_url": "https://api.github.com/users/iLenceJhay/repos",
"events_url": "https://api.github.com/users/iLenceJhay/events{/privacy}",
"received_events_url": "https://api.github.com/users/iLenceJhay/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
open
| false
| null |
[] | null |
[] | 2026-02-03T14:01:48
| 2026-02-03T14:01:48
| null |
NONE
| null | null |
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
|
{
"blocked_by": 0,
"total_blocked_by": 0,
"blocking": 0,
"total_blocking": 0
}
| null | null |
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/7984/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/7984/timeline
| null | null | null | null | false
|
https://api.github.com/repos/huggingface/datasets/issues/7983
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/7983/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/7983/comments
|
https://api.github.com/repos/huggingface/datasets/issues/7983/events
|
https://github.com/huggingface/datasets/pull/7983
| 3,888,225,779
|
PR_kwDODunzps7BCJgV
| 7,983
|
Add Zarr streaming support (POC)
|
{
"login": "KOKOSde",
"id": 163377666,
"node_id": "U_kgDOCbzyAg",
"avatar_url": "https://avatars.githubusercontent.com/u/163377666?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/KOKOSde",
"html_url": "https://github.com/KOKOSde",
"followers_url": "https://api.github.com/users/KOKOSde/followers",
"following_url": "https://api.github.com/users/KOKOSde/following{/other_user}",
"gists_url": "https://api.github.com/users/KOKOSde/gists{/gist_id}",
"starred_url": "https://api.github.com/users/KOKOSde/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/KOKOSde/subscriptions",
"organizations_url": "https://api.github.com/users/KOKOSde/orgs",
"repos_url": "https://api.github.com/users/KOKOSde/repos",
"events_url": "https://api.github.com/users/KOKOSde/events{/privacy}",
"received_events_url": "https://api.github.com/users/KOKOSde/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
open
| false
| null |
[] | null |
[] | 2026-02-03T00:06:46
| 2026-02-04T00:09:20
| null |
NONE
| null | null | null | null |
Add initial Zarr streaming support (POC).
This introduces a `zarr` packaged module and docs/tests to validate basic loading.
Note: I pushed a follow-up commit to fix an accidental duplication in `benchmarks/benchmark_zarr_streaming.py` (file now contains a single benchmark script).
| null |
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/7983/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/7983/timeline
| null | null | false
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/7983",
"html_url": "https://github.com/huggingface/datasets/pull/7983",
"diff_url": "https://github.com/huggingface/datasets/pull/7983.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/7983.patch",
"merged_at": null
}
| true
|
https://api.github.com/repos/huggingface/datasets/issues/7982
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/7982/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/7982/comments
|
https://api.github.com/repos/huggingface/datasets/issues/7982/events
|
https://github.com/huggingface/datasets/pull/7982
| 3,888,131,856
|
PR_kwDODunzps7BB1zZ
| 7,982
|
Fix unstable tokenizer fingerprinting (enables map cache reuse)
|
{
"login": "KOKOSde",
"id": 163377666,
"node_id": "U_kgDOCbzyAg",
"avatar_url": "https://avatars.githubusercontent.com/u/163377666?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/KOKOSde",
"html_url": "https://github.com/KOKOSde",
"followers_url": "https://api.github.com/users/KOKOSde/followers",
"following_url": "https://api.github.com/users/KOKOSde/following{/other_user}",
"gists_url": "https://api.github.com/users/KOKOSde/gists{/gist_id}",
"starred_url": "https://api.github.com/users/KOKOSde/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/KOKOSde/subscriptions",
"organizations_url": "https://api.github.com/users/KOKOSde/orgs",
"repos_url": "https://api.github.com/users/KOKOSde/repos",
"events_url": "https://api.github.com/users/KOKOSde/events{/privacy}",
"received_events_url": "https://api.github.com/users/KOKOSde/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
open
| false
| null |
[] | null |
[] | 2026-02-02T23:34:51
| 2026-02-05T05:41:24
| null |
NONE
| null | null | null | null |
Fix unstable dataset fingerprinting when hashing `PreTrainedTokenizerFast`.
Some tokenizers backed by `tokenizers.Tokenizer` mutate runtime settings (padding/truncation) when called, which can change the serialized state and make dataset fingerprints unstable. That prevents `.map(load_from_cache_file=True)` from reusing cache files.
Fix: when hashing, temporarily disable backend padding/truncation so runtime settings don’t affect the fingerprint, then restore the original settings.
Includes a regression test showing `Hasher.hash(tokenizer)` stays stable after calling the tokenizer.
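The disable-hash-restore pattern described above can be sketched as follows (a minimal stand-alone sketch; `TinyTokenizer`, `naive_fingerprint`, and `stable_fingerprint` are hypothetical stand-ins, not the library's actual classes):

```python
import hashlib
import pickle


class TinyTokenizer:
    """Hypothetical stand-in for a tokenizer whose backend mutates
    runtime settings (padding/truncation) when it is called."""

    def __init__(self):
        self.padding = None  # runtime setting, mutated on call

    def __call__(self, text, padding=False):
        self.padding = padding  # side effect that changes serialized state
        return text.split()


def naive_fingerprint(obj):
    # Hashing the full serialized state: unstable across calls.
    return hashlib.sha256(pickle.dumps(obj.__dict__)).hexdigest()


def stable_fingerprint(tok):
    # The fix's pattern: temporarily clear runtime settings so they don't
    # affect the hash, then restore the original settings.
    saved = tok.padding
    tok.padding = None
    try:
        return naive_fingerprint(tok)
    finally:
        tok.padding = saved


tok = TinyTokenizer()
before = stable_fingerprint(tok)
tok("hello world", padding=True)  # mutates runtime state
after = stable_fingerprint(tok)
assert before == after  # fingerprint stays stable across calls
```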
| null |
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/7982/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/7982/timeline
| null | null | false
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/7982",
"html_url": "https://github.com/huggingface/datasets/pull/7982",
"diff_url": "https://github.com/huggingface/datasets/pull/7982.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/7982.patch",
"merged_at": null
}
| true
|
https://api.github.com/repos/huggingface/datasets/issues/7981
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/7981/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/7981/comments
|
https://api.github.com/repos/huggingface/datasets/issues/7981/events
|
https://github.com/huggingface/datasets/pull/7981
| 3,887,077,016
|
PR_kwDODunzps7A-V7J
| 7,981
|
Support pandas 3
|
{
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null |
[
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_7981). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update."
] | 2026-02-02T17:16:37
| 2026-02-02T17:34:25
| 2026-02-02T17:34:22
|
MEMBER
| null | null | null | null | null |
{
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/7981/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/7981/timeline
| null | null | false
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/7981",
"html_url": "https://github.com/huggingface/datasets/pull/7981",
"diff_url": "https://github.com/huggingface/datasets/pull/7981.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/7981.patch",
"merged_at": "2026-02-02T17:34:22"
}
| true
|
https://api.github.com/repos/huggingface/datasets/issues/7980
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/7980/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/7980/comments
|
https://api.github.com/repos/huggingface/datasets/issues/7980/events
|
https://github.com/huggingface/datasets/pull/7980
| 3,886,785,042
|
PR_kwDODunzps7A9Wrj
| 7,980
|
Drop python 3.9
|
{
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null |
[
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_7980). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update."
] | 2026-02-02T16:13:04
| 2026-02-02T16:26:31
| 2026-02-02T16:26:29
|
MEMBER
| null | null | null | null |
Python 3.9 reached end of life a few months ago, and transformers no longer supports 3.9.
|
{
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/7980/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/7980/timeline
| null | null | false
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/7980",
"html_url": "https://github.com/huggingface/datasets/pull/7980",
"diff_url": "https://github.com/huggingface/datasets/pull/7980.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/7980.patch",
"merged_at": "2026-02-02T16:26:29"
}
| true
|
https://api.github.com/repos/huggingface/datasets/issues/7979
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/7979/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/7979/comments
|
https://api.github.com/repos/huggingface/datasets/issues/7979/events
|
https://github.com/huggingface/datasets/pull/7979
| 3,886,772,007
|
PR_kwDODunzps7A9T06
| 7,979
|
Use temp files in push_to_hub to save memory
|
{
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null |
[
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_7979). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update."
] | 2026-02-02T16:10:38
| 2026-02-02T16:26:16
| 2026-02-02T16:26:14
|
MEMBER
| null | null | null | null |
Write parquet data to temp files on disk prior to upload, to save memory.
This is enabled for datasets loaded with either streaming=True or streaming=False.
|
{
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/7979/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/7979/timeline
| null | null | false
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/7979",
"html_url": "https://github.com/huggingface/datasets/pull/7979",
"diff_url": "https://github.com/huggingface/datasets/pull/7979.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/7979.patch",
"merged_at": "2026-02-02T16:26:14"
}
| true
|
https://api.github.com/repos/huggingface/datasets/issues/7978
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/7978/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/7978/comments
|
https://api.github.com/repos/huggingface/datasets/issues/7978/events
|
https://github.com/huggingface/datasets/pull/7978
| 3,879,787,436
|
PR_kwDODunzps7AmQfP
| 7,978
|
Fix 4910 kwargs
|
{
"login": "vedanta777",
"id": 218264809,
"node_id": "U_kgDODQJ06Q",
"avatar_url": "https://avatars.githubusercontent.com/u/218264809?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/vedanta777",
"html_url": "https://github.com/vedanta777",
"followers_url": "https://api.github.com/users/vedanta777/followers",
"following_url": "https://api.github.com/users/vedanta777/following{/other_user}",
"gists_url": "https://api.github.com/users/vedanta777/gists{/gist_id}",
"starred_url": "https://api.github.com/users/vedanta777/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/vedanta777/subscriptions",
"organizations_url": "https://api.github.com/users/vedanta777/orgs",
"repos_url": "https://api.github.com/users/vedanta777/repos",
"events_url": "https://api.github.com/users/vedanta777/events{/privacy}",
"received_events_url": "https://api.github.com/users/vedanta777/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
open
| false
| null |
[] | null |
[] | 2026-01-31T18:36:32
| 2026-02-02T13:08:33
| null |
NONE
| null | null | null | null |
Fix #4910: Merge duplicate kwargs in `load_dataset_builder()`
Problem: `load_dataset("dataset", base_path="./data")` raises `TypeError: got multiple values for keyword argument 'base_path'`.
Fix: merge with `{**builder_kwargs, **config_kwargs}` so user kwargs override dataset defaults.
Repro:
```python
# Before: TypeError; after the fix: works
load_dataset("rotten_tomatoes", base_path="./sample_data")
```
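The merge-and-override behavior can be sketched as follows (a minimal stand-alone sketch; `load_builder` is a hypothetical stand-in for `load_dataset_builder`, only echoing its kwargs):

```python
def load_builder(path, **kwargs):
    # Hypothetical stand-in for load_dataset_builder.
    return kwargs


builder_kwargs = {"base_path": "./default"}   # supplied by a module factory
user_kwargs = {"base_path": "./sample_data"}  # supplied by the caller

# Passing both expansions separately reproduces the reported TypeError:
raised = False
try:
    load_builder("ds", **builder_kwargs, **user_kwargs)
except TypeError as err:
    raised = "multiple values" in str(err)
assert raised

# Merging first lets the user-provided value win:
merged = {**builder_kwargs, **user_kwargs}
assert load_builder("ds", **merged) == {"base_path": "./sample_data"}
```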
| null |
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/7978/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/7978/timeline
| null | null | false
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/7978",
"html_url": "https://github.com/huggingface/datasets/pull/7978",
"diff_url": "https://github.com/huggingface/datasets/pull/7978.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/7978.patch",
"merged_at": null
}
| true
|
https://api.github.com/repos/huggingface/datasets/issues/7977
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/7977/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/7977/comments
|
https://api.github.com/repos/huggingface/datasets/issues/7977/events
|
https://github.com/huggingface/datasets/pull/7977
| 3,879,142,697
|
PR_kwDODunzps7AkMoM
| 7,977
|
Updated get_dataset_config_names returning default in offline mode
|
{
"login": "abigailtech",
"id": 178829649,
"node_id": "U_kgDOCqi5UQ",
"avatar_url": "https://avatars.githubusercontent.com/u/178829649?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/abigailtech",
"html_url": "https://github.com/abigailtech",
"followers_url": "https://api.github.com/users/abigailtech/followers",
"following_url": "https://api.github.com/users/abigailtech/following{/other_user}",
"gists_url": "https://api.github.com/users/abigailtech/gists{/gist_id}",
"starred_url": "https://api.github.com/users/abigailtech/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/abigailtech/subscriptions",
"organizations_url": "https://api.github.com/users/abigailtech/orgs",
"repos_url": "https://api.github.com/users/abigailtech/repos",
"events_url": "https://api.github.com/users/abigailtech/events{/privacy}",
"received_events_url": "https://api.github.com/users/abigailtech/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null |
[] | 2026-01-31T12:56:21
| 2026-02-01T07:25:33
| 2026-02-01T07:25:33
|
NONE
| null | null | null | null |
When a dataset is cached and accessed in offline mode, `get_dataset_config_names` was returning `['default']` instead of the actual cached config names. This happened because `CachedDatasetModuleFactory.get_module` returned a `DatasetModule` without `builder_configs_parameters`, causing the fallback to `['default']` in `get_dataset_config_names`.
The fix reads config_name from each dataset_info file in the cache directory and includes them as builder_configs_parameters in the returned DatasetModule. Invalid or missing dataset_info.json files are handled.
**Testing:**
1. Download a dataset in online mode so it gets cached
2. Switch to offline mode and call get_dataset_config_names
3. Verify it returns the cached config names instead of ['default']
**Example:**
- HF_DATASETS_OFFLINE=0 HF_HOME="/tmp/hftemp" python -c "import datasets; datasets.load_dataset('cais/mmlu', 'all')"
- HF_DATASETS_OFFLINE=1 HF_HOME="/tmp/hftemp" python -c "import datasets; print(datasets.get_dataset_config_names('cais/mmlu'))"
- -> Expected output: ['all']
Fixes https://github.com/huggingface/datasets/issues/7947
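The described fix can be sketched as follows (a self-contained sketch; `cached_config_names` is a hypothetical helper mirroring the idea, not the actual implementation, and the cache layout is simplified):

```python
import json
import os
import tempfile


def cached_config_names(cache_dir):
    # Collect `config_name` from each dataset_info.json under the cache
    # dir; skip invalid or unreadable files; fall back to ["default"]
    # only when nothing was found.
    names = []
    for root, _, files in os.walk(cache_dir):
        if "dataset_info.json" not in files:
            continue
        try:
            with open(os.path.join(root, "dataset_info.json")) as f:
                info = json.load(f)
        except (OSError, json.JSONDecodeError):
            continue  # invalid files are handled gracefully
        if info.get("config_name"):
            names.append(info["config_name"])
    return sorted(set(names)) or ["default"]


with tempfile.TemporaryDirectory() as cache:
    good = os.path.join(cache, "all")
    bad = os.path.join(cache, "broken")
    os.makedirs(good)
    os.makedirs(bad)
    with open(os.path.join(good, "dataset_info.json"), "w") as f:
        json.dump({"config_name": "all"}, f)
    with open(os.path.join(bad, "dataset_info.json"), "w") as f:
        f.write("{not json")  # must be skipped, not crash
    found = cached_config_names(cache)

assert found == ["all"]  # cached config name, not ["default"]
```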
|
{
"login": "abigailtech",
"id": 178829649,
"node_id": "U_kgDOCqi5UQ",
"avatar_url": "https://avatars.githubusercontent.com/u/178829649?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/abigailtech",
"html_url": "https://github.com/abigailtech",
"followers_url": "https://api.github.com/users/abigailtech/followers",
"following_url": "https://api.github.com/users/abigailtech/following{/other_user}",
"gists_url": "https://api.github.com/users/abigailtech/gists{/gist_id}",
"starred_url": "https://api.github.com/users/abigailtech/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/abigailtech/subscriptions",
"organizations_url": "https://api.github.com/users/abigailtech/orgs",
"repos_url": "https://api.github.com/users/abigailtech/repos",
"events_url": "https://api.github.com/users/abigailtech/events{/privacy}",
"received_events_url": "https://api.github.com/users/abigailtech/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/7977/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/7977/timeline
| null | null | false
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/7977",
"html_url": "https://github.com/huggingface/datasets/pull/7977",
"diff_url": "https://github.com/huggingface/datasets/pull/7977.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/7977.patch",
"merged_at": null
}
| true
|
https://api.github.com/repos/huggingface/datasets/issues/7976
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/7976/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/7976/comments
|
https://api.github.com/repos/huggingface/datasets/issues/7976/events
|
https://github.com/huggingface/datasets/pull/7976
| 3,879,038,987
|
PR_kwDODunzps7Aj2hP
| 7,976
|
Write image/audio/video blobs as is in parquet (PLAIN)
|
{
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null |
[
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_7976). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update."
] | 2026-01-31T11:49:39
| 2026-02-03T20:03:48
| 2026-01-31T11:50:33
|
MEMBER
| null | null | null | null |
following #7971
|
{
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/7976/reactions",
"total_count": 1,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 1,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/7976/timeline
| null | null | false
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/7976",
"html_url": "https://github.com/huggingface/datasets/pull/7976",
"diff_url": "https://github.com/huggingface/datasets/pull/7976.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/7976.patch",
"merged_at": "2026-01-31T11:50:32"
}
| true
|
https://api.github.com/repos/huggingface/datasets/issues/7975
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/7975/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/7975/comments
|
https://api.github.com/repos/huggingface/datasets/issues/7975/events
|
https://github.com/huggingface/datasets/pull/7975
| 3,878,625,407
|
PR_kwDODunzps7AikAO
| 7,975
|
Docs: add Dataset.from_dict example
|
{
"login": "KOKOSde",
"id": 163377666,
"node_id": "U_kgDOCbzyAg",
"avatar_url": "https://avatars.githubusercontent.com/u/163377666?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/KOKOSde",
"html_url": "https://github.com/KOKOSde",
"followers_url": "https://api.github.com/users/KOKOSde/followers",
"following_url": "https://api.github.com/users/KOKOSde/following{/other_user}",
"gists_url": "https://api.github.com/users/KOKOSde/gists{/gist_id}",
"starred_url": "https://api.github.com/users/KOKOSde/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/KOKOSde/subscriptions",
"organizations_url": "https://api.github.com/users/KOKOSde/orgs",
"repos_url": "https://api.github.com/users/KOKOSde/repos",
"events_url": "https://api.github.com/users/KOKOSde/events{/privacy}",
"received_events_url": "https://api.github.com/users/KOKOSde/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
open
| false
| null |
[] | null |
[] | 2026-01-31T07:00:43
| 2026-02-05T05:50:11
| null |
NONE
| null | null | null | null |
Docs: add a minimal `Dataset.from_dict` example.
This helps new users discover the most direct way to build a small dataset from in-memory Python data.
Docs-only change.
| null |
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/7975/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/7975/timeline
| null | null | false
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/7975",
"html_url": "https://github.com/huggingface/datasets/pull/7975",
"diff_url": "https://github.com/huggingface/datasets/pull/7975.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/7975.patch",
"merged_at": null
}
| true
|
https://api.github.com/repos/huggingface/datasets/issues/7974
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/7974/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/7974/comments
|
https://api.github.com/repos/huggingface/datasets/issues/7974/events
|
https://github.com/huggingface/datasets/pull/7974
| 3,878,625,349
|
PR_kwDODunzps7Aij_g
| 7,974
|
Fix duplicate kwargs in load_dataset_builder
|
{
"login": "KOKOSde",
"id": 163377666,
"node_id": "U_kgDOCbzyAg",
"avatar_url": "https://avatars.githubusercontent.com/u/163377666?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/KOKOSde",
"html_url": "https://github.com/KOKOSde",
"followers_url": "https://api.github.com/users/KOKOSde/followers",
"following_url": "https://api.github.com/users/KOKOSde/following{/other_user}",
"gists_url": "https://api.github.com/users/KOKOSde/gists{/gist_id}",
"starred_url": "https://api.github.com/users/KOKOSde/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/KOKOSde/subscriptions",
"organizations_url": "https://api.github.com/users/KOKOSde/orgs",
"repos_url": "https://api.github.com/users/KOKOSde/repos",
"events_url": "https://api.github.com/users/KOKOSde/events{/privacy}",
"received_events_url": "https://api.github.com/users/KOKOSde/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
open
| false
| null |
[] | null |
[] | 2026-01-31T07:00:39
| 2026-02-05T05:49:31
| null |
NONE
| null | null | null | null |
Avoid passing duplicate keyword arguments to `load_dataset_builder`.
Some module factories provide values in `builder_kwargs` (e.g. `base_path`), and users can also pass the same keys via `config_kwargs`, which raises:
`TypeError: ... got multiple values for keyword argument ...`.
Fix: if `config_kwargs` is provided, drop overlapping keys from `builder_kwargs` (keep the user-provided values).
Includes a regression test.
| null |
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/7974/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/7974/timeline
| null | null | false
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/7974",
"html_url": "https://github.com/huggingface/datasets/pull/7974",
"diff_url": "https://github.com/huggingface/datasets/pull/7974.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/7974.patch",
"merged_at": null
}
| true
|
https://api.github.com/repos/huggingface/datasets/issues/7973
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/7973/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/7973/comments
|
https://api.github.com/repos/huggingface/datasets/issues/7973/events
|
https://github.com/huggingface/datasets/pull/7973
| 3,878,514,101
|
PR_kwDODunzps7AiMd8
| 7,973
|
Fix resolve_pattern for local symlinked files
|
{
"login": "KOKOSde",
"id": 163377666,
"node_id": "U_kgDOCbzyAg",
"avatar_url": "https://avatars.githubusercontent.com/u/163377666?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/KOKOSde",
"html_url": "https://github.com/KOKOSde",
"followers_url": "https://api.github.com/users/KOKOSde/followers",
"following_url": "https://api.github.com/users/KOKOSde/following{/other_user}",
"gists_url": "https://api.github.com/users/KOKOSde/gists{/gist_id}",
"starred_url": "https://api.github.com/users/KOKOSde/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/KOKOSde/subscriptions",
"organizations_url": "https://api.github.com/users/KOKOSde/orgs",
"repos_url": "https://api.github.com/users/KOKOSde/repos",
"events_url": "https://api.github.com/users/KOKOSde/events{/privacy}",
"received_events_url": "https://api.github.com/users/KOKOSde/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
open
| false
| null |
[] | null |
[] | 2026-01-31T06:04:51
| 2026-02-05T05:49:13
| null |
NONE
| null | null | null | null |
Fix `resolve_pattern` for *local symlinked files*.
Problem: on the local `file://` filesystem, `fsspec` can report symlinks as `type=="other"` and omit the `islink` flag, so symlinked files are skipped.
Fix: when `protocol=="file"`, treat `os.path.islink(filepath)` as a link candidate and include it if it resolves to a regular file.
Includes a regression test in `tests/test_data_files.py`.
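The described check can be sketched as follows (a self-contained sketch; `is_valid_local_file` is a hypothetical helper mirroring the fix, not the actual `resolve_pattern` code; requires a platform where `os.symlink` is permitted):

```python
import os
import tempfile


def is_valid_local_file(path):
    # On the local filesystem, treat os.path.islink() as a link candidate
    # and keep it only if it resolves to a regular file.
    if os.path.islink(path):
        return os.path.isfile(os.path.realpath(path))
    return os.path.isfile(path)


with tempfile.TemporaryDirectory() as d:
    real = os.path.join(d, "data.txt")
    link = os.path.join(d, "link.txt")
    dangling = os.path.join(d, "dangling.txt")
    with open(real, "w") as f:
        f.write("x")
    os.symlink(real, link)
    os.symlink(os.path.join(d, "missing.txt"), dangling)
    link_ok = is_valid_local_file(link)          # symlink to a regular file
    dangling_ok = is_valid_local_file(dangling)  # broken symlink

assert link_ok is True
assert dangling_ok is False
```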
| null |
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/7973/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/7973/timeline
| null | null | false
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/7973",
"html_url": "https://github.com/huggingface/datasets/pull/7973",
"diff_url": "https://github.com/huggingface/datasets/pull/7973.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/7973.patch",
"merged_at": null
}
| true
|
https://api.github.com/repos/huggingface/datasets/issues/7972
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/7972/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/7972/comments
|
https://api.github.com/repos/huggingface/datasets/issues/7972/events
|
https://github.com/huggingface/datasets/pull/7972
| 3,874,083,781
|
PR_kwDODunzps7AT07I
| 7,972
|
feat: implement iter_arrow for skip, take and step iterables
|
{
"login": "Edge-Explorer",
"id": 192764477,
"node_id": "U_kgDOC31aPQ",
"avatar_url": "https://avatars.githubusercontent.com/u/192764477?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Edge-Explorer",
"html_url": "https://github.com/Edge-Explorer",
"followers_url": "https://api.github.com/users/Edge-Explorer/followers",
"following_url": "https://api.github.com/users/Edge-Explorer/following{/other_user}",
"gists_url": "https://api.github.com/users/Edge-Explorer/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Edge-Explorer/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Edge-Explorer/subscriptions",
"organizations_url": "https://api.github.com/users/Edge-Explorer/orgs",
"repos_url": "https://api.github.com/users/Edge-Explorer/repos",
"events_url": "https://api.github.com/users/Edge-Explorer/events{/privacy}",
"received_events_url": "https://api.github.com/users/Edge-Explorer/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
open
| false
| null |
[] | null |
[] | 2026-01-30T05:47:13
| 2026-01-30T05:54:15
| null |
CONTRIBUTOR
| null | null | null | null |
This commit optimizes streaming operations by implementing `_iter_arrow` for `SkipExamplesIterable`, `TakeExamplesIterable`, and `StepExamplesIterable`.
### Key Changes:
- **Fast Batch Processing**: Enabled batch-level slicing for `.skip(n)` and `.take(n)` on streaming datasets, bypassing slow row-by-row iteration.
- **Optimized Sharding**: Updated `StepExamplesIterable` (used in distributed training) to use Arrow's `.take()` to extract multiple records from a batch simultaneously.
- **State Preservation**: Reinforced `_init_state_dict` and `load_state_dict` to support flawless checkpointing and resumption while using Arrow iteration.
### Performance Impact:
Users will experience significant performance gains when skipping or taking examples in streaming mode. By staying in the "Arrow path" and avoiding Python dictionary conversions, data loading overhead is drastically reduced, especially for large-scale training jobs.
### Testing:
Integrated 6 new unit tests into `tests/test_iterable_dataset.py` to verify:
- Functional correctness for `skip`, `take`, and `step` using Arrow iteration.
- Reliable state checkpointing and resumption after partial iteration.
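The batch-level slicing idea behind the `.skip(n)` / `.take(n)` fast path can be sketched in plain Python (this is an illustration of the technique, not the actual library code; real batches are Arrow record batches, shown here as lists):

```python
# Illustrative sketch: skip/take operate on whole batches and only
# slice at the boundary batch, instead of iterating row by row.
def skip_batches(batches, n):
    """Yield batches (lists of rows) with the first n rows skipped."""
    remaining = n
    for batch in batches:
        if remaining >= len(batch):
            remaining -= len(batch)  # drop the whole batch cheaply
            continue
        yield batch[remaining:] if remaining else batch
        remaining = 0

def take_batches(batches, n):
    """Yield batches until n rows have been produced."""
    remaining = n
    for batch in batches:
        if remaining <= 0:
            return
        yield batch[:remaining] if remaining < len(batch) else batch
        remaining -= len(batch)

batches = [[0, 1, 2], [3, 4, 5], [6, 7, 8]]
print(list(skip_batches(batches, 4)))  # [[4, 5], [6, 7, 8]]
print(list(take_batches(batches, 5)))  # [[0, 1, 2], [3, 4]]
```

Staying at the batch level is what lets the Arrow path avoid per-row Python dictionary conversions.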
| null |
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/7972/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/7972/timeline
| null | null | false
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/7972",
"html_url": "https://github.com/huggingface/datasets/pull/7972",
"diff_url": "https://github.com/huggingface/datasets/pull/7972.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/7972.patch",
"merged_at": null
}
| true
|
https://api.github.com/repos/huggingface/datasets/issues/7971
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/7971/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/7971/comments
|
https://api.github.com/repos/huggingface/datasets/issues/7971/events
|
https://github.com/huggingface/datasets/pull/7971
| 3,871,984,311
|
PR_kwDODunzps7AMzLl
| 7,971
|
push_to_hub() for videos
|
{
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null |
[
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_7971). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update."
] | 2026-01-29T18:16:58
| 2026-01-31T11:50:25
| 2026-01-29T18:56:04
|
MEMBER
| null | null | null | null |
possible now that row group sizes are auto-determined based on the content size after https://github.com/huggingface/datasets/pull/7589
Videos are uploaded as PLAIN in Parquet to make sure they can be seeked remotely and with random access to frames in https://github.com/huggingface/datasets/pull/7976
In the future it could be cool to have the same behavior as when videos are separate files, i.e. lazily load them instead of downloading them completely in streaming mode.
Right now there is this discrepancy:
- `load_dataset("username/my-folder-of-videos", streaming=True)` -> videos are lazy loaded one by one when iterating, and only actually downloaded when accessing frames in `torchcodec`
- `load_dataset("username/my-video-dataset-in-parquet", streaming=True)` -> videos are downloaded one by one when iterating, even if no frame is accessed in `torchcodec`
close https://github.com/huggingface/datasets/issues/7493
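The lazy behavior described for the folder-of-videos case could be sketched like this (hypothetical class and attribute names, not the library API):

```python
class LazyVideo:
    """Defers the expensive download until frames are actually accessed."""
    def __init__(self, url, fetch):
        self.url = url
        self._fetch = fetch   # callable performing the actual download
        self._data = None

    @property
    def frames(self):
        if self._data is None:          # download only on first access
            self._data = self._fetch(self.url)
        return self._data

downloads = []
video = LazyVideo("ex.mp4", lambda u: downloads.append(u) or b"bytes")
assert downloads == []           # constructing/iterating did not download
_ = video.frames
assert downloads == ["ex.mp4"]   # download happened on frame access
```

With this kind of wrapper, iterating a streaming dataset would not download any video bytes until frames are requested, matching the folder-of-videos behavior.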
|
{
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/7971/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/7971/timeline
| null | null | false
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/7971",
"html_url": "https://github.com/huggingface/datasets/pull/7971",
"diff_url": "https://github.com/huggingface/datasets/pull/7971.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/7971.patch",
"merged_at": "2026-01-29T18:56:04"
}
| true
|
https://api.github.com/repos/huggingface/datasets/issues/7970
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/7970/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/7970/comments
|
https://api.github.com/repos/huggingface/datasets/issues/7970/events
|
https://github.com/huggingface/datasets/issues/7970
| 3,869,700,866
|
I_kwDODunzps7mpvMC
| 7,970
|
cast_column(..., Audio) fails with load_dataset("csv",)
|
{
"login": "jstangroome",
"id": 148754,
"node_id": "MDQ6VXNlcjE0ODc1NA==",
"avatar_url": "https://avatars.githubusercontent.com/u/148754?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jstangroome",
"html_url": "https://github.com/jstangroome",
"followers_url": "https://api.github.com/users/jstangroome/followers",
"following_url": "https://api.github.com/users/jstangroome/following{/other_user}",
"gists_url": "https://api.github.com/users/jstangroome/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jstangroome/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jstangroome/subscriptions",
"organizations_url": "https://api.github.com/users/jstangroome/orgs",
"repos_url": "https://api.github.com/users/jstangroome/repos",
"events_url": "https://api.github.com/users/jstangroome/events{/privacy}",
"received_events_url": "https://api.github.com/users/jstangroome/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
open
| false
| null |
[] | null |
[
"The following code *does* work:\n```py\nfrom datasets import load_dataset,Audio,Features\n\ndataset = load_dataset(\"csv\",data_files=\"audio.csv\",features=Features({\"audio\": Audio()}))\nprint(dataset[\"train\"][0][\"audio\"])\n```",
"Thanks for reporing ! Are you using pandas v3 by any chance ? The CSV loader uses pandas and this release is brand new and might have caused a breaking change",
"pandas 3.0.0 was present but I've also reproduced the issue with pandas 2.3.3."
] | 2026-01-29T09:33:35
| 2026-01-29T22:24:14
| null |
NONE
| null | null |
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
|
{
"blocked_by": 0,
"total_blocked_by": 0,
"blocking": 0,
"total_blocking": 0
}
|
### Describe the bug
Attempting to load a dataset from a CSV with a single `audio` column containing a path to an audio file fails when casting the column to `Audio`, while the exact same dataset created from a dictionary succeeds.
### Steps to reproduce the bug
1. Have any valid audio file `audio.wav`
2. Have a csv file named `audio.csv` with the following content:
```csv
"audio"
"audio.wav"
```
3. Attempt to execute the following python code:
```py
from datasets import load_dataset,Audio,Dataset
dataset = Dataset.from_dict({"audio": ["audio.wav"]})
dataset = dataset.cast_column("audio", Audio())
print(dataset[0]["audio"])
# ^^ succeeds with output: <datasets.features._torchcodec.AudioDecoder object at 0x7a32b341a3c0>
dataset = load_dataset("csv", data_files="audio.csv")
dataset = dataset.cast_column("audio", Audio())
# ^^ errors and terminates
print(dataset[0]["audio"])
```
The error is:
```pytb
Traceback (most recent call last):
File "~/datasets-bug/explore.py", line 8, in <module>
dataset = dataset.cast_column("audio", Audio(sampling_rate=24000))
File "~/datasets-bug/.venv/lib/python3.14/site-packages/datasets/dataset_dict.py", line 337, in cast_column
return DatasetDict({k: dataset.cast_column(column=column, feature=feature) for k, dataset in self.items()})
~~~~~~~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "~/datasets-bug/.venv/lib/python3.14/site-packages/datasets/fingerprint.py", line 468, in wrapper
out = func(dataset, *args, **kwargs)
File "~/datasets-bug/.venv/lib/python3.14/site-packages/datasets/arrow_dataset.py", line 2201, in cast_column
dataset._data = dataset._data.cast(dataset.features.arrow_schema)
~~~~~~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "~/datasets-bug/.venv/lib/python3.14/site-packages/datasets/table.py", line 1124, in cast
return MemoryMappedTable(table_cast(self.table, *args, **kwargs), self.path, replays)
~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "~/datasets-bug/.venv/lib/python3.14/site-packages/datasets/table.py", line 2272, in table_cast
return cast_table_to_schema(table, schema)
File "~/datasets-bug/.venv/lib/python3.14/site-packages/datasets/table.py", line 2224, in cast_table_to_schema
cast_array_to_feature(
~~~~~~~~~~~~~~~~~~~~~^
table[name] if name in table_column_names else pa.array([None] * len(table), type=schema.field(name).type),
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
feature,
^^^^^^^^
)
^
File "~/datasets-bug/.venv/lib/python3.14/site-packages/datasets/table.py", line 1795, in wrapper
return pa.chunked_array([func(chunk, *args, **kwargs) for chunk in array.chunks])
~~~~^^^^^^^^^^^^^^^^^^^^^^^^
File "~/datasets-bug/.venv/lib/python3.14/site-packages/datasets/table.py", line 1995, in cast_array_to_feature
return feature.cast_storage(array)
~~~~~~~~~~~~~~~~~~~~^^^^^^^
File "~/datasets-bug/.venv/lib/python3.14/site-packages/datasets/features/audio.py", line 272, in cast_storage
return array_cast(storage, self.pa_type)
File "~/datasets-bug/.venv/lib/python3.14/site-packages/datasets/table.py", line 1797, in wrapper
return func(array, *args, **kwargs)
File "~/datasets-bug/.venv/lib/python3.14/site-packages/datasets/table.py", line 1949, in array_cast
return array.cast(pa_type)
~~~~~~~~~~^^^^^^^^^
File "pyarrow/array.pxi", line 1147, in pyarrow.lib.Array.cast
File "~/datasets-bug/.venv/lib/python3.14/site-packages/pyarrow/compute.py", line 412, in cast
return call_function("cast", [arr], options, memory_pool)
File "pyarrow/_compute.pyx", line 604, in pyarrow._compute.call_function
File "pyarrow/_compute.pyx", line 399, in pyarrow._compute.Function.call
File "pyarrow/error.pxi", line 155, in pyarrow.lib.pyarrow_internal_check_status
File "pyarrow/error.pxi", line 92, in pyarrow.lib.check_status
pyarrow.lib.ArrowNotImplementedError: Unsupported cast from large_string to struct using function cast_struct
```
### Expected behavior
The audio column with file paths loaded from a csv can be converted to AudioDecoder objects the same as an identical dataset created from a dict.
### Environment info
datasets 4.3.0 and 4.5.0, Ubuntu 24.04 amd64, python 3.13.11 and 3.14.2
| null |
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/7970/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/7970/timeline
| null | null | null | null | false
|
https://api.github.com/repos/huggingface/datasets/issues/7969
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/7969/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/7969/comments
|
https://api.github.com/repos/huggingface/datasets/issues/7969/events
|
https://github.com/huggingface/datasets/pull/7969
| 3,865,100,307
|
PR_kwDODunzps6_1p9H
| 7,969
|
Count examples in lance
|
{
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null |
[
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_7969). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update."
] | 2026-01-28T12:00:37
| 2026-01-28T13:00:26
| 2026-01-28T13:00:23
|
MEMBER
| null | null | null | null |
```python
In [1]: from datasets import load_dataset_builder, StreamingDownloadManager
In [2]: b = load_dataset_builder("lance-format/openvid-lance")
Resolving data files: 100%|█| 240/240 [00:00<00:00, 42675.64it/s
In [3]: b.count_examples(StreamingDownloadManager())
Out[3]: {'train': 937957}
```
|
{
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/7969/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/7969/timeline
| null | null | false
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/7969",
"html_url": "https://github.com/huggingface/datasets/pull/7969",
"diff_url": "https://github.com/huggingface/datasets/pull/7969.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/7969.patch",
"merged_at": "2026-01-28T13:00:23"
}
| true
|
https://api.github.com/repos/huggingface/datasets/issues/7968
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/7968/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/7968/comments
|
https://api.github.com/repos/huggingface/datasets/issues/7968/events
|
https://github.com/huggingface/datasets/issues/7968
| 3,864,988,355
|
I_kwDODunzps7mXwrD
| 7,968
|
Potential conflicting type checks and dead code in `/src/datasets/table.py`
|
{
"login": "rc4typecheck",
"id": 243496043,
"node_id": "U_kgDODoN0aw",
"avatar_url": "https://avatars.githubusercontent.com/u/243496043?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/rc4typecheck",
"html_url": "https://github.com/rc4typecheck",
"followers_url": "https://api.github.com/users/rc4typecheck/followers",
"following_url": "https://api.github.com/users/rc4typecheck/following{/other_user}",
"gists_url": "https://api.github.com/users/rc4typecheck/gists{/gist_id}",
"starred_url": "https://api.github.com/users/rc4typecheck/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/rc4typecheck/subscriptions",
"organizations_url": "https://api.github.com/users/rc4typecheck/orgs",
"repos_url": "https://api.github.com/users/rc4typecheck/repos",
"events_url": "https://api.github.com/users/rc4typecheck/events{/privacy}",
"received_events_url": "https://api.github.com/users/rc4typecheck/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
open
| false
| null |
[] | null |
[
"ConcatenationTable is a subclass of datasets.table.Table but not pa.Table, so it should be fine"
] | 2026-01-28T11:34:53
| 2026-01-28T13:05:28
| null |
NONE
| null | null |
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
|
{
"blocked_by": 0,
"total_blocked_by": 0,
"blocking": 0,
"total_blocking": 0
}
|
When statically analyzing and manually reviewing the code, I noticed a potential logic conflict in `/src/datasets/table.py`:
```python
def to_blocks(table: Union[pa.Table, Table]) -> list[list[TableBlock]]:
if isinstance(table, pa.Table):
return [[InMemoryTable(table)]]
elif isinstance(table, ConcatenationTable): # dead code
return copy.deepcopy(table.blocks)
else:
return [[table]]
```
Within the function, the condition `isinstance(table, ConcatenationTable)` at line 4 will never be True because the previous condition `isinstance(table, pa.Table)` at line 2 would have already caught all instances of `ConcatenationTable` (since `ConcatenationTable` is a subtype of `pa.Table`). This creates a logical conflict in the type checking flow.
Please verify if this logic is intentional or it is an issue warranting a refactoring or fixing.
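The ordering concern can be illustrated with hypothetical classes (not the actual `datasets` classes; note that per the reply in this thread, `ConcatenationTable` is in fact not a `pa.Table` subclass, in which case the branch is reachable):

```python
class Base: ...
class Special(Base): ...

def dispatch(obj):
    # If Special is a subclass of Base, this first branch catches
    # Special instances too, making the elif below dead code.
    if isinstance(obj, Base):
        return "base"
    elif isinstance(obj, Special):
        return "special"   # unreachable: every Special is also a Base
    return "other"

assert dispatch(Special()) == "base"   # the elif never fires

def dispatch_fixed(obj):
    if isinstance(obj, Special):       # most specific check first
        return "special"
    elif isinstance(obj, Base):
        return "base"
    return "other"

assert dispatch_fixed(Special()) == "special"
```

So the question reduces to whether `ConcatenationTable` actually inherits from `pa.Table`; if it does not, the original check order is fine.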
| null |
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/7968/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/7968/timeline
| null | null | null | null | false
|
https://api.github.com/repos/huggingface/datasets/issues/7967
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/7967/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/7967/comments
|
https://api.github.com/repos/huggingface/datasets/issues/7967/events
|
https://github.com/huggingface/datasets/pull/7967
| 3,863,579,646
|
PR_kwDODunzps6_wnm8
| 7,967
|
Issue 7756 Fix - multiprocessing hang issue with start method check
|
{
"login": "vedanta777",
"id": 218264809,
"node_id": "U_kgDODQJ06Q",
"avatar_url": "https://avatars.githubusercontent.com/u/218264809?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/vedanta777",
"html_url": "https://github.com/vedanta777",
"followers_url": "https://api.github.com/users/vedanta777/followers",
"following_url": "https://api.github.com/users/vedanta777/following{/other_user}",
"gists_url": "https://api.github.com/users/vedanta777/gists{/gist_id}",
"starred_url": "https://api.github.com/users/vedanta777/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/vedanta777/subscriptions",
"organizations_url": "https://api.github.com/users/vedanta777/orgs",
"repos_url": "https://api.github.com/users/vedanta777/repos",
"events_url": "https://api.github.com/users/vedanta777/events{/privacy}",
"received_events_url": "https://api.github.com/users/vedanta777/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
open
| false
| null |
[] | null |
[] | 2026-01-28T05:02:20
| 2026-01-31T18:26:03
| null |
NONE
| null | null | null | null |
Added a fix to prevent multiprocessing hangs by checking the start method.
When a problematic multiprocessing start method is detected, the code falls back to a safe one.
https://github.com/huggingface/datasets/issues/7756
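A minimal sketch of the idea, assuming the fix checks the configured start method and falls back when the preferred one is unavailable (the function name and fallback policy here are illustrative, not the PR's actual code):

```python
import multiprocessing

def safe_start_method(preferred: str = "fork") -> str:
    """Return a usable multiprocessing start method.

    Falls back to "spawn" (available on all platforms) when the
    preferred method is not supported, e.g. "fork" on Windows.
    """
    available = multiprocessing.get_all_start_methods()
    return preferred if preferred in available else "spawn"

# Build a context with the validated method instead of hanging
# on an incompatible default.
ctx = multiprocessing.get_context(safe_start_method())
```

Using `get_context()` keeps the choice local instead of mutating the global start method with `set_start_method()`.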
| null |
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/7967/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/7967/timeline
| null | null | false
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/7967",
"html_url": "https://github.com/huggingface/datasets/pull/7967",
"diff_url": "https://github.com/huggingface/datasets/pull/7967.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/7967.patch",
"merged_at": null
}
| true
|
https://api.github.com/repos/huggingface/datasets/issues/7966
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/7966/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/7966/comments
|
https://api.github.com/repos/huggingface/datasets/issues/7966/events
|
https://github.com/huggingface/datasets/pull/7966
| 3,861,774,379
|
PR_kwDODunzps6_qp_e
| 7,966
|
Infer types from lance blobs
|
{
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null |
[
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_7966). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update."
] | 2026-01-27T18:00:25
| 2026-01-28T13:02:25
| 2026-01-28T13:02:23
|
MEMBER
| null | null | null | null |
Ex: infer Video() type in https://huggingface.co/datasets/lance-format/openvid-lance and Image() type in https://huggingface.co/datasets/lance-format/laion-1m
```python
from datasets import load_dataset
ds = load_dataset("lance-format/laion-1m", streaming=True, split="train")
print(ds.features["image"])
# Image()
ds = load_dataset("lance-format/openvid-lance", streaming=True, split="train")
print(ds.features["video_blob"])
# Video()
```
|
{
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/7966/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/7966/timeline
| null | null | false
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/7966",
"html_url": "https://github.com/huggingface/datasets/pull/7966",
"diff_url": "https://github.com/huggingface/datasets/pull/7966.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/7966.patch",
"merged_at": "2026-01-28T13:02:22"
}
| true
|
https://api.github.com/repos/huggingface/datasets/issues/7965
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/7965/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/7965/comments
|
https://api.github.com/repos/huggingface/datasets/issues/7965/events
|
https://github.com/huggingface/datasets/issues/7965
| 3,858,483,549
|
I_kwDODunzps7l-8ld
| 7,965
|
`huggingface_hub.errors.HfHubHTTPError: 404 Client Error: Not Found for url` when fetching a dataset with `datasets.load_dataset`
|
{
"login": "harupy",
"id": 17039389,
"node_id": "MDQ6VXNlcjE3MDM5Mzg5",
"avatar_url": "https://avatars.githubusercontent.com/u/17039389?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/harupy",
"html_url": "https://github.com/harupy",
"followers_url": "https://api.github.com/users/harupy/followers",
"following_url": "https://api.github.com/users/harupy/following{/other_user}",
"gists_url": "https://api.github.com/users/harupy/gists{/gist_id}",
"starred_url": "https://api.github.com/users/harupy/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/harupy/subscriptions",
"organizations_url": "https://api.github.com/users/harupy/orgs",
"repos_url": "https://api.github.com/users/harupy/repos",
"events_url": "https://api.github.com/users/harupy/events{/privacy}",
"received_events_url": "https://api.github.com/users/harupy/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null |
[
"Hi ! Yes you should use `cornell-movie-review-data/rotten_tomatoes` instead of `rotten_tomatoes`, which is the legacy name. Those datasets have been moved under their actual owners accounts some time ago (but we were keeping the old names as aliases)\n\nSome other impacted names are:\n- `imdb` -> `stanfordnlp/imdb`\n- `wikitext` -> `Salesforce/wikitext`\n- `gsm8k` -> `openai/gsm8k`\n- `winogrande` -> `allenai/winogrande`\n\nWe're working on re-enabling them as aliases for backward compatibility. I'll post updates here, sorry for the inconvenience.\n\n**Using the actual name instead of the old legacy name is more future proof though**",
"Thanks for the heads up @lhoestq ! fyi, this change is likely breaking a lot of repos that have legacy names hardcoded ([example](https://github.com/allenai/olmes/pull/40)) Would be helpful to many to share this update in a more visible way if it is likely to persist for a while.",
"[internal tracking link](https://github.com/huggingface-internal/moon-landing/pull/16539)",
"@lhoestq Thanks for clarifying!",
"The aliases are re-enabled :)",
"Thanks!",
"Can I close this issue?",
"Yep :)"
] | 2026-01-27T02:20:31
| 2026-01-28T15:14:50
| 2026-01-28T15:14:50
|
NONE
| null | null |
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
|
{
"blocked_by": 0,
"total_blocked_by": 0,
"blocking": 0,
"total_blocking": 0
}
|
Not a bug but a question. We started getting the following error:
https://github.com/mlflow/mlflow/actions/runs/21368603305/job/61506951617
```
tests/data/test_huggingface_dataset_and_source.py::test_from_huggingface_dataset_constructs_expected_dataset_with_revision - huggingface_hub.errors.HfHubHTTPError: 404 Client Error: Not Found for url: https://huggingface.co/api/datasets/rotten_tomatoes/revision/aa13bc287fa6fcab6daf52f0dfb9994269ffea28 (Request ID: Root=1-6977aeca-35bc2b5b605884926a9224d0;aa2391f3-26e8-4975-b9bb-114b2fa40223)
```
Adding a user id fixed the issue (https://github.com/mlflow/mlflow/pull/20350). `https://huggingface.co/api/datasets` no longer accepts a name-only path like `rotten_tomatoes`? Just wondering what changed. Thanks!
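The renames discussed in this thread can be handled on the client side with a small alias map (illustrative only; the Hub later re-enabled these aliases server-side):

```python
# Legacy dataset names mapped to their canonical owner-qualified names,
# as listed in the maintainer's reply in this thread.
LEGACY_ALIASES = {
    "rotten_tomatoes": "cornell-movie-review-data/rotten_tomatoes",
    "imdb": "stanfordnlp/imdb",
    "wikitext": "Salesforce/wikitext",
    "gsm8k": "openai/gsm8k",
    "winogrande": "allenai/winogrande",
}

def canonical_name(name: str) -> str:
    """Return the owner-qualified name for known legacy names."""
    return LEGACY_ALIASES.get(name, name)

print(canonical_name("rotten_tomatoes"))
# cornell-movie-review-data/rotten_tomatoes
```

Using the owner-qualified name directly is the more future-proof option either way.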
|
{
"login": "harupy",
"id": 17039389,
"node_id": "MDQ6VXNlcjE3MDM5Mzg5",
"avatar_url": "https://avatars.githubusercontent.com/u/17039389?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/harupy",
"html_url": "https://github.com/harupy",
"followers_url": "https://api.github.com/users/harupy/followers",
"following_url": "https://api.github.com/users/harupy/following{/other_user}",
"gists_url": "https://api.github.com/users/harupy/gists{/gist_id}",
"starred_url": "https://api.github.com/users/harupy/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/harupy/subscriptions",
"organizations_url": "https://api.github.com/users/harupy/orgs",
"repos_url": "https://api.github.com/users/harupy/repos",
"events_url": "https://api.github.com/users/harupy/events{/privacy}",
"received_events_url": "https://api.github.com/users/harupy/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/7965/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/7965/timeline
| null |
completed
| null | null | false
|
https://api.github.com/repos/huggingface/datasets/issues/7964
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/7964/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/7964/comments
|
https://api.github.com/repos/huggingface/datasets/issues/7964/events
|
https://github.com/huggingface/datasets/pull/7964
| 3,858,025,706
|
PR_kwDODunzps6_eOZR
| 7,964
|
handle blob lance
|
{
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null |
[
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_7964). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update."
] | 2026-01-26T22:56:24
| 2026-01-26T22:59:18
| 2026-01-26T22:56:38
|
MEMBER
| null | null | null | null |
following #7913
|
{
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/7964/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/7964/timeline
| null | null | false
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/7964",
"html_url": "https://github.com/huggingface/datasets/pull/7964",
"diff_url": "https://github.com/huggingface/datasets/pull/7964.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/7964.patch",
"merged_at": "2026-01-26T22:56:38"
}
| true
|
https://api.github.com/repos/huggingface/datasets/issues/7963
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/7963/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/7963/comments
|
https://api.github.com/repos/huggingface/datasets/issues/7963/events
|
https://github.com/huggingface/datasets/pull/7963
| 3,856,921,322
|
PR_kwDODunzps6_ajTD
| 7,963
|
Support null in json string cols
|
{
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null |
[
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_7963). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update."
] | 2026-01-26T17:31:55
| 2026-01-26T17:48:46
| 2026-01-26T17:48:44
|
MEMBER
| null | null | null | null |
fix for https://huggingface.co/datasets/arcprize/arc_agi_v2_public_eval
|
{
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/7963/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/7963/timeline
| null | null | false
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/7963",
"html_url": "https://github.com/huggingface/datasets/pull/7963",
"diff_url": "https://github.com/huggingface/datasets/pull/7963.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/7963.patch",
"merged_at": "2026-01-26T17:48:44"
}
| true
|
https://api.github.com/repos/huggingface/datasets/issues/7962
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/7962/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/7962/comments
|
https://api.github.com/repos/huggingface/datasets/issues/7962/events
|
https://github.com/huggingface/datasets/pull/7962
| 3,856,811,005
|
PR_kwDODunzps6_aM71
| 7,962
|
Use Sequence instead of list in Dataset.from_parquet type hints
|
{
"login": "Mukundtimbadiya20",
"id": 142491113,
"node_id": "U_kgDOCH496Q",
"avatar_url": "https://avatars.githubusercontent.com/u/142491113?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Mukundtimbadiya20",
"html_url": "https://github.com/Mukundtimbadiya20",
"followers_url": "https://api.github.com/users/Mukundtimbadiya20/followers",
"following_url": "https://api.github.com/users/Mukundtimbadiya20/following{/other_user}",
"gists_url": "https://api.github.com/users/Mukundtimbadiya20/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Mukundtimbadiya20/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Mukundtimbadiya20/subscriptions",
"organizations_url": "https://api.github.com/users/Mukundtimbadiya20/orgs",
"repos_url": "https://api.github.com/users/Mukundtimbadiya20/repos",
"events_url": "https://api.github.com/users/Mukundtimbadiya20/events{/privacy}",
"received_events_url": "https://api.github.com/users/Mukundtimbadiya20/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
open
| false
| null |
[] | null |
[
"Thank you for the review!\r\n\r\nI’ve updated the implementation to:\r\n- Use Sequence from collections.abc as per project conventions\r\n- Restore backward compatibility with Union[PathLike, Sequence[PathLike]]\r\n- Keep the columns annotation as Optional[Sequence[str]]\r\n\r\nThe fixes are pushed. Please let me know if anything else is needed.\r\n\r\n"
] | 2026-01-26T17:01:47
| 2026-02-04T06:47:52
| null |
NONE
| null | null | null | null |
This PR updates type annotations in Dataset.from_parquet to use Sequence instead of list to avoid mypy invariant type issues, as discussed in issue #5354.
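A minimal illustration of the invariance issue motivating this change (hypothetical function name, not the actual `Dataset.from_parquet` signature):

```python
from collections.abc import Sequence


def count_columns(columns: Sequence[str]) -> int:
    # Annotating the parameter as Sequence[str] lets callers pass a list,
    # a tuple, or any other sequence of strings. Annotating it as
    # list[str] would make mypy reject a tuple argument, because list is
    # invariant in its element type.
    return len(columns)
```

A tuple such as `("a", "b")` satisfies `Sequence[str]` but would fail a `list[str]` annotation under mypy.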
| null |
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/7962/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/7962/timeline
| null | null | false
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/7962",
"html_url": "https://github.com/huggingface/datasets/pull/7962",
"diff_url": "https://github.com/huggingface/datasets/pull/7962.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/7962.patch",
"merged_at": null
}
| true
|
https://api.github.com/repos/huggingface/datasets/issues/7961
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/7961/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/7961/comments
|
https://api.github.com/repos/huggingface/datasets/issues/7961/events
|
https://github.com/huggingface/datasets/pull/7961
| 3,847,883,164
|
PR_kwDODunzps6-9GHz
| 7,961
|
Revert "feat: avoid some copies in torch formatter (#7787)"
|
{
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null |
[
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_7961). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update."
] | 2026-01-23T15:13:01
| 2026-01-23T15:16:01
| 2026-01-23T15:15:33
|
MEMBER
| null | null | null | null |
This reverts commit c412a6f5a50955e141c5169bf7abe005d10228d2 (I assume it was AI-generated, which makes it hard for me to review and rule out bad edge cases, but let me know if it wasn't; in any case, it didn't take into account the torch kwargs, which are responsible for sending the data to the correct device)
|
{
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/7961/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/7961/timeline
| null | null | false
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/7961",
"html_url": "https://github.com/huggingface/datasets/pull/7961",
"diff_url": "https://github.com/huggingface/datasets/pull/7961.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/7961.patch",
"merged_at": "2026-01-23T15:15:33"
}
| true
|
https://api.github.com/repos/huggingface/datasets/issues/7960
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/7960/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/7960/comments
|
https://api.github.com/repos/huggingface/datasets/issues/7960/events
|
https://github.com/huggingface/datasets/pull/7960
| 3,847,601,199
|
PR_kwDODunzps6-8Iak
| 7,960
|
docs: fix grammar and add type hints in splits.py
|
{
"login": "Edge-Explorer",
"id": 192764477,
"node_id": "U_kgDOC31aPQ",
"avatar_url": "https://avatars.githubusercontent.com/u/192764477?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Edge-Explorer",
"html_url": "https://github.com/Edge-Explorer",
"followers_url": "https://api.github.com/users/Edge-Explorer/followers",
"following_url": "https://api.github.com/users/Edge-Explorer/following{/other_user}",
"gists_url": "https://api.github.com/users/Edge-Explorer/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Edge-Explorer/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Edge-Explorer/subscriptions",
"organizations_url": "https://api.github.com/users/Edge-Explorer/orgs",
"repos_url": "https://api.github.com/users/Edge-Explorer/repos",
"events_url": "https://api.github.com/users/Edge-Explorer/events{/privacy}",
"received_events_url": "https://api.github.com/users/Edge-Explorer/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null |
[] | 2026-01-23T14:05:12
| 2026-01-30T05:14:00
| 2026-01-23T16:04:41
|
CONTRIBUTOR
| null | null | null | null |
This PR improves the documentation in `src/datasets/splits.py` by:
- Fixing pluralization/grammar errors in docstrings (Lines 62, 73, 403).
- Adding Python type hints to the `NamedSplit` constructor for better code quality.
Verified with `ruff format` and `ruff check`.
|
{
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/7960/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/7960/timeline
| null | null | false
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/7960",
"html_url": "https://github.com/huggingface/datasets/pull/7960",
"diff_url": "https://github.com/huggingface/datasets/pull/7960.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/7960.patch",
"merged_at": "2026-01-23T16:04:40"
}
| true
|
https://api.github.com/repos/huggingface/datasets/issues/7959
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/7959/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/7959/comments
|
https://api.github.com/repos/huggingface/datasets/issues/7959/events
|
https://github.com/huggingface/datasets/pull/7959
| 3,847,579,785
|
PR_kwDODunzps6-8DsQ
| 7,959
|
docs: fix typo in arrow_dataset.py comment
|
{
"login": "Edge-Explorer",
"id": 192764477,
"node_id": "U_kgDOC31aPQ",
"avatar_url": "https://avatars.githubusercontent.com/u/192764477?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Edge-Explorer",
"html_url": "https://github.com/Edge-Explorer",
"followers_url": "https://api.github.com/users/Edge-Explorer/followers",
"following_url": "https://api.github.com/users/Edge-Explorer/following{/other_user}",
"gists_url": "https://api.github.com/users/Edge-Explorer/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Edge-Explorer/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Edge-Explorer/subscriptions",
"organizations_url": "https://api.github.com/users/Edge-Explorer/orgs",
"repos_url": "https://api.github.com/users/Edge-Explorer/repos",
"events_url": "https://api.github.com/users/Edge-Explorer/events{/privacy}",
"received_events_url": "https://api.github.com/users/Edge-Explorer/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null |
[] | 2026-01-23T14:00:08
| 2026-01-23T14:14:57
| 2026-01-23T14:14:57
|
CONTRIBUTOR
| null | null | null | null | null |
{
"login": "Edge-Explorer",
"id": 192764477,
"node_id": "U_kgDOC31aPQ",
"avatar_url": "https://avatars.githubusercontent.com/u/192764477?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Edge-Explorer",
"html_url": "https://github.com/Edge-Explorer",
"followers_url": "https://api.github.com/users/Edge-Explorer/followers",
"following_url": "https://api.github.com/users/Edge-Explorer/following{/other_user}",
"gists_url": "https://api.github.com/users/Edge-Explorer/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Edge-Explorer/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Edge-Explorer/subscriptions",
"organizations_url": "https://api.github.com/users/Edge-Explorer/orgs",
"repos_url": "https://api.github.com/users/Edge-Explorer/repos",
"events_url": "https://api.github.com/users/Edge-Explorer/events{/privacy}",
"received_events_url": "https://api.github.com/users/Edge-Explorer/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/7959/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/7959/timeline
| null | null | false
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/7959",
"html_url": "https://github.com/huggingface/datasets/pull/7959",
"diff_url": "https://github.com/huggingface/datasets/pull/7959.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/7959.patch",
"merged_at": null
}
| true
|
https://api.github.com/repos/huggingface/datasets/issues/7958
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/7958/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/7958/comments
|
https://api.github.com/repos/huggingface/datasets/issues/7958/events
|
https://github.com/huggingface/datasets/issues/7958
| 3,847,184,392
|
I_kwDODunzps7lT2AI
| 7,958
|
[CUDA Tensors Not working in ~v4.5.0] set_format(type="torch", device="cuda") returns cpu
|
{
"login": "ai-nikolai",
"id": 9797804,
"node_id": "MDQ6VXNlcjk3OTc4MDQ=",
"avatar_url": "https://avatars.githubusercontent.com/u/9797804?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ai-nikolai",
"html_url": "https://github.com/ai-nikolai",
"followers_url": "https://api.github.com/users/ai-nikolai/followers",
"following_url": "https://api.github.com/users/ai-nikolai/following{/other_user}",
"gists_url": "https://api.github.com/users/ai-nikolai/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ai-nikolai/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ai-nikolai/subscriptions",
"organizations_url": "https://api.github.com/users/ai-nikolai/orgs",
"repos_url": "https://api.github.com/users/ai-nikolai/repos",
"events_url": "https://api.github.com/users/ai-nikolai/events{/privacy}",
"received_events_url": "https://api.github.com/users/ai-nikolai/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
open
| false
| null |
[] | null |
[
"@lhoestq tagging you here as you were on the previous issue, hope that's fine. ",
"Thanks for reporting, let me take a look",
"I reverted this change which caused the issue #7961 , I'll do a new release soon but in the meantime feel free to install `datasets` from source",
"@lhoestq thanks a lot. I am actually checking older versions of datasets and it seems that it doesn't work with 4.2.0 as well.\n\n(hopefully that's relevant)."
] | 2026-01-23T12:06:48
| 2026-01-25T05:30:53
| null |
NONE
| null | null |
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
|
{
"blocked_by": 0,
"total_blocked_by": 0,
"blocking": 0,
"total_blocking": 0
}
|
### Describe the bug
The problem is that when calling:
```ds.set_format(type="torch", columns = ["input", "labels"], device="cuda")```
The device type of the individual datapoints is now: `cpu` as opposed to `cuda:0`.
With `v4.0.0` it still works. With `v4.5.0` it doesn't work anymore.
Related Issue:
https://github.com/huggingface/datasets/issues/1762
### Steps to reproduce the bug
Steps to reproduce the bug:
```python
ds.set_format(type="torch", columns=["input", "labels"], device="cuda")
print(ds["train"][0]["input"].device)  # outputs cpu
```
The above should be `cuda:0`.
### Expected behavior
`set_format` should be able to move data to GPU.
### Environment info
datasets==4.5.0
| null |
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/7958/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/7958/timeline
| null | null | null | null | false
|
https://api.github.com/repos/huggingface/datasets/issues/7957
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/7957/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/7957/comments
|
https://api.github.com/repos/huggingface/datasets/issues/7957/events
|
https://github.com/huggingface/datasets/pull/7957
| 3,839,573,271
|
PR_kwDODunzps6-hVW5
| 7,957
|
Fix all exhausted without replacement
|
{
"login": "ashmi8",
"id": 105732253,
"node_id": "U_kgDOBk1YnQ",
"avatar_url": "https://avatars.githubusercontent.com/u/105732253?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ashmi8",
"html_url": "https://github.com/ashmi8",
"followers_url": "https://api.github.com/users/ashmi8/followers",
"following_url": "https://api.github.com/users/ashmi8/following{/other_user}",
"gists_url": "https://api.github.com/users/ashmi8/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ashmi8/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ashmi8/subscriptions",
"organizations_url": "https://api.github.com/users/ashmi8/orgs",
"repos_url": "https://api.github.com/users/ashmi8/repos",
"events_url": "https://api.github.com/users/ashmi8/events{/privacy}",
"received_events_url": "https://api.github.com/users/ashmi8/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null |
[
"Hi maintainers 👋 \r\nThis PR fixes `all_exhausted_without_replacement` to ensure all samples are emitted exactly once, \r\nand adds a regression test reproducing the reported issue.\r\n\r\nHappy to adjust the implementation or split changes if you’d prefer.\r\nThanks!\r\n",
"We just merged https://github.com/huggingface/datasets/pull/7955 which also fix the issue, thanks for investigating !\r\n\r\nI'm closing this one if you don't mind"
] | 2026-01-21T18:47:32
| 2026-01-23T16:09:28
| 2026-01-23T16:09:28
|
NONE
| null | null | null | null |
Fix interleave_datasets "all_exhausted_without_replacement" stopping strategy
- Corrected logic to ensure each sample is picked exactly once when using
stopping_strategy="all_exhausted_without_replacement".
- Adjusted boolean stopping condition to properly track dataset exhaustion.
- Added test to verify the last element is included as expected.
|
{
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/7957/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/7957/timeline
| null | null | false
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/7957",
"html_url": "https://github.com/huggingface/datasets/pull/7957",
"diff_url": "https://github.com/huggingface/datasets/pull/7957.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/7957.patch",
"merged_at": null
}
| true
|
https://api.github.com/repos/huggingface/datasets/issues/7956
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/7956/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/7956/comments
|
https://api.github.com/repos/huggingface/datasets/issues/7956/events
|
https://github.com/huggingface/datasets/issues/7956
| 3,839,082,498
|
I_kwDODunzps7k08AC
| 7,956
|
Is the 10k files / folder limit a hard limit for a dataset repo?
|
{
"login": "pavanramkumar",
"id": 3664715,
"node_id": "MDQ6VXNlcjM2NjQ3MTU=",
"avatar_url": "https://avatars.githubusercontent.com/u/3664715?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/pavanramkumar",
"html_url": "https://github.com/pavanramkumar",
"followers_url": "https://api.github.com/users/pavanramkumar/followers",
"following_url": "https://api.github.com/users/pavanramkumar/following{/other_user}",
"gists_url": "https://api.github.com/users/pavanramkumar/gists{/gist_id}",
"starred_url": "https://api.github.com/users/pavanramkumar/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/pavanramkumar/subscriptions",
"organizations_url": "https://api.github.com/users/pavanramkumar/orgs",
"repos_url": "https://api.github.com/users/pavanramkumar/repos",
"events_url": "https://api.github.com/users/pavanramkumar/events{/privacy}",
"received_events_url": "https://api.github.com/users/pavanramkumar/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 1935892871,
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement",
"name": "enhancement",
"color": "a2eeef",
"default": true,
"description": "New feature or request"
}
] |
open
| false
| null |
[] | null |
[
"Yes, that's a hard limit. Can you split your files into different folders? Or we'll probably have a new repo type in the near to mid future that will relax this limit a bit. ",
"Thanks! Working around this with a different sharding parameter to have fewer overall fragments (and therefore fewer files in `*.lance/data/` and `*.lance/_transactions/`)."
] | 2026-01-21T16:37:38
| 2026-01-22T09:38:38
| null |
NONE
| null | null |
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
|
{
"blocked_by": 0,
"total_blocked_by": 0,
"blocking": 0,
"total_blocking": 0
}
|
### Feature request
Can the hard limit of 10k files per folder be extended with acceptable loss in performance?
### Motivation
I'm uploading a lance dataset to huggingface hub and have a folder inside lance internals (`data/*.lance/_transactions`) that has > 20k atomic transaction records and my commits are being rejected.
```
Bad request for commit endpoint:
Your push was rejected because it contains too many files per directory. Each directory in your git repo can only contain up to 10000 files. Offending reference: refs/heads/main Offending directories: /data/expression.lance/_transactions/
```
### Your contribution
Open to suggestions for how to make a PR
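The workaround suggested in the comments (splitting files across subfolders so no single directory exceeds the limit) can be sketched with a small stdlib helper (hypothetical helper, not part of `datasets` or lance):

```python
def shard_paths(filenames: list[str], max_per_dir: int = 10000) -> list[str]:
    # Assign each file to a numbered subdirectory so that no directory
    # ends up holding more than max_per_dir files (the Hub's
    # per-directory limit).
    return [
        f"shard_{i // max_per_dir:04d}/{name}"
        for i, name in enumerate(filenames)
    ]
```

For 25,000 fragment files this yields three subdirectories (`shard_0000/` through `shard_0002/`), each under the 10k cap.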
| null |
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/7956/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/7956/timeline
| null | null | null | null | false
|
https://api.github.com/repos/huggingface/datasets/issues/7955
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/7955/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/7955/comments
|
https://api.github.com/repos/huggingface/datasets/issues/7955/events
|
https://github.com/huggingface/datasets/pull/7955
| 3,837,083,395
|
PR_kwDODunzps6-Y-EL
| 7,955
|
Fix interleave_datasets with all_exhausted_without_replacement strategy
|
{
"login": "prathamk-tw",
"id": 205576963,
"node_id": "U_kgDODEDbAw",
"avatar_url": "https://avatars.githubusercontent.com/u/205576963?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/prathamk-tw",
"html_url": "https://github.com/prathamk-tw",
"followers_url": "https://api.github.com/users/prathamk-tw/followers",
"following_url": "https://api.github.com/users/prathamk-tw/following{/other_user}",
"gists_url": "https://api.github.com/users/prathamk-tw/gists{/gist_id}",
"starred_url": "https://api.github.com/users/prathamk-tw/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/prathamk-tw/subscriptions",
"organizations_url": "https://api.github.com/users/prathamk-tw/orgs",
"repos_url": "https://api.github.com/users/prathamk-tw/repos",
"events_url": "https://api.github.com/users/prathamk-tw/events{/privacy}",
"received_events_url": "https://api.github.com/users/prathamk-tw/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null |
[
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_7955). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update."
] | 2026-01-21T08:10:45
| 2026-01-24T02:55:56
| 2026-01-23T16:08:39
|
CONTRIBUTOR
| null | null | null | null |
When using interleave_datasets with stopping_strategy="all_exhausted_without_replacement" and probabilities=None, the function was incorrectly falling into the undersampling branch, causing it to stop at min(lengths) instead of continuing until all datasets were exhausted.
This fix adds a specific branch to handle the all_exhausted_without_replacement case when probabilities=None. The new logic cycles through all datasets round by round, adding elements from each dataset until all are exhausted, ensuring each element appears exactly once.
Example fix:
- Input: d1=[0,1,2], d2=[10,11,12,13], d3=[20,21,22]
- Before: [0, 10, 20, 1, 11, 21, 2, 12, 22]
- After: [0, 10, 20, 1, 11, 21, 2, 12, 22, 13]
🤖 Generated with [Claude Code](https://claude.com/claude-code)
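The round-by-round logic described above can be sketched in plain Python (hypothetical helper name, not the actual `datasets` implementation):

```python
def interleave_all_exhausted_without_replacement(datasets):
    # Cycle through the datasets round by round, taking one element from
    # each still-alive dataset per round and dropping a dataset once it
    # is exhausted, so every element is emitted exactly once.
    iterators = [iter(d) for d in datasets]
    out = []
    while iterators:
        alive = []
        for it in iterators:
            try:
                out.append(next(it))
                alive.append(it)
            except StopIteration:
                pass  # this dataset is exhausted; drop it
        iterators = alive
    return out
```

On the example above, `d2`'s extra element `13` is emitted in the fourth round instead of being cut off at `min(lengths)`.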
|
{
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/7955/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/7955/timeline
| null | null | false
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/7955",
"html_url": "https://github.com/huggingface/datasets/pull/7955",
"diff_url": "https://github.com/huggingface/datasets/pull/7955.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/7955.patch",
"merged_at": "2026-01-23T16:08:39"
}
| true
|
https://api.github.com/repos/huggingface/datasets/issues/7954
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/7954/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/7954/comments
|
https://api.github.com/repos/huggingface/datasets/issues/7954/events
|
https://github.com/huggingface/datasets/issues/7954
| 3,837,020,089
|
I_kwDODunzps7ktEe5
| 7,954
|
all_exhausted_without_replacement working same as first_exhausted
|
{
"login": "prathamk-tw",
"id": 205576963,
"node_id": "U_kgDODEDbAw",
"avatar_url": "https://avatars.githubusercontent.com/u/205576963?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/prathamk-tw",
"html_url": "https://github.com/prathamk-tw",
"followers_url": "https://api.github.com/users/prathamk-tw/followers",
"following_url": "https://api.github.com/users/prathamk-tw/following{/other_user}",
"gists_url": "https://api.github.com/users/prathamk-tw/gists{/gist_id}",
"starred_url": "https://api.github.com/users/prathamk-tw/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/prathamk-tw/subscriptions",
"organizations_url": "https://api.github.com/users/prathamk-tw/orgs",
"repos_url": "https://api.github.com/users/prathamk-tw/repos",
"events_url": "https://api.github.com/users/prathamk-tw/events{/privacy}",
"received_events_url": "https://api.github.com/users/prathamk-tw/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
open
| false
| null |
[] | null |
[
"https://github.com/huggingface/datasets/pull/7955"
] | 2026-01-21T07:50:31
| 2026-01-21T08:13:00
| null |
CONTRIBUTOR
| null | null |
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
|
{
"blocked_by": 0,
"total_blocked_by": 0,
"blocking": 0,
"total_blocking": 0
}
|
### Describe the bug
>>> from datasets import Dataset, interleave_datasets
>>> d1 = Dataset.from_dict({"a": [0, 1, 2]})
>>> d2 = Dataset.from_dict({"a": [10, 11, 12, 13]})
>>> d3 = Dataset.from_dict({"a": [20, 21, 22]})
>>> dataset = interleave_datasets([d1, d2, d3], stopping_strategy="all_exhausted_without_replacement")
>>> dataset["a"][:100]
[0, 10, 20, 1, 11, 21, 2, 12, 22]
expected output: [0, 10, 20, 1, 11, 21, 2, 12, 22, 13]
datasets version 4.5.0
### Steps to reproduce the bug
>>> from datasets import Dataset, interleave_datasets
>>> d1 = Dataset.from_dict({"a": [0, 1, 2]})
>>> d2 = Dataset.from_dict({"a": [10, 11, 12, 13]})
>>> d3 = Dataset.from_dict({"a": [20, 21, 22]})
>>> dataset = interleave_datasets([d1, d2, d3], stopping_strategy="all_exhausted_without_replacement")
>>> dataset["a"][:100]
### Expected behavior
[0, 10, 20, 1, 11, 21, 2, 12, 22, 13]
### Environment info
datasets version 4.5.0
| null |
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/7954/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/7954/timeline
| null | null | null | null | false
|
https://api.github.com/repos/huggingface/datasets/issues/7953
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/7953/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/7953/comments
|
https://api.github.com/repos/huggingface/datasets/issues/7953/events
|
https://github.com/huggingface/datasets/pull/7953
| 3,831,099,841
|
PR_kwDODunzps6-FI-G
| 7,953
|
#5354: replace list with Sequence in from_parquet type hints
|
{
"login": "ashmi8",
"id": 105732253,
"node_id": "U_kgDOBk1YnQ",
"avatar_url": "https://avatars.githubusercontent.com/u/105732253?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ashmi8",
"html_url": "https://github.com/ashmi8",
"followers_url": "https://api.github.com/users/ashmi8/followers",
"following_url": "https://api.github.com/users/ashmi8/following{/other_user}",
"gists_url": "https://api.github.com/users/ashmi8/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ashmi8/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ashmi8/subscriptions",
"organizations_url": "https://api.github.com/users/ashmi8/orgs",
"repos_url": "https://api.github.com/users/ashmi8/repos",
"events_url": "https://api.github.com/users/ashmi8/events{/privacy}",
"received_events_url": "https://api.github.com/users/ashmi8/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
open
| false
| null |
[] | null |
[] | 2026-01-19T20:24:10
| 2026-01-20T10:20:33
| null |
NONE
| null | null | null | null |
This PR replaces `list` type hints with `Sequence` in `from_parquet` to improve type checking.
Note: Local pytest errors on Python 3.13 due to removal of `distutils` are unrelated to this change.
| null |
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/7953/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/7953/timeline
| null | null | false
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/7953",
"html_url": "https://github.com/huggingface/datasets/pull/7953",
"diff_url": "https://github.com/huggingface/datasets/pull/7953.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/7953.patch",
"merged_at": null
}
| true
|
https://api.github.com/repos/huggingface/datasets/issues/7952
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/7952/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/7952/comments
|
https://api.github.com/repos/huggingface/datasets/issues/7952/events
|
https://github.com/huggingface/datasets/pull/7952
| 3,831,024,005
|
PR_kwDODunzps6-E4dA
| 7,952
|
Fix #5354: replace list with Sequence in from_parquet type hints
|
{
"login": "ashmi8",
"id": 105732253,
"node_id": "U_kgDOBk1YnQ",
"avatar_url": "https://avatars.githubusercontent.com/u/105732253?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ashmi8",
"html_url": "https://github.com/ashmi8",
"followers_url": "https://api.github.com/users/ashmi8/followers",
"following_url": "https://api.github.com/users/ashmi8/following{/other_user}",
"gists_url": "https://api.github.com/users/ashmi8/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ashmi8/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ashmi8/subscriptions",
"organizations_url": "https://api.github.com/users/ashmi8/orgs",
"repos_url": "https://api.github.com/users/ashmi8/repos",
"events_url": "https://api.github.com/users/ashmi8/events{/privacy}",
"received_events_url": "https://api.github.com/users/ashmi8/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null |
[] | 2026-01-19T19:57:55
| 2026-01-19T20:15:35
| 2026-01-19T20:15:35
|
NONE
| null | null | null | null |
This PR replaces `list` type hints with `Sequence` in `from_parquet` to improve type checking.
Note: Local pytest errors on Python 3.13 due to removal of `distutils` are unrelated to this change.
|
{
"login": "ashmi8",
"id": 105732253,
"node_id": "U_kgDOBk1YnQ",
"avatar_url": "https://avatars.githubusercontent.com/u/105732253?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ashmi8",
"html_url": "https://github.com/ashmi8",
"followers_url": "https://api.github.com/users/ashmi8/followers",
"following_url": "https://api.github.com/users/ashmi8/following{/other_user}",
"gists_url": "https://api.github.com/users/ashmi8/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ashmi8/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ashmi8/subscriptions",
"organizations_url": "https://api.github.com/users/ashmi8/orgs",
"repos_url": "https://api.github.com/users/ashmi8/repos",
"events_url": "https://api.github.com/users/ashmi8/events{/privacy}",
"received_events_url": "https://api.github.com/users/ashmi8/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/7952/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/7952/timeline
| null | null | false
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/7952",
"html_url": "https://github.com/huggingface/datasets/pull/7952",
"diff_url": "https://github.com/huggingface/datasets/pull/7952.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/7952.patch",
"merged_at": null
}
| true
|
https://api.github.com/repos/huggingface/datasets/issues/7951
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/7951/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/7951/comments
|
https://api.github.com/repos/huggingface/datasets/issues/7951/events
|
https://github.com/huggingface/datasets/pull/7951
| 3,827,686,741
|
PR_kwDODunzps6951bw
| 7,951
|
feat: Add GenBank file format support for biological sequence data
|
{
"login": "behroozazarkhalili",
"id": 80390531,
"node_id": "MDQ6VXNlcjgwMzkwNTMx",
"avatar_url": "https://avatars.githubusercontent.com/u/80390531?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/behroozazarkhalili",
"html_url": "https://github.com/behroozazarkhalili",
"followers_url": "https://api.github.com/users/behroozazarkhalili/followers",
"following_url": "https://api.github.com/users/behroozazarkhalili/following{/other_user}",
"gists_url": "https://api.github.com/users/behroozazarkhalili/gists{/gist_id}",
"starred_url": "https://api.github.com/users/behroozazarkhalili/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/behroozazarkhalili/subscriptions",
"organizations_url": "https://api.github.com/users/behroozazarkhalili/orgs",
"repos_url": "https://api.github.com/users/behroozazarkhalili/repos",
"events_url": "https://api.github.com/users/behroozazarkhalili/events{/privacy}",
"received_events_url": "https://api.github.com/users/behroozazarkhalili/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
open
| false
| null |
[] | null |
[] | 2026-01-19T01:59:44
| 2026-02-04T14:39:10
| null |
NONE
| null | null | null | null |
## Summary
Add native support for loading GenBank (.gb, .gbk, .genbank) files, a standard format for biological sequence data with annotations maintained by NCBI.
## Changes
- Add `genbank` packaged module with pure Python state machine parser
- Register GenBank extensions in `_PACKAGED_DATASETS_MODULES` and `_EXTENSION_TO_MODULE`
- Add comprehensive test suite (28 tests)
## Features
- **Metadata parsing**: LOCUS, DEFINITION, ACCESSION, VERSION, KEYWORDS, ORGANISM, taxonomy
- **Feature parsing**: Structured JSON output with location parsing (complement, join)
- **Sequence parsing**: ORIGIN section with automatic length calculation
- **Compression support**: gzip, bz2, xz via magic bytes detection
- **Memory efficiency**: Dual-threshold batching (batch_size + max_batch_bytes)
- **Large sequences**: Uses `large_string` Arrow type for sequences/features
## Usage
```python
from datasets import load_dataset
# Load GenBank files
ds = load_dataset("genbank", data_files="sequences.gb")
# With options
ds = load_dataset(
    "genbank",
    data_files="*.gbk",
    columns=["sequence", "organism", "features"],
    parse_features=True,
)
```
## Test plan
- [x] All 28 unit tests pass
- [x] Tests cover: basic loading, multi-record, compression, feature parsing, column filtering, batching, schema types
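To illustrate the kind of state-machine parsing described above, here is a toy sketch that handles only LOCUS, DEFINITION, and ORIGIN for a minimal record (field names and behavior are illustrative assumptions, not the PR's actual code):

```python
# Toy GenBank state machine: track whether we are inside the ORIGIN block,
# accumulate sequence chunks, and close the record on the "//" terminator.
def parse_genbank(text):
    records, record, in_origin, seq = [], {}, False, []
    for line in text.splitlines():
        if line.startswith("LOCUS"):
            record = {"locus": line.split()[1]}
            in_origin, seq = False, []
        elif line.startswith("DEFINITION"):
            record["definition"] = line[len("DEFINITION"):].strip()
        elif line.startswith("ORIGIN"):
            in_origin = True
        elif line.startswith("//"):  # end-of-record terminator
            record["sequence"] = "".join(seq)
            record["length"] = len(record["sequence"])
            records.append(record)
            in_origin = False
        elif in_origin:
            # ORIGIN lines look like: "        1 atgcatgc atgcatgc"
            seq.extend(line.split()[1:])
    return records

sample = """LOCUS       TEST001 12 bp DNA
DEFINITION  A toy record.
ORIGIN
        1 atgcatgcatgc
//
"""
recs = parse_genbank(sample)
print(recs[0]["locus"], recs[0]["length"])  # TEST001 12
```

The real parser additionally handles ACCESSION/VERSION/ORGANISM metadata, structured feature locations, and compressed inputs, but the control flow follows this shape.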
| null |
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/7951/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/7951/timeline
| null | null | false
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/7951",
"html_url": "https://github.com/huggingface/datasets/pull/7951",
"diff_url": "https://github.com/huggingface/datasets/pull/7951.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/7951.patch",
"merged_at": null
}
| true
|
https://api.github.com/repos/huggingface/datasets/issues/7950
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/7950/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/7950/comments
|
https://api.github.com/repos/huggingface/datasets/issues/7950/events
|
https://github.com/huggingface/datasets/pull/7950
| 3,825,689,242
|
PR_kwDODunzps69zsW1
| 7,950
|
Add examples for Lance datasets
|
{
"login": "prrao87",
"id": 35005448,
"node_id": "MDQ6VXNlcjM1MDA1NDQ4",
"avatar_url": "https://avatars.githubusercontent.com/u/35005448?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/prrao87",
"html_url": "https://github.com/prrao87",
"followers_url": "https://api.github.com/users/prrao87/followers",
"following_url": "https://api.github.com/users/prrao87/following{/other_user}",
"gists_url": "https://api.github.com/users/prrao87/gists{/gist_id}",
"starred_url": "https://api.github.com/users/prrao87/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/prrao87/subscriptions",
"organizations_url": "https://api.github.com/users/prrao87/orgs",
"repos_url": "https://api.github.com/users/prrao87/repos",
"events_url": "https://api.github.com/users/prrao87/events{/privacy}",
"received_events_url": "https://api.github.com/users/prrao87/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null |
[
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_7950). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.",
"Hi @prrao87 ! All the main examples with pylance should be in their dedicated documentation page IMO. Let's document Lance as a supported format in the `datasets` docs, and move the `pylance` docs and examples there instead if you don't mind: https://github.com/huggingface/hub-docs/pull/2164\r\n\r\nI think for the `datasets` docs we can simply make sure Lance is mentioned in the lists of supported formats, and have a section in the Video documentation pages alongside WebDataset for example:\r\n\r\n* [video_load.md](https://huggingface.co/docs/datasets/video_load) for loading \r\n* [video_dataset.md](https://huggingface.co/docs/datasets/video_dataset) for dataset creation\r\n* [loading_methods.md](https://huggingface.co/docs/datasets/package_reference/loading_methods) the reference which lists all loading methods and configs",
"Hi @lhoestq, what do you mean by this?\r\n> Let's document Lance as a supported format in the datasets docs\r\n\r\nWhere exactly would Lance be mentioned? I wasn't able to find a section titled \"supported formats\" in the docs. The rest of the parts of your response made sense (can make those updates). Thanks!",
"Hi @lhoestq, I've updated the docs in various places, to show how to use Lance for image creation/loading and also video creation/loading. I've also removed the `use_with_lance.mdx` page's contents over to the Hub docs by enhancing that page, as you mentioned.\r\n\r\nI think this should cover all your feedback points! Please let me know if anything's missing. I'll be back in a future PR with audio examples, I think the HF community would benefit from using Lance format for audio storage/retrieval, too.",
"Thanks! We'll get those fixed and send more PRs ;)"
] | 2026-01-17T19:33:31
| 2026-01-23T16:40:20
| 2026-01-23T16:16:43
|
CONTRIBUTOR
| null | null | null | null |
## Summary
Updated the Lance integration docs to match the official dataset cards and expand coverage of multimodal workflows in `use_with_lance.mdx`.
## Details
- Added a Lance format-focused guidance page for multimodal examples:
- Stream from the Hub via `datasets` API
- Use Lance's dataset API to pass in an `hf://...` path specifier
- Scan, vector search, schema evolution examples
- "Export to Lance" example for materializing filtered subsets locally
- Aligned all dataset names/paths to existing datasets so users can immediately test it out: `lance-format/laion-1m` (image) and `lance-format/openvid-lance` (video)
Depending on the questions we get from users, we may add more descriptive examples as we go along, but this should be a good start!
|
{
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/7950/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/7950/timeline
| null | null | false
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/7950",
"html_url": "https://github.com/huggingface/datasets/pull/7950",
"diff_url": "https://github.com/huggingface/datasets/pull/7950.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/7950.patch",
"merged_at": "2026-01-23T16:16:43"
}
| true
|
https://api.github.com/repos/huggingface/datasets/issues/7949
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/7949/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/7949/comments
|
https://api.github.com/repos/huggingface/datasets/issues/7949/events
|
https://github.com/huggingface/datasets/pull/7949
| 3,824,515,306
|
PR_kwDODunzps69v3lh
| 7,949
|
docs: clarify documentation build instructions
|
{
"login": "Edge-Explorer",
"id": 192764477,
"node_id": "U_kgDOC31aPQ",
"avatar_url": "https://avatars.githubusercontent.com/u/192764477?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Edge-Explorer",
"html_url": "https://github.com/Edge-Explorer",
"followers_url": "https://api.github.com/users/Edge-Explorer/followers",
"following_url": "https://api.github.com/users/Edge-Explorer/following{/other_user}",
"gists_url": "https://api.github.com/users/Edge-Explorer/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Edge-Explorer/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Edge-Explorer/subscriptions",
"organizations_url": "https://api.github.com/users/Edge-Explorer/orgs",
"repos_url": "https://api.github.com/users/Edge-Explorer/repos",
"events_url": "https://api.github.com/users/Edge-Explorer/events{/privacy}",
"received_events_url": "https://api.github.com/users/Edge-Explorer/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
open
| false
| null |
[] | null |
[
"This PR clarifies the documentation build instructions by improving wording and adding clarity for local builds."
] | 2026-01-17T06:24:24
| 2026-01-17T06:25:14
| null |
CONTRIBUTOR
| null | null | null | null |
docs: clarify documentation build instructions
| null |
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/7949/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/7949/timeline
| null | null | false
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/7949",
"html_url": "https://github.com/huggingface/datasets/pull/7949",
"diff_url": "https://github.com/huggingface/datasets/pull/7949.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/7949.patch",
"merged_at": null
}
| true
|
https://api.github.com/repos/huggingface/datasets/issues/7948
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/7948/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/7948/comments
|
https://api.github.com/repos/huggingface/datasets/issues/7948/events
|
https://github.com/huggingface/datasets/pull/7948
| 3,824,437,597
|
PR_kwDODunzps69vnHh
| 7,948
|
json: add optional return_file_name parameter
|
{
"login": "Sachin-0001",
"id": 168210869,
"node_id": "U_kgDOCgaxtQ",
"avatar_url": "https://avatars.githubusercontent.com/u/168210869?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Sachin-0001",
"html_url": "https://github.com/Sachin-0001",
"followers_url": "https://api.github.com/users/Sachin-0001/followers",
"following_url": "https://api.github.com/users/Sachin-0001/following{/other_user}",
"gists_url": "https://api.github.com/users/Sachin-0001/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Sachin-0001/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Sachin-0001/subscriptions",
"organizations_url": "https://api.github.com/users/Sachin-0001/orgs",
"repos_url": "https://api.github.com/users/Sachin-0001/repos",
"events_url": "https://api.github.com/users/Sachin-0001/events{/privacy}",
"received_events_url": "https://api.github.com/users/Sachin-0001/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
open
| false
| null |
[] | null |
[] | 2026-01-17T05:39:43
| 2026-01-17T05:39:43
| null |
NONE
| null | null | null | null |
This PR adds an optional `return_file_name` parameter to the JSON dataset loader.
When enabled, a new `file_name` column is added containing the source file name
for each row. Default behavior is unchanged.
Changes:
- Add `return_file_name` to JsonConfig
- Append file name during JSON table generation
- Add tests covering default and enabled behavior, and ensure other functions are not affected
Motivation:
This helps resume training from checkpoints by identifying already-consumed data shards.
Fixes #5806
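The intended behavior can be sketched with a plain-Python generator (the `return_file_name` parameter name comes from this PR and is not in a released `datasets` version; this is an illustration of the row-tagging logic, not the loader's actual code):

```python
import io
import json

def generate_rows(files, return_file_name=False):
    """files: mapping of file name -> JSON Lines content."""
    for name, content in files.items():
        for line in io.StringIO(content):
            row = json.loads(line)
            if return_file_name:
                # Tag each row with the shard it came from.
                row["file_name"] = name
            yield row

files = {
    "shard-0.jsonl": '{"a": 1}\n{"a": 2}\n',
    "shard-1.jsonl": '{"a": 3}\n',
}
rows = list(generate_rows(files, return_file_name=True))
print(rows[0])  # {'a': 1, 'file_name': 'shard-0.jsonl'}
```

With the flag off, rows are yielded unchanged, which matches the "default behavior is unchanged" guarantee above.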
| null |
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/7948/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/7948/timeline
| null | null | false
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/7948",
"html_url": "https://github.com/huggingface/datasets/pull/7948",
"diff_url": "https://github.com/huggingface/datasets/pull/7948.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/7948.patch",
"merged_at": null
}
| true
|
https://api.github.com/repos/huggingface/datasets/issues/7947
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/7947/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/7947/comments
|
https://api.github.com/repos/huggingface/datasets/issues/7947/events
|
https://github.com/huggingface/datasets/issues/7947
| 3,823,309,787
|
I_kwDODunzps7j4xPb
| 7,947
|
MMLU get_dataset_config_names provides different lists of subsets in online and offline modes
|
{
"login": "rikrd",
"id": 15324,
"node_id": "MDQ6VXNlcjE1MzI0",
"avatar_url": "https://avatars.githubusercontent.com/u/15324?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/rikrd",
"html_url": "https://github.com/rikrd",
"followers_url": "https://api.github.com/users/rikrd/followers",
"following_url": "https://api.github.com/users/rikrd/following{/other_user}",
"gists_url": "https://api.github.com/users/rikrd/gists{/gist_id}",
"starred_url": "https://api.github.com/users/rikrd/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/rikrd/subscriptions",
"organizations_url": "https://api.github.com/users/rikrd/orgs",
"repos_url": "https://api.github.com/users/rikrd/repos",
"events_url": "https://api.github.com/users/rikrd/events{/privacy}",
"received_events_url": "https://api.github.com/users/rikrd/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
open
| false
| null |
[] | null |
[] | 2026-01-16T19:20:08
| 2026-01-16T19:20:08
| null |
NONE
| null | null |
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
|
{
"blocked_by": 0,
"total_blocked_by": 0,
"blocking": 0,
"total_blocking": 0
}
|
### Describe the bug
When getting the config names of `cais/mmlu` in online mode, it lists the individual subjects plus `all`, but in offline mode it returns only `default`, even with a cached version of the dataset.
### Steps to reproduce the bug
1. First download dataset in online mode so that it is cached:
```
$ HF_DATASETS_OFFLINE=0 HF_HOME="/tmp/hftemp" python -c "import datasets;print(datasets.get_dataset_config_names('cais/mmlu'));datasets.load_dataset('cais/mmlu', 'all')"
['abstract_algebra', 'all', 'anatomy', 'astronomy', 'auxiliary_train', 'business_ethics', 'clinical_knowledge', 'college_biology', 'college_chemistry', 'college_computer_science', 'college_mathematics', 'college_medicine', 'college_physics', 'computer_security', 'conceptual_physics', 'econometrics', 'electrical_engineering', 'elementary_mathematics', 'formal_logic', 'global_facts', 'high_school_biology', 'high_school_chemistry', 'high_school_computer_science', 'high_school_european_history', 'high_school_geography', 'high_school_government_and_politics', 'high_school_macroeconomics', 'high_school_mathematics', 'high_school_microeconomics', 'high_school_physics', 'high_school_psychology', 'high_school_statistics', 'high_school_us_history', 'high_school_world_history', 'human_aging', 'human_sexuality', 'international_law', 'jurisprudence', 'logical_fallacies', 'machine_learning', 'management', 'marketing', 'medical_genetics', 'miscellaneous', 'moral_disputes', 'moral_scenarios', 'nutrition', 'philosophy', 'prehistory', 'professional_accounting', 'professional_law', 'professional_medicine', 'professional_psychology', 'public_relations', 'security_studies', 'sociology', 'us_foreign_policy', 'virology', 'world_religions']
all/test-00000-of-00001.parquet: 100%|███████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 3.50M/3.50M [00:00<00:00, 3.78MB/s]
all/validation-00000-of-00001.parquet: 100%|███████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 408k/408k [00:00<00:00, 1.90MB/s]
all/dev-00000-of-00001.parquet: 100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 76.5k/76.5k [00:00<00:00, 298kB/s]
all/auxiliary_train-00000-of-00001.parqu(…): 100%|███████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 47.5M/47.5M [00:00<00:00, 47.6MB/s]
Generating test split: 100%|██████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 14042/14042 [00:01<00:00, 10914.95 examples/s]
Generating validation split: 100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 1531/1531 [00:00<00:00, 271131.54 examples/s]
Generating dev split: 100%|███████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 285/285 [00:00<00:00, 86308.78 examples/s]
Generating auxiliary_train split: 100%|██████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 99842/99842 [00:00<00:00, 202836.59 examples/s]
```
2. Verify the config names of in offline mode (the dataset has been cached).
```
$ HF_DATASETS_OFFLINE=1 HF_HOME="/tmp/hftemp" python -c "import datasets;print(datasets.get_dataset_config_names('cais/mmlu'))"
Using the latest cached version of the dataset since cais/mmlu couldn't be found on the Hugging Face Hub (offline mode is enabled).
['default']
```
### Expected behavior
It should return the same list of configs in both cases.
### Environment info
$ python -c "import datasets;print(datasets.__version__)"
4.4.1
| null |
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/7947/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/7947/timeline
| null | null | null | null | false
|
https://api.github.com/repos/huggingface/datasets/issues/7946
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/7946/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/7946/comments
|
https://api.github.com/repos/huggingface/datasets/issues/7946/events
|
https://github.com/huggingface/datasets/issues/7946
| 3,817,678,454
|
I_kwDODunzps7jjSZ2
| 7,946
|
Question: Is there a faster way to push_to_hub for large image datasets?
|
{
"login": "adithya-s-k",
"id": 27956426,
"node_id": "MDQ6VXNlcjI3OTU2NDI2",
"avatar_url": "https://avatars.githubusercontent.com/u/27956426?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/adithya-s-k",
"html_url": "https://github.com/adithya-s-k",
"followers_url": "https://api.github.com/users/adithya-s-k/followers",
"following_url": "https://api.github.com/users/adithya-s-k/following{/other_user}",
"gists_url": "https://api.github.com/users/adithya-s-k/gists{/gist_id}",
"starred_url": "https://api.github.com/users/adithya-s-k/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/adithya-s-k/subscriptions",
"organizations_url": "https://api.github.com/users/adithya-s-k/orgs",
"repos_url": "https://api.github.com/users/adithya-s-k/repos",
"events_url": "https://api.github.com/users/adithya-s-k/events{/privacy}",
"received_events_url": "https://api.github.com/users/adithya-s-k/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
open
| false
| null |
[] | null |
[
"This is a really interesting approach, especially combining parallel parquet\nconversion with upload_large_folder and hf_xet.\n\nOne question / observation:\nThe shard calculation currently uses max_shard_size_mb as a proxy for\nsamples_per_shard. Since sample size can vary a lot across datasets,\nwould it make sense to estimate shard size based on byte size instead?\n\nAlso curious whether ThreadPoolExecutor was chosen intentionally\n(over multiprocessing) due to to_parquet being more IO-bound.\n\nIf this direction makes sense, I’d be happy to test on a large dataset\nand share numbers.\n",
"The approach makes a lot of sense :)\n\n* parallel parquet conversions with threads for speed\n* writing parquet files to disk prior to upload instead of in RAM to save some RAM\n* uploading in parallel with upload_large_folder for speed\n\n\nNote that there is a helper for estimating the size of the dataset: `ds._estimate_nbytes()`\n\nThis is the kind of improvements that would be helpful to have in `push_to_hub` ! Maybe one point to take into account is some datasets can be larger than disk, so we would have to go one level lower than upload_large_folder:\n\n1. convert to Parquet on disk\n2. pre-upload the file and delete it locally\n3. then at the end commit the uploaded files\n\nContributions are welcome if someone would like to improve `push_to_hub` in `datasets` btw. \n\nA summary of what we could achieve with some inspiration from your approach:\n\n* parallel parquet conversions with threads for speed\n* writing parquet files to disk prior to upload instead of in RAM to save some RAM\n* upload local parquet files once ready with preupload_lfs_files() and delete them to not fill the disk\n* commit at the end all the uploaded files",
"Thanks @lhoestq! Valid point about datasets larger than disk.\n\nWould you be open to adding `upload_large_folder` as an **opt-in** parameter? The key benefit is **resumability** - if an upload fails at 80%, it picks up where it left off. For multi-hour uploads of large image datasets, this is really valuable.\n\n```python\ndataset.push_to_hub(\"user/repo\", use_large_folder=True)\n```\n\nUsers opting in would accept the disk space trade-off for resumability + atomic commits. Default behavior stays unchanged, so larger-than-disk datasets still work fine with the current approach.\n\nThanks for the `_estimate_nbytes()` pointer - will use that for proper shard sizing. Happy to open a PR if this direction works!",
"Great questions @k281484-ctrl!\n\n**Shard sizing**: You're right - the sample-based approach in my script was a quick hack. @lhoestq pointed out `ds._estimate_nbytes()` which is exactly what we need for proper byte-based shard calculation. Will use that in the implementation.\n\n**ThreadPoolExecutor**: Yes, intentional choice. `to_parquet()` is I/O-bound (writing to disk), so threads work well and avoid the overhead of spawning processes + pickling data across process boundaries.\n\nWould love to see your benchmark numbers if you get a chance to test!"
] | 2026-01-15T13:54:37
| 2026-01-17T13:27:20
| null |
NONE
| null | null |
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
|
{
"blocked_by": 0,
"total_blocked_by": 0,
"blocking": 0,
"total_blocking": 0
}
|
# Question: Is there a faster way to `push_to_hub` for large image datasets? Or could this approach be integrated?
Hi! I frequently work with large image datasets (100k-300k+ samples) and found that `dataset.push_to_hub()` can be quite slow.
cc @lhoestq - would love your thoughts on this!
I experimented with an alternative approach using `upload_large_folder` and parallel parquet conversion that gave me significant speedups:
**My benchmark (15,000 image samples, ~16GB total):**
- `dataset.push_to_hub()`: **~15 minutes**
- Alternative approach below: **~2-3 minutes**
I wanted to ask:
1. **Is there an existing way to achieve this speed** that I'm missing in the current API?
2. **If not, would something like this be worth integrating** into the library?
## The Approach
The key differences from standard `push_to_hub()`:
1. **Parallel parquet shard conversion** using ThreadPoolExecutor
2. **Using `upload_large_folder`** instead of regular upload (multi-threaded, resumable)
3. **Enabling `HF_XET_HIGH_PERFORMANCE=1`** for chunk-level deduplication
## Full Working Code
Here's the script I've been using:
```python
#!/usr/bin/env python3
"""
Fast HuggingFace Dataset Upload Script
Converts any dataset to parquet shards and uploads using upload_large_folder
for maximum speed with parallel workers and hf_xet chunk deduplication.
Usage:
python push_to_hub_fast.py --input ./my_dataset_arrow --repo "username/my-dataset"
python push_to_hub_fast.py --input ./data --repo "user/repo" --workers 32 --shard-size 400
"""
import argparse
import os
import shutil
import tempfile
import multiprocessing
from pathlib import Path
from concurrent.futures import ThreadPoolExecutor, as_completed
from typing import Optional
import yaml
def get_optimal_workers() -> int:
"""Detect optimal number of workers based on CPU cores."""
cpu_count = multiprocessing.cpu_count()
return max(1, cpu_count - 1)
def generate_dataset_card(dataset, num_samples: int, repo_name: str) -> str:
"""Generate a HuggingFace dataset card README with proper metadata."""
from datasets import Value, Sequence, Image
features = dataset.features
def feature_to_dict(feat):
feat_type = type(feat).__name__
if isinstance(feat, Value):
return {"dtype": feat.dtype}
elif isinstance(feat, Image):
return {"dtype": "image"}
elif isinstance(feat, Sequence):
if hasattr(feat.feature, "dtype"):
return {"list": feat.feature.dtype}
elif hasattr(feat.feature, "__iter__"):
inner = []
for k, v in feat.feature.items():
inner.append({"name": k, **feature_to_dict(v)})
return {"list": inner}
else:
return {"list": feature_to_dict(feat.feature)}
elif hasattr(feat, "__iter__") and not isinstance(feat, str):
inner = []
for k, v in feat.items():
inner.append({"name": k, **feature_to_dict(v)})
return {"list": inner}
else:
return {"dtype": str(feat_type).lower()}
features_list = []
for name, feat in features.items():
feat_dict = {"name": name}
feat_dict.update(feature_to_dict(feat))
features_list.append(feat_dict)
dataset_info = {
"dataset_info": {
"features": features_list,
"splits": [{"name": "train", "num_examples": num_samples}],
},
"configs": [
{
"config_name": "default",
"data_files": [{"split": "train", "path": "data/train-*"}],
}
],
}
yaml_content = yaml.dump(
dataset_info, default_flow_style=False, sort_keys=False, allow_unicode=True
)
# Build README string with YAML frontmatter and usage instructions
readme = "---\n" + yaml_content + "---\n\n"
readme += "# " + repo_name.split("/")[-1] + "\n\n"
readme += "## Usage\n\n```python\nfrom datasets import load_dataset\n"
readme += "dataset = load_dataset(\"" + repo_name + "\")\n```\n\n"
readme += "- **Samples**: " + f"{num_samples:,}" + "\n"
readme += "- **Features**: " + ", ".join(f"`{name}`" for name in features.keys())
return readme
def push_to_hub_fast(
input_path: str,
repo_id: str,
num_workers: Optional[int] = None,
max_shard_size_mb: int = 450,
split: str = "train",
private: bool = False,
token: Optional[str] = None,
):
"""
Fast upload dataset to HuggingFace Hub.
Args:
input_path: Path to dataset (local arrow/disk) or HuggingFace dataset name
repo_id: Target repository (e.g., "username/dataset-name")
num_workers: Number of parallel workers (auto-detected if None)
max_shard_size_mb: Target shard size in MB (default 450 for <500MB viewer limit)
split: Dataset split to upload (default "train")
private: Whether to make repo private
token: HuggingFace token (uses cached login if None)
"""
os.environ["HF_XET_HIGH_PERFORMANCE"] = "1"
from datasets import load_from_disk, load_dataset, DatasetDict
from huggingface_hub import HfApi, login
if num_workers is None:
num_workers = get_optimal_workers()
    print("Push to Hub Fast")
    print("=" * 50)
    print(f"Input: {input_path}")
    print(f"Target: {repo_id}")
    print(f"Workers: {num_workers}")
    print("=" * 50)
# Login
if token:
login(token=token)
else:
token = os.environ.get("HF_TOKEN") or os.environ.get("HUGGINGFACE_TOKEN")
if token:
login(token=token)
print("✓ Logged in to HuggingFace Hub")
# Load dataset
print(f"\nLoading dataset from {input_path}...")
input_path_obj = Path(input_path)
if input_path_obj.exists():
try:
loaded = load_from_disk(input_path)
except Exception:
loaded = load_dataset("imagefolder", data_dir=input_path)
else:
loaded = load_dataset(input_path)
if isinstance(loaded, DatasetDict):
if split in loaded:
dataset = loaded[split]
else:
split = list(loaded.keys())[0]
dataset = loaded[split]
else:
dataset = loaded
num_samples = len(dataset)
print(f"✓ Dataset loaded: {num_samples:,} samples")
# Create temp directory
temp_dir = tempfile.mkdtemp(prefix="hf_upload_")
upload_dir = Path(temp_dir)
data_dir = upload_dir / "data"
data_dir.mkdir(parents=True, exist_ok=True)
try:
# Generate README
print("\nGenerating dataset card...")
readme_content = generate_dataset_card(dataset, num_samples, repo_id)
with open(upload_dir / "README.md", "w") as f:
f.write(readme_content)
print("✓ README.md generated")
# Calculate shards
        # NOTE: rough proxy - treats the MB budget as a samples-per-shard count;
        # a byte-based estimate would be more accurate for variable-size samples
        samples_per_shard = max(100, max_shard_size_mb)
num_shards = max(1, (num_samples + samples_per_shard - 1) // samples_per_shard)
print(f"\nConverting to {num_shards} parquet shards...")
print(f"Using {num_workers} parallel workers...")
def convert_shard(shard_idx):
shard = dataset.shard(num_shards=num_shards, index=shard_idx)
shard_path = data_dir / f"train-{shard_idx:05d}-of-{num_shards:05d}.parquet"
shard.to_parquet(str(shard_path))
return shard_idx, len(shard)
# Parallel conversion
completed = 0
with ThreadPoolExecutor(max_workers=num_workers) as executor:
futures = {executor.submit(convert_shard, i): i for i in range(num_shards)}
for future in as_completed(futures):
shard_idx, shard_len = future.result()
completed += 1
print(f" [{completed}/{num_shards}] Shard {shard_idx}: {shard_len} samples")
parquet_files = list(data_dir.glob("*.parquet"))
total_size = sum(f.stat().st_size for f in parquet_files)
print(f"✓ Created {len(parquet_files)} shards ({total_size / 1e9:.2f} GB)")
# Create repo & upload
api = HfApi()
api.create_repo(repo_id=repo_id, repo_type="dataset", private=private, exist_ok=True)
print(f"\n✓ Repository ready: {repo_id}")
print(f"\nUploading with upload_large_folder ({num_workers} workers)...")
print("Using hf_xet HIGH_PERFORMANCE mode")
api.upload_large_folder(
folder_path=str(upload_dir),
repo_id=repo_id,
repo_type="dataset",
num_workers=num_workers,
)
print(f"\n✓ Uploaded to: https://huggingface.co/datasets/{repo_id}")
return {"status": "success", "repo_id": repo_id, "samples": num_samples}
finally:
shutil.rmtree(temp_dir, ignore_errors=True)
print("✓ Cleaned up temp files")
def main():
parser = argparse.ArgumentParser(description="Fast upload dataset to HuggingFace Hub")
parser.add_argument("--input", "-i", required=True, help="Input path or HF dataset name")
parser.add_argument("--repo", "-r", required=True, help="Target HF repo")
parser.add_argument("--workers", "-w", type=int, default=None, help="Parallel workers")
parser.add_argument("--shard-size", "-s", type=int, default=450, help="Max shard size MB")
parser.add_argument("--split", default="train", help="Dataset split")
parser.add_argument("--private", action="store_true", help="Make repo private")
parser.add_argument("--token", default=None, help="HF token")
args = parser.parse_args()
push_to_hub_fast(
input_path=args.input,
repo_id=args.repo,
num_workers=args.workers,
max_shard_size_mb=args.shard_size,
split=args.split,
private=args.private,
token=args.token,
)
if __name__ == "__main__":
main()
```
## Why This Is Faster
1. **Parallel parquet conversion**: Instead of sequential shard creation, uses `ThreadPoolExecutor` to convert multiple shards simultaneously
2. **`upload_large_folder` benefits** ([docs](https://huggingface.co/docs/huggingface_hub/en/guides/upload#upload-a-large-folder)):
- Multi-threaded uploads with `num_workers`
- Resumable - caches progress locally
- Resilient - auto-retries on transient errors
3. **hf_xet chunk deduplication**: With `HF_XET_HIGH_PERFORMANCE=1`, uploads reached **3.45 GB/s** vs ~25-40 MB/s with standard upload
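As a refinement to the shard calculation in the script above, the sample-count proxy could be replaced by a byte-based estimate. A minimal sketch, assuming the total byte size comes from something like `ds._estimate_nbytes()` (a private `datasets` helper, so treat it as unstable):

```python
def shard_count(total_bytes: int, max_shard_size_mb: int = 450) -> int:
    """Ceil-divide the dataset's total byte size by the target shard size."""
    max_shard_bytes = max_shard_size_mb * 1024 * 1024
    return max(1, -(-total_bytes // max_shard_bytes))  # ceiling division

# e.g. a ~16 GiB dataset with 450 MiB shards -> 37 shards
```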
## Test Dataset
I tested this with an image dataset containing:
- **15,000 samples** (~16GB total)
- Each sample has: image, layout data, markdown, HTML, VQA pairs
- Average ~1MB per sample
## Questions
1. Is there something in the current API that achieves similar performance that I'm missing?
2. If not, would this be a useful addition to the library? Happy to contribute a PR if so.
Thanks for the amazing library!
| null |
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/7946/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/7946/timeline
| null | null | null | null | false
|
https://api.github.com/repos/huggingface/datasets/issues/7945
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/7945/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/7945/comments
|
https://api.github.com/repos/huggingface/datasets/issues/7945/events
|
https://github.com/huggingface/datasets/pull/7945
| 3,814,399,493
|
PR_kwDODunzps69ODBk
| 7,945
|
set dev version
|
{
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null |
[
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_7945). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update."
] | 2026-01-14T18:34:50
| 2026-01-14T18:37:30
| 2026-01-14T18:34:56
|
MEMBER
| null | null | null | null | null |
{
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/7945/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/7945/timeline
| null | null | false
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/7945",
"html_url": "https://github.com/huggingface/datasets/pull/7945",
"diff_url": "https://github.com/huggingface/datasets/pull/7945.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/7945.patch",
"merged_at": "2026-01-14T18:34:56"
}
| true
|
https://api.github.com/repos/huggingface/datasets/issues/7944
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/7944/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/7944/comments
|
https://api.github.com/repos/huggingface/datasets/issues/7944/events
|
https://github.com/huggingface/datasets/pull/7944
| 3,814,369,524
|
PR_kwDODunzps69N8bB
| 7,944
|
Release: 4.5.0
|
{
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null |
[
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_7944). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update."
] | 2026-01-14T18:27:17
| 2026-01-14T18:30:23
| 2026-01-14T18:28:25
|
MEMBER
| null | null | null | null | null |
{
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/7944/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/7944/timeline
| null | null | false
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/7944",
"html_url": "https://github.com/huggingface/datasets/pull/7944",
"diff_url": "https://github.com/huggingface/datasets/pull/7944.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/7944.patch",
"merged_at": "2026-01-14T18:28:25"
}
| true
|
https://api.github.com/repos/huggingface/datasets/issues/7943
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/7943/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/7943/comments
|
https://api.github.com/repos/huggingface/datasets/issues/7943/events
|
https://github.com/huggingface/datasets/pull/7943
| 3,809,778,662
|
PR_kwDODunzps68-rLO
| 7,943
|
Add _generate_shards
|
{
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null |
[
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_7943). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update."
] | 2026-01-13T17:10:03
| 2026-01-14T16:46:53
| 2026-01-14T16:46:51
|
MEMBER
| null | null | null | null |
Useful to list a dataset's shards:
```python
from datasets import load_dataset_builder, StreamingDownloadManager
dlm = StreamingDownloadManager()
def get_shards(dataset_name, *args, **kwargs):
b = load_dataset_builder(dataset_name, *args, **kwargs)
splits = b._split_generators(dlm)
return list(b._generate_shards(**splits[0].gen_kwargs))
print(get_shards("username/dataset_name"))
# ['hf://datasets/...', ...]
```
I'll use this in combination with https://github.com/huggingface/datasets/pull/7897 in the Dataset Viewer for an API endpoint that does {dataset, config, split, offset, limit} -> [{fileUri, offset, limit}]. This will be useful for editing datasets since you can get a row's location inside a dataset cc @cfahlgren1
This will be similar to https://github.com/huggingface/dataset-viewer/pull/3276 but works for any dataset format: csv, json, webdataset, images etc.
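The {offset, limit} -> {fileUri, offset, limit} mapping described above boils down to locating a global row offset among per-shard row counts. A minimal sketch of that lookup (a hypothetical helper, not code from this PR):

```python
from bisect import bisect_right
from itertools import accumulate

def locate_row(shard_lengths: list[int], offset: int) -> tuple[int, int]:
    """Return (shard index, offset within that shard) for a global row offset."""
    starts = [0] + list(accumulate(shard_lengths))  # cumulative row starts per shard
    shard_idx = bisect_right(starts, offset) - 1
    return shard_idx, offset - starts[shard_idx]
```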
|
{
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/7943/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/7943/timeline
| null | null | false
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/7943",
"html_url": "https://github.com/huggingface/datasets/pull/7943",
"diff_url": "https://github.com/huggingface/datasets/pull/7943.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/7943.patch",
"merged_at": "2026-01-14T16:46:50"
}
| true
|
https://api.github.com/repos/huggingface/datasets/issues/7942
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/7942/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/7942/comments
|
https://api.github.com/repos/huggingface/datasets/issues/7942/events
|
https://github.com/huggingface/datasets/pull/7942
| 3,808,890,451
|
PR_kwDODunzps687sR_
| 7,942
|
add _OverridableIOWrapper
|
{
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null |
[
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_7942). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update."
] | 2026-01-13T13:37:09
| 2026-01-13T13:40:21
| 2026-01-13T13:38:02
|
MEMBER
| null | null | null | null |
fix https://github.com/huggingface/datasets/issues/7936
|
{
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/7942/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/7942/timeline
| null | null | false
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/7942",
"html_url": "https://github.com/huggingface/datasets/pull/7942",
"diff_url": "https://github.com/huggingface/datasets/pull/7942.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/7942.patch",
"merged_at": "2026-01-13T13:38:02"
}
| true
|
https://api.github.com/repos/huggingface/datasets/issues/7941
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/7941/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/7941/comments
|
https://api.github.com/repos/huggingface/datasets/issues/7941/events
|
https://github.com/huggingface/datasets/pull/7941
| 3,807,800,603
|
PR_kwDODunzps684EZa
| 7,941
|
Remove Python 3.7 and Python 2 code paths from _dill.py
|
{
"login": "tboerstad",
"id": 4872288,
"node_id": "MDQ6VXNlcjQ4NzIyODg=",
"avatar_url": "https://avatars.githubusercontent.com/u/4872288?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/tboerstad",
"html_url": "https://github.com/tboerstad",
"followers_url": "https://api.github.com/users/tboerstad/followers",
"following_url": "https://api.github.com/users/tboerstad/following{/other_user}",
"gists_url": "https://api.github.com/users/tboerstad/gists{/gist_id}",
"starred_url": "https://api.github.com/users/tboerstad/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/tboerstad/subscriptions",
"organizations_url": "https://api.github.com/users/tboerstad/orgs",
"repos_url": "https://api.github.com/users/tboerstad/repos",
"events_url": "https://api.github.com/users/tboerstad/events{/privacy}",
"received_events_url": "https://api.github.com/users/tboerstad/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
open
| false
| null |
[] | null |
[] | 2026-01-13T08:44:31
| 2026-01-13T08:44:31
| null |
NONE
| null | null | null | null |
This PR simplifies the pickle handling code to support only Python 3.9+.
Datasets requires Python 3.9+ (since PR #7474).
There are some dill-specific code branches checking for earlier versions of Python that can be removed.
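As an illustration (a hypothetical sketch, not the actual `_dill.py` code), the kind of version guard that becomes dead code once the minimum is Python 3.9 looks like:

```python
import sys
import pickle

# Before: a version-gated branch whose else-arm can never run on 3.9+.
if sys.version_info >= (3, 8):
    protocol = pickle.HIGHEST_PROTOCOL
else:  # dead code once the minimum supported version is Python 3.9
    protocol = 2

# After: the branch collapses to the modern path.
protocol = pickle.HIGHEST_PROTOCOL
```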
| null |
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/7941/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/7941/timeline
| null | null | false
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/7941",
"html_url": "https://github.com/huggingface/datasets/pull/7941",
"diff_url": "https://github.com/huggingface/datasets/pull/7941.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/7941.patch",
"merged_at": null
}
| true
|
https://api.github.com/repos/huggingface/datasets/issues/7940
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/7940/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/7940/comments
|
https://api.github.com/repos/huggingface/datasets/issues/7940/events
|
https://github.com/huggingface/datasets/pull/7940
| 3,807,386,503
|
PR_kwDODunzps682sBj
| 7,940
|
Improve readability and documentation of indexing integration tests
|
{
"login": "DeeptiAgarwal16",
"id": 115862867,
"node_id": "U_kgDOBuftUw",
"avatar_url": "https://avatars.githubusercontent.com/u/115862867?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/DeeptiAgarwal16",
"html_url": "https://github.com/DeeptiAgarwal16",
"followers_url": "https://api.github.com/users/DeeptiAgarwal16/followers",
"following_url": "https://api.github.com/users/DeeptiAgarwal16/following{/other_user}",
"gists_url": "https://api.github.com/users/DeeptiAgarwal16/gists{/gist_id}",
"starred_url": "https://api.github.com/users/DeeptiAgarwal16/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/DeeptiAgarwal16/subscriptions",
"organizations_url": "https://api.github.com/users/DeeptiAgarwal16/orgs",
"repos_url": "https://api.github.com/users/DeeptiAgarwal16/repos",
"events_url": "https://api.github.com/users/DeeptiAgarwal16/events{/privacy}",
"received_events_url": "https://api.github.com/users/DeeptiAgarwal16/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
open
| false
| null |
[] | null |
[] | 2026-01-13T06:42:07
| 2026-01-13T06:42:07
| null |
NONE
| null | null | null | null |
### Summary
This PR improves the readability and maintainability of the indexing integration tests by adding clear, detailed comments throughout the test suite.
### Motivation
The indexing tests cover multiple backends (FAISS and Elasticsearch) and involve non-trivial workflows such as vector creation, indexing, querying, and serialization. Adding explanatory comments helps new contributors and reviewers understand the intent of each test case more easily.
### What’s Changed
- Added descriptive docstrings to test classes and methods
- Included inline comments explaining:
- Dataset construction
- Index creation and configuration
- Search and batch search behavior
- Error handling and validation logic
- Serialization and cleanup steps
- No functional or behavioral changes
### Scope
- Documentation and readability improvements only
- No changes to test logic, APIs, or expected behavior
### Impact
This change lowers the barrier for new contributors, improves code comprehension, and makes future maintenance easier without affecting test coverage or performance.
### Checklist
- [x] No functional changes introduced
- [x] Tests pass locally
- [x] Follows existing project style and conventions
| null |
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/7940/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/7940/timeline
| null | null | false
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/7940",
"html_url": "https://github.com/huggingface/datasets/pull/7940",
"diff_url": "https://github.com/huggingface/datasets/pull/7940.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/7940.patch",
"merged_at": null
}
| true
|
https://api.github.com/repos/huggingface/datasets/issues/7939
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/7939/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/7939/comments
|
https://api.github.com/repos/huggingface/datasets/issues/7939/events
|
https://github.com/huggingface/datasets/issues/7939
| 3,806,889,870
|
I_kwDODunzps7i6IeO
| 7,939
|
datasets.load_from_disk progress bar optional manual control
|
{
"login": "Tommigun1980",
"id": 60286968,
"node_id": "MDQ6VXNlcjYwMjg2OTY4",
"avatar_url": "https://avatars.githubusercontent.com/u/60286968?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Tommigun1980",
"html_url": "https://github.com/Tommigun1980",
"followers_url": "https://api.github.com/users/Tommigun1980/followers",
"following_url": "https://api.github.com/users/Tommigun1980/following{/other_user}",
"gists_url": "https://api.github.com/users/Tommigun1980/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Tommigun1980/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Tommigun1980/subscriptions",
"organizations_url": "https://api.github.com/users/Tommigun1980/orgs",
"repos_url": "https://api.github.com/users/Tommigun1980/repos",
"events_url": "https://api.github.com/users/Tommigun1980/events{/privacy}",
"received_events_url": "https://api.github.com/users/Tommigun1980/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 1935892871,
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement",
"name": "enhancement",
"color": "a2eeef",
"default": true,
"description": "New feature or request"
}
] |
open
| false
| null |
[] | null |
[] | 2026-01-13T03:19:13
| 2026-01-13T03:19:13
| null |
NONE
| null | null |
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
|
{
"blocked_by": 0,
"total_blocked_by": 0,
"blocking": 0,
"total_blocking": 0
}
|
### Feature request
This is tangentially related to [https://github.com/huggingface/datasets/issues/7918](https://github.com/huggingface/datasets/issues/7918).
When loading a dataset with more than 16 files, a progress bar is shown (unless stdout is redirected or [https://github.com/huggingface/datasets/pull/7919](https://github.com/huggingface/datasets/pull/7919) is merged).
However, if you use multiple processes with data sharding, where each process loads the dataset, you get multiple copies of the progress bar, all fighting each other. It would be greatly appreciated if `datasets.load_from_disk` accepted an argument controlling progress bar visibility instead of it being hardcoded: a default of `None` would retain the current behavior (show the bar if the dataset has more than 16 files), while the user could pass `True` or `False` to force the bar on or off as needed.
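A minimal sketch of the proposed tri-state argument (the function and parameter names here are hypothetical, not part of the current API):

```python
def should_show_progress(num_files: int, enable_progress_bar=None) -> bool:
    """Hypothetical helper deciding whether load_from_disk shows a progress bar.

    None  -> keep current behavior (show only when the dataset has > 16 files)
    True  -> force the bar on
    False -> force the bar off (e.g. in all worker processes but one)
    """
    if enable_progress_bar is None:
        return num_files > 16
    return bool(enable_progress_bar)

print(should_show_progress(32))         # True: current default behavior
print(should_show_progress(32, False))  # False: forced off on extra workers
print(should_show_progress(4, True))    # True: forced on
```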
### Motivation
The progress bar could be forced off in all processes but one, avoiding the fighting progress bars and log spam.
It could also be manually forced on or off for other use cases.
### Your contribution
Possibly do a PR if this is accepted.
| null |
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/7939/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/7939/timeline
| null | null | null | null | false
|
https://api.github.com/repos/huggingface/datasets/issues/7938
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/7938/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/7938/comments
|
https://api.github.com/repos/huggingface/datasets/issues/7938/events
|
https://github.com/huggingface/datasets/pull/7938
| 3,804,486,642
|
PR_kwDODunzps68tYX1
| 7,938
|
Fix method to retrieve attributes from file object
|
{
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null |
[
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_7938). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update."
] | 2026-01-12T14:08:31
| 2026-01-12T14:13:12
| 2026-01-12T14:10:12
|
MEMBER
| null | null | null | null |
Fixes https://github.com/huggingface/datasets/issues/7936
|
{
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/7938/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/7938/timeline
| null | null | false
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/7938",
"html_url": "https://github.com/huggingface/datasets/pull/7938",
"diff_url": "https://github.com/huggingface/datasets/pull/7938.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/7938.patch",
"merged_at": "2026-01-12T14:10:11"
}
| true
|
https://api.github.com/repos/huggingface/datasets/issues/7937
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/7937/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/7937/comments
|
https://api.github.com/repos/huggingface/datasets/issues/7937/events
|
https://github.com/huggingface/datasets/pull/7937
| 3,803,185,984
|
PR_kwDODunzps68pBId
| 7,937
|
Fix duplicate log messages by disabling log propagation by default
|
{
"login": "tboerstad",
"id": 4872288,
"node_id": "MDQ6VXNlcjQ4NzIyODg=",
"avatar_url": "https://avatars.githubusercontent.com/u/4872288?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/tboerstad",
"html_url": "https://github.com/tboerstad",
"followers_url": "https://api.github.com/users/tboerstad/followers",
"following_url": "https://api.github.com/users/tboerstad/following{/other_user}",
"gists_url": "https://api.github.com/users/tboerstad/gists{/gist_id}",
"starred_url": "https://api.github.com/users/tboerstad/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/tboerstad/subscriptions",
"organizations_url": "https://api.github.com/users/tboerstad/orgs",
"repos_url": "https://api.github.com/users/tboerstad/repos",
"events_url": "https://api.github.com/users/tboerstad/events{/privacy}",
"received_events_url": "https://api.github.com/users/tboerstad/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
open
| false
| null |
[] | null |
[] | 2026-01-12T08:03:18
| 2026-01-13T08:10:24
| null |
NONE
| null | null | null | null |
This PR fixes an issue where applications that configure logging see duplicate messages from `datasets`:
```python
import logging
logging.basicConfig(level=logging.WARNING)
from datasets.utils.logging import get_logger
get_logger("datasets.load").warning("This appears twice")
```
Outputs:
```
This appears twice
WARNING:datasets.load:This appears twice
```
This non-standard behaviour breaks default logging expectations. The docstring for `disable_propagation()` incorrectly says: [Note that log propagation is disabled by default](https://github.com/huggingface/datasets/blob/6a1bc355a0ca2c8f9f5c10698215212f0f14e7b7/src/datasets/utils/logging.py#L161C1-L162C1)
Perhaps this was copied over from `transformers`, which disables log propagation by default unless it's running under CI: [`library_root_logger.propagate = is_ci`](https://github.com/huggingface/transformers/blob/37974267efefe020168ff27081fbab8bbce04720/src/transformers/utils/logging.py#L103)
To restore the old behaviour, users can do:
```python
import datasets
datasets.logging.enable_propagation()
```
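The duplication can be reproduced with the standard library alone (`demo.load` is a stand-in logger name, not the real `datasets` logger):

```python
import io
import logging

# Root logger with its own handler (what logging.basicConfig sets up).
root_stream = io.StringIO()
logging.getLogger().addHandler(logging.StreamHandler(root_stream))

# Library logger that also has its own handler, like datasets does.
lib_stream = io.StringIO()
lib_logger = logging.getLogger("demo.load")
lib_logger.addHandler(logging.StreamHandler(lib_stream))

lib_logger.warning("hello")    # propagate=True: both handlers fire
print(root_stream.getvalue())  # the root handler printed a duplicate copy

lib_logger.propagate = False   # what this PR does by default
lib_logger.warning("again")
print("again" in root_stream.getvalue())  # False: no more duplication
```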
| null |
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/7937/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/7937/timeline
| null | null | false
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/7937",
"html_url": "https://github.com/huggingface/datasets/pull/7937",
"diff_url": "https://github.com/huggingface/datasets/pull/7937.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/7937.patch",
"merged_at": null
}
| true
|
https://api.github.com/repos/huggingface/datasets/issues/7936
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/7936/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/7936/comments
|
https://api.github.com/repos/huggingface/datasets/issues/7936/events
|
https://github.com/huggingface/datasets/issues/7936
| 3,795,750,271
|
I_kwDODunzps7iPo1_
| 7,936
|
_add_retries_to_file_obj_read_method makes file_obj invalid for pyarrow
|
{
"login": "li-yi-dong",
"id": 73142299,
"node_id": "MDQ6VXNlcjczMTQyMjk5",
"avatar_url": "https://avatars.githubusercontent.com/u/73142299?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/li-yi-dong",
"html_url": "https://github.com/li-yi-dong",
"followers_url": "https://api.github.com/users/li-yi-dong/followers",
"following_url": "https://api.github.com/users/li-yi-dong/following{/other_user}",
"gists_url": "https://api.github.com/users/li-yi-dong/gists{/gist_id}",
"starred_url": "https://api.github.com/users/li-yi-dong/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/li-yi-dong/subscriptions",
"organizations_url": "https://api.github.com/users/li-yi-dong/orgs",
"repos_url": "https://api.github.com/users/li-yi-dong/repos",
"events_url": "https://api.github.com/users/li-yi-dong/events{/privacy}",
"received_events_url": "https://api.github.com/users/li-yi-dong/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null |
[
"hmm not sure how to fix this, I believe `file_obj.__getattr__ = lambda _, attr: getattr(orig_file_obj, attr)` would make all the methods point to the original file_obj",
"> hmm not sure how to fix this, I believe `file_obj.__getattr__ = lambda _, attr: getattr(orig_file_obj, attr)` would make all the methods point to the original file_obj\n\nCould you verify by executing\n```python\nfrom datasets.utils.file_utils import xopen\nf = xopen('hdfs://xxxx.parquet', 'rb')\nf.readable()\n```\nIf it's indeed a bug, I think all data files that using pyarrow would break.",
"Just found the issue and merged a quick fix, feel free to install `datasets` from source and let me know if it works !",
"> Just found the issue and merged a quick fix, feel free to install `datasets` from source and let me know if it works !\n\nIt still not working 🥹\n\n<img width=\"1216\" height=\"348\" alt=\"Image\" src=\"https://github.com/user-attachments/assets/a68e8f3d-2491-4616-9777-951c02c88580\" />\n\n<img width=\"1780\" height=\"962\" alt=\"Image\" src=\"https://github.com/user-attachments/assets/9ae8f799-0d24-40ac-8cae-6f5a77d84dec\" />",
"Arf sorry ! I opened https://github.com/huggingface/datasets/pull/7942, hopefully it's alright now ^^' feel free to try it out",
"It works. Thx a lot.",
"Yay !"
] | 2026-01-09T07:05:25
| 2026-01-14T13:47:50
| 2026-01-13T13:38:03
|
NONE
| null | null |
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
|
{
"blocked_by": 0,
"total_blocked_by": 0,
"blocking": 0,
"total_blocking": 0
}
|
### Describe the bug
I'm trying to use `load_dataset` to build a dataset that reads Parquet data from HDFS in streaming mode, like this:
```python
ds = load_dataset(
"parquet",
data_files={
"train": "hdfs://xxx/train*.parquet",
"test": "hdfs://xxx/test*.parquet"
},
streaming=True,
)
```
I encountered an error
<img width="1784" height="662" alt="Image" src="https://github.com/user-attachments/assets/14f25602-ef37-4a84-83fc-dac426451163" />
In file src/datasets/packaged_modules/parquet/parquet.py,
```python
with open(file, "rb") as f:
self.info.features = datasets.Features.from_arrow_schema(pq.read_schema(f))
```
the `open` is replaced with `xopen` from src/datasets/utils/file_utils.py.
In the function `_add_retries_to_file_obj_read_method`, the original file object is replaced by an `io.RawIOBase()` instance. Even though it tries to proxy all methods back to the original file object, the result is still unusable for pyarrow.
```python
try:
file_obj.read = read_with_retries
except AttributeError: # read-only attribute
orig_file_obj = file_obj
file_obj = io.RawIOBase()
file_obj.read = read_with_retries
file_obj.__getattr__ = lambda _, attr: getattr(orig_file_obj, attr)
return file_obj
```
For example, the original `file_obj.readable()` returns `True`, while the new `file_obj.readable()` returns `False`.
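The failure mode can be reproduced without HDFS at all (`Original` is a stand-in for the real file object):

```python
import io

class Original:
    def readable(self):
        return True

orig_file_obj = Original()
file_obj = io.RawIOBase()
# Instance-level __getattr__, as in _add_retries_to_file_obj_read_method:
file_obj.__getattr__ = lambda _, attr: getattr(orig_file_obj, attr)

# Two reasons the proxy fails:
# 1. dunder methods like __getattr__ are looked up on the *type*, so an
#    instance-level assignment is never consulted;
# 2. readable() already exists on RawIOBase (returning False), so even a
#    working __getattr__ fallback would not be reached for it.
print(file_obj.readable())       # False: pyarrow sees an unreadable file
print(orig_file_obj.readable())  # True
```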
### Steps to reproduce the bug
```python
from datasets.utils.file_utils import xopen
f = xopen('hdfs://xxxx.parquet', 'rb')
f.readable()
```
### Expected behavior
Not sure
### Environment info
Datasets 4.4.2
|
{
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/7936/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/7936/timeline
| null |
completed
| null | null | false
|
https://api.github.com/repos/huggingface/datasets/issues/7935
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/7935/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/7935/comments
|
https://api.github.com/repos/huggingface/datasets/issues/7935/events
|
https://github.com/huggingface/datasets/pull/7935
| 3,795,376,274
|
PR_kwDODunzps68QDVY
| 7,935
|
Bug fix: Add HDFS hostname to protocol prefix
|
{
"login": "li-yi-dong",
"id": 73142299,
"node_id": "MDQ6VXNlcjczMTQyMjk5",
"avatar_url": "https://avatars.githubusercontent.com/u/73142299?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/li-yi-dong",
"html_url": "https://github.com/li-yi-dong",
"followers_url": "https://api.github.com/users/li-yi-dong/followers",
"following_url": "https://api.github.com/users/li-yi-dong/following{/other_user}",
"gists_url": "https://api.github.com/users/li-yi-dong/gists{/gist_id}",
"starred_url": "https://api.github.com/users/li-yi-dong/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/li-yi-dong/subscriptions",
"organizations_url": "https://api.github.com/users/li-yi-dong/orgs",
"repos_url": "https://api.github.com/users/li-yi-dong/repos",
"events_url": "https://api.github.com/users/li-yi-dong/events{/privacy}",
"received_events_url": "https://api.github.com/users/li-yi-dong/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
open
| false
| null |
[] | null |
[
"Hi ! is it related to https://github.com/huggingface/datasets/issues/7934 ?\r\n\r\nIt's not clear to me why the protocol would need this, given hostname should be present in `pattern` already\r\n\r\n```python\r\nresolve_pattern(\"hdfs://hostname/user/xxx\", ...)\r\n```",
"> Hi ! is it related to #7934 ?\r\n> \r\n> It's not clear to me why the protocol would need this, given hostname should be present in `pattern` already\r\n> \r\n> ```python\r\n> resolve_pattern(\"hdfs://hostname/user/xxx\", ...)\r\n> ```\r\n\r\nIt's related to #7934 in a subttle way. In my use case, I need to specify the hdfs hostname. In theory, I can do it by\r\n```python\r\nds = load_dataset(\r\n \"parquet\",\r\n data_files={\r\n \"train\": \"hdfs://hostname/xxx*.parquet\",\r\n },\r\n streaming=True,\r\n)\r\n```\r\nor\r\n```python\r\nds = load_dataset(\r\n \"parquet\",\r\n data_files={\r\n \"train\": \"hdfs:///xxx*.parquet\",\r\n },\r\n streaming=True,\r\n storage_options={\r\n \"host\": \"hostname\"\r\n }\r\n)\r\n```\r\nNone of them work.\r\nThe first one does not work due to what this PR trying to fix, and the second one due to #7934.\r\n\r\nYes, `resolve_pattern` would be called like `resolve_pattern(\"hdfs://hostname/user/xxx\", ...)`, but its out put would be like `hdfs:///user/xxx`, no hostname in it. This output would be passed to later file operation like `fsspec.open()`. It needs the hostname in the url to find the HDFS cluster correctly.",
"@lhoestq \r\nHi! Is there any concern?🙃",
"I see, I think the path forward is to fix https://github.com/huggingface/datasets/issues/7934 which sounds like an actual xPath bug, while resolve_pattern dropping the hostname comes from fsspec HDFS implementation that we should probably try to follow",
"Fixing #7934 alone can solve my problem. \r\n\r\nBut I don't think fsspec intends to drop the hostname. Function `resolve_pattern` here is supposed to convert a pattern to absolute file paths, and keeping the protocol intouched. `fs.glob` just returns the absolute paths to files, of which no hostname should in the result. The problem is how the function `resolve_pattern` reconstructs the whole path, ignoring the HDFS hostname in the protocol.\r\n\r\nFrom another point of view, in `resolve_pattern` `fs.glob` is call with `hdfs://hostname/user/xxx` but latter `fs.open` is called with `hdfs:///user/xxx`, which is inconsistent.\r\n\r\n"
] | 2026-01-09T03:59:45
| 2026-01-15T04:06:17
| null |
NONE
| null | null | null | null |
For an HDFS URL with a hostname like `hdfs://hostname/user/xxx`, the function `resolve_pattern` drops the hostname and outputs `hdfs:///user/xxx`. This can break later file operations by making them connect to the wrong HDFS cluster.
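A minimal stdlib sketch of the fix's intent (the function name is illustrative, not the actual `resolve_pattern` code): when rebuilding a resolved file URL, keep both the scheme and the netloc from the original pattern.

```python
from urllib.parse import urlsplit, urlunsplit

def join_resolved_path(pattern: str, resolved_path: str) -> str:
    # Keep the scheme AND the netloc (HDFS hostname) from the original
    # pattern instead of reconstructing only "scheme://" + path.
    parts = urlsplit(pattern)
    return urlunsplit((parts.scheme, parts.netloc, resolved_path, "", ""))

print(join_resolved_path("hdfs://hostname/user/xxx/train*.parquet",
                         "/user/xxx/train0.parquet"))
# hdfs://hostname/user/xxx/train0.parquet
```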
| null |
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/7935/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/7935/timeline
| null | null | false
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/7935",
"html_url": "https://github.com/huggingface/datasets/pull/7935",
"diff_url": "https://github.com/huggingface/datasets/pull/7935.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/7935.patch",
"merged_at": null
}
| true
|
https://api.github.com/repos/huggingface/datasets/issues/7934
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/7934/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/7934/comments
|
https://api.github.com/repos/huggingface/datasets/issues/7934/events
|
https://github.com/huggingface/datasets/issues/7934
| 3,792,642,445
|
I_kwDODunzps7iDyGN
| 7,934
|
xPath cannot handle hdfs:///xxxx properly
|
{
"login": "li-yi-dong",
"id": 73142299,
"node_id": "MDQ6VXNlcjczMTQyMjk5",
"avatar_url": "https://avatars.githubusercontent.com/u/73142299?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/li-yi-dong",
"html_url": "https://github.com/li-yi-dong",
"followers_url": "https://api.github.com/users/li-yi-dong/followers",
"following_url": "https://api.github.com/users/li-yi-dong/following{/other_user}",
"gists_url": "https://api.github.com/users/li-yi-dong/gists{/gist_id}",
"starred_url": "https://api.github.com/users/li-yi-dong/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/li-yi-dong/subscriptions",
"organizations_url": "https://api.github.com/users/li-yi-dong/orgs",
"repos_url": "https://api.github.com/users/li-yi-dong/repos",
"events_url": "https://api.github.com/users/li-yi-dong/events{/privacy}",
"received_events_url": "https://api.github.com/users/li-yi-dong/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
open
| false
| null |
[] | null |
[] | 2026-01-08T12:14:11
| 2026-01-08T12:14:11
| null |
NONE
| null | null |
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
|
{
"blocked_by": 0,
"total_blocked_by": 0,
"blocking": 0,
"total_blocking": 0
}
|
### Describe the bug
`_as_str('hdfs:///xxxx')` returns `hdfs://xxxx`, removing one `/` and making the path invalid.
For a use case like
```python
ds = load_dataset(
"parquet",
data_files={
"train": "hdfs:///user/path/to/data/train*.parquet",
},
streaming=True,
storage_options={
"host": "hostname",
}
)
```
would get
```
File "/usr/local/lib/python3.11/site-packages/datasets/load.py", line 1511, in load_dataset
return builder_instance.as_streaming_dataset(split=split)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.11/site-packages/datasets/builder.py", line 1193, in as_streaming_dataset
splits_generators = {sg.name: sg for sg in self._split_generators(dl_manager)}
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.11/site-packages/datasets/packaged_modules/parquet/parquet.py", line 123, in _split_generators
with open(file, "rb") as f:
^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.11/site-packages/datasets/streaming.py", line 73, in wrapper
return function(*args, download_config=download_config, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.11/site-packages/datasets/utils/file_utils.py", line 963, in xopen
file_obj = fsspec.open(file, mode=mode, *args, **kwargs).open()
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.11/site-packages/fsspec/core.py", line 508, in open
out = open_files(
^^^^^^^^^^^
File "/usr/local/lib/python3.11/site-packages/fsspec/core.py", line 295, in open_files
fs, fs_token, paths = get_fs_token_paths(
^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.11/site-packages/fsspec/core.py", line 672, in get_fs_token_paths
chain = _un_chain(urlpath0, storage_options or {})
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.11/site-packages/fsspec/core.py", line 365, in _un_chain
kw = dict(
^^^^^
TypeError: dict() got multiple values for keyword argument 'host'
```
This happens because the file passed to `fsspec.open` is `hdfs://user/path/to/data/trainxxx.parquet`, so fsspec takes `user` as the hostname.
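The triple-slash semantics can be checked with the standard library (`rebuild` is an illustrative helper, not the library's `_as_str`): an empty netloc must survive reconstruction, otherwise the first path segment is misread as the host.

```python
from urllib.parse import urlsplit

def rebuild(url: str) -> str:
    # Rebuild scheme://netloc/path explicitly: with an empty netloc
    # ("hdfs:///..."), the third slash must be kept, or the first path
    # segment becomes the hostname.
    p = urlsplit(url)
    return f"{p.scheme}://{p.netloc}{p.path}"

print(rebuild("hdfs:///user/path/to/data/train.parquet"))
# hdfs:///user/path/to/data/train.parquet  (all three slashes kept)

broken = "hdfs://user/path/to/data/train.parquet"  # what _as_str produces
print(urlsplit(broken).netloc)  # 'user': misread as the hostname
```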
### Steps to reproduce the bug
<img width="992" height="148" alt="Image" src="https://github.com/user-attachments/assets/98e1dac2-e81b-4727-bf7a-55faaf0c8168" />
### Expected behavior
All three `/` characters should be kept.
### Environment info
datasets 4.4.2
| null |
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/7934/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/7934/timeline
| null | null | null | null | false
|
https://api.github.com/repos/huggingface/datasets/issues/7933
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/7933/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/7933/comments
|
https://api.github.com/repos/huggingface/datasets/issues/7933/events
|
https://github.com/huggingface/datasets/pull/7933
| 3,780,607,384
|
PR_kwDODunzps67fNaP
| 7,933
|
feat: Add Apache TsFile format support
|
{
"login": "sinanshamsudheen",
"id": 186699478,
"node_id": "U_kgDOCyDO1g",
"avatar_url": "https://avatars.githubusercontent.com/u/186699478?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sinanshamsudheen",
"html_url": "https://github.com/sinanshamsudheen",
"followers_url": "https://api.github.com/users/sinanshamsudheen/followers",
"following_url": "https://api.github.com/users/sinanshamsudheen/following{/other_user}",
"gists_url": "https://api.github.com/users/sinanshamsudheen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sinanshamsudheen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sinanshamsudheen/subscriptions",
"organizations_url": "https://api.github.com/users/sinanshamsudheen/orgs",
"repos_url": "https://api.github.com/users/sinanshamsudheen/repos",
"events_url": "https://api.github.com/users/sinanshamsudheen/events{/privacy}",
"received_events_url": "https://api.github.com/users/sinanshamsudheen/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
open
| false
| null |
[] | null |
[
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_7933). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update."
] | 2026-01-05T08:28:12
| 2026-01-06T10:26:51
| null |
NONE
| null | null | null | null |
# Add Apache TsFile format support
Adds support for loading `.tsfile` datasets. Closes #7922.
## What's TsFile?
[Apache TsFile](https://tsfile.apache.org/) is a columnar time-series format popular in IoT. The TsFile community requested this integration and offered to help maintain it.
## What I did
Created a new `TsFile` builder in `packaged_modules/tsfile/` following the same pattern as HDF5. Registered the module and added `.tsfile` extension mapping. Also added `tsfile>=2.0.0` as an optional dependency.
The builder uses `tsfile.to_dataframe()` with iterator mode for memory-efficient reading, then converts to PyArrow tables. Schema is inferred automatically from file metadata.
## Config options
- `batch_size` - rows per batch (default 10000)
- `table_name` - which table to read (for multi-table files)
- `columns` - filter specific columns
- `start_time` / `end_time` - time-range filtering
## Usage
```python
from datasets import load_dataset
ds = load_dataset("tsfile", data_files=["data.tsfile"], split="train")
# with filtering
ds = load_dataset("tsfile", data_files=["data.tsfile"],
columns=["temperature"], start_time=1609459200000)
```
## Tests
Added 11 tests covering config validation, basic loading, data integrity, feature inference, and error handling. All passing.
| null |
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/7933/reactions",
"total_count": 4,
"+1": 4,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/7933/timeline
| null | null | false
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/7933",
"html_url": "https://github.com/huggingface/datasets/pull/7933",
"diff_url": "https://github.com/huggingface/datasets/pull/7933.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/7933.patch",
"merged_at": null
}
| true
|
https://api.github.com/repos/huggingface/datasets/issues/7932
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/7932/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/7932/comments
|
https://api.github.com/repos/huggingface/datasets/issues/7932/events
|
https://github.com/huggingface/datasets/pull/7932
| 3,777,725,050
|
PR_kwDODunzps67WqhL
| 7,932
|
Fix duplicate keyword conflict in load_dataset_builder
|
{
"login": "Ashish570raj",
"id": 110705207,
"node_id": "U_kgDOBpk6Nw",
"avatar_url": "https://avatars.githubusercontent.com/u/110705207?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Ashish570raj",
"html_url": "https://github.com/Ashish570raj",
"followers_url": "https://api.github.com/users/Ashish570raj/followers",
"following_url": "https://api.github.com/users/Ashish570raj/following{/other_user}",
"gists_url": "https://api.github.com/users/Ashish570raj/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Ashish570raj/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Ashish570raj/subscriptions",
"organizations_url": "https://api.github.com/users/Ashish570raj/orgs",
"repos_url": "https://api.github.com/users/Ashish570raj/repos",
"events_url": "https://api.github.com/users/Ashish570raj/events{/privacy}",
"received_events_url": "https://api.github.com/users/Ashish570raj/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
open
| false
| null |
[] | null |
[
"Hi HuggingFace team\r\nThis PR fixes issue #4910 by safely merging builder_kwargs and config_kwargs to avoid duplicate keyword errors. \r\nA regression test is included to ensure this does not happen again. \r\n\r\nPlease let me know if you’d like any changes. Thanks!\r\n"
] | 2026-01-03T05:49:06
| 2026-01-03T05:52:02
| null |
NONE
| null | null | null | null |
Fixes #4910
This PR fixes a bug where passing the same keyword in `builder_kwargs` and
`config_kwargs` caused a `TypeError` in `load_dataset_builder`.
The kwargs are now merged safely so that `config_kwargs` override
`builder_kwargs` without duplication. A regression test is added to prevent
this from happening again.
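The merge described above can be sketched as follows; `merge_builder_kwargs` is a hypothetical helper named for illustration, not the actual diff in this PR:

```python
def merge_builder_kwargs(builder_kwargs, config_kwargs):
    """Merge the two kwarg dicts so config_kwargs win on conflict.

    Illustrates the fix: previously both dicts were forwarded as **kwargs,
    so a key present in both raised
    TypeError: got multiple values for keyword argument.
    """
    merged = dict(builder_kwargs)   # builder-level defaults first
    merged.update(config_kwargs)    # config-level values take precedence
    return merged

# A key present in both dicts no longer raises; config_kwargs wins:
merged = merge_builder_kwargs({"data_dir": "./a", "name": "x"}, {"name": "y"})
# merged == {"data_dir": "./a", "name": "y"}
```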
| null |
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/7932/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/7932/timeline
| null | null | false
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/7932",
"html_url": "https://github.com/huggingface/datasets/pull/7932",
"diff_url": "https://github.com/huggingface/datasets/pull/7932.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/7932.patch",
"merged_at": null
}
| true
|
https://api.github.com/repos/huggingface/datasets/issues/7931
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/7931/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/7931/comments
|
https://api.github.com/repos/huggingface/datasets/issues/7931/events
|
https://github.com/huggingface/datasets/issues/7931
| 3,777,662,799
|
I_kwDODunzps7hKo9P
| 7,931
|
Enable CORS + HTTP Range support for browser partial reads on cas-bridge.xethub.hf.co (Parquet row-group access)
|
{
"login": "cornhundred",
"id": 8352840,
"node_id": "MDQ6VXNlcjgzNTI4NDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/8352840?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/cornhundred",
"html_url": "https://github.com/cornhundred",
"followers_url": "https://api.github.com/users/cornhundred/followers",
"following_url": "https://api.github.com/users/cornhundred/following{/other_user}",
"gists_url": "https://api.github.com/users/cornhundred/gists{/gist_id}",
"starred_url": "https://api.github.com/users/cornhundred/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/cornhundred/subscriptions",
"organizations_url": "https://api.github.com/users/cornhundred/orgs",
"repos_url": "https://api.github.com/users/cornhundred/repos",
"events_url": "https://api.github.com/users/cornhundred/events{/privacy}",
"received_events_url": "https://api.github.com/users/cornhundred/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 1935892871,
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement",
"name": "enhancement",
"color": "a2eeef",
"default": true,
"description": "New feature or request"
}
] |
open
| false
| null |
[] | null |
[
"Cc @assafvayner maybe ? or @cfahlgren1 @severo if you've already encountered this ?",
"OK, reproduced with hyparquet on https://huggingface.co/spaces/hyperparam/hyperparam, see https://huggingface.co/spaces/hyperparam/hyperparam?url=https%3A%2F%2Fhuggingface.co%2Fdatasets%2Ffacebook%2Fresearch-plan-gen%2Fblob%2Frefs%2Fconvert%2Fparquet%2Farxiv%2Ftest%2F0000.parquet for example\n\nError message:\n\n```\nCross-Origin Request Blocked: The Same Origin Policy disallows reading the remote resource at https://cas-bridge.xethub.hf.co/xet-bridge-us/695138b5329f4825326ac6c8/8e12ca920d225791200b59843ddb8e469b1c3c59f92cc2a9ffdec88b16ca00f6?X-Amz-Algorithm=AWS4-HMAC-SHA256&X-Amz-Content-Sha256=UNSIGNED-PAYLOAD&X-Amz-Credential=...&X-Amz-Date=20260109T124456Z&X-Amz-Expires=3600&X-Amz-Signature=...\n```\n\nNote that it works in https://hyparam.github.io/demos/hyparquet/?key=https%3A%2F%2Fhuggingface.co%2Fdatasets%2Ffacebook%2Fresearch-plan-gen%2Fresolve%2Frefs%252Fconvert%252Fparquet%2Farxiv%2Ftest%2F0000.parquet, which uses a more recent version of hyparquet.\n\nSo, my guess is:\n\n- HEAD is forbidden in the S3 bucket (clients use HEAD to get the Parquet file size),\n- a possible fix, on client side, is to use GET with a 0-byte length: https://github.com/hyparam/hyparquet/pull/137.\n\nOn HF side, we should allow HEAD on signed URLs",
"cc @coyotte508 too, just in case",
"Possibly, the solution is to add `HEAD` in `AllowedMethods` in the S3 bucket configuration, as here:\n\n```json\n[\n {\n \"AllowedHeaders\": [\n \"*\"\n ],\n \"AllowedMethods\": [\n \"HEAD\",\n \"GET\"\n ],\n \"AllowedOrigins\": [\n \"*\"\n ],\n \"ExposeHeaders\": [\n \"Content-Range\",\n \"ETag\",\n \"x-amz-checksum-crc32\"\n ]\n }\n]\n```",
"cc @assafvayner @rajatarya @Hugoch for viz / fix xet-side\n\na very annoying \"feature\" with S3 is that presigned GET / HEAD urls aren't compatible with each other, eg a presigned GET can't do HEAD calls, which led to a host of issues and hacks on our side. We even escalated to AWS a few times, without success.\n\nNote that Cloudfront is ok, we can use a presigend GET url for HEAD calls with cloudfront, just not with S3 directly.\n\nSince the CAS bridge is not an actual S3 bucket, I would love for it to be able to respond to HEAD requests with a presigned GET url.\n\n(note: maybe it's just a CORS issue and not a presigned URL issue 🤷 )",
"Thanks for reporting this @cornhundred - we (Xet team) will take on the changes to add appropriate CORS headers to our Bridge service to enable this use case.\n\n@severo : Do the repro steps still work for you? I don't see any errors when I go to https://huggingface.co/spaces/hyperparam/hyperparam?url=https%3A%2F%2Fhuggingface.co%2Fdatasets%2Ffacebook%2Fresearch-plan-gen%2Fblob%2Frefs%2Fconvert%2Fparquet%2Farxiv%2Ftest%2F0000.parquet\n\nEither @cornhundred or @severo can you give me repro steps so we can be sure we've got this fixed correctly?",
"And I've filed https://linear.app/xet/issue/XET-815/bridge-add-cors-headers-to-support-parquet-range-reads to track this issue on the Xet side. I'll keep updating this GH issue with progress, but this way we won't lose track of this.",
"It looks like the bug has been fixed indeed. The HEAD request returns 200 and the response is used by the JS client.",
"> It looks like the bug has been fixed indeed. The HEAD request returns 200 and the response is used by the JS client.\n\nWell now I'm confused, because I'm pretty sure we didn't change/deploy anything on the Xet side related to this.",
"hmm, me too. I cannot reproduce the issue.\n\nHere is a screenshot of the HEAD request, which was an error 3 days ago:\n\n<img width=\"1839\" height=\"1718\" alt=\"Image\" src=\"https://github.com/user-attachments/assets/a2e49839-8ca5-405b-a7a4-c26489e9b417\" />\n\nThe response headers:\n\n```\nHTTP/1.1 200 OK\nContent-Length: 4086406\nConnection: keep-alive\nContent-Disposition: inline; filename*=UTF-8''0000.parquet; filename=\"0000.parquet\";\nCache-Control: public, max-age=31536000\nETag: \"8e12ca920d225791200b59843ddb8e469b1c3c59f92cc2a9ffdec88b16ca00f6\"\naccess-control-allow-origin: *\naccess-control-allow-headers: Content-Range, Content-Type, Content-Disposition, ETag\naccess-control-expose-headers: Accept-Ranges, Content-Range, Content-Type, Content-Disposition, ETag, X-Cache\nAccept-Ranges: bytes\nx-request-id: 01KEHD46KW533MGPAK7BXHY336\nDate: Fri, 09 Jan 2026 12:53:56 GMT\nX-Cache: Hit from cloudfront\nVia: 1.1 07cb86faf6a141962da4e2d7c85db038.cloudfront.net (CloudFront)\nX-Amz-Cf-Pop: CDG52-P1\nX-Amz-Cf-Id: jRjoc9PHoiv60CnDZfF6ZXc7Kwhbxc5DxtytvFpUT-EomFPDN1OjFw==\nAge: 270414\nContent-Security-Policy: default-src 'none'; sandbox\n```",
"Ah, this could no longer repro because now Cloudfront has cached this request - so the HEAD request to Cloudfront responds as expected.\n\nThe original issue is on Xet Bridge service (cas-bridge.xethub.hf.co) - maybe the issue remains that Bridge service doesn't have the appropriate CORS headers to support this request.",
"Also, I just tried with one of my private datasets. Not sure if it's related, on this URL I get an error, not with HEAD, but with the OPTIONS request.\n\n```\nXHR OPTIONS https://cas-bridge.xethub.hf.co/xet-bridge-us/655df24cde919d4162341a19/09ed3e86bf64d019919194d776abaa53b14acae6701129bb09f6169041b43f92?X-Amz-Algorithm=AWS4-HMAC-SHA256&X-Amz-Content-Sha256=UNSIGNED-PAYLOAD&X-Amz-Credential=cas/20260112/us-east-1/s3/aws4_request&X-Amz-Date=20260112T160451Z&X-Amz-Expires=3600&X-Amz-Signature=4562a3c72667a4bedc056c87e75a2810b52b682ebe18f7ff5b6c8d4ab081cc38&X-Amz-SignedHeaders=host&X-Xet-Cas-Uid=60a76b174e24361791fe822d&response-content-disposition=inline;+filename*=UTF-8''0000.parquet;+filename=\"0000.parquet\";&x-id=GetObject&Expires=1768237491&Policy=eyJTdGF0ZW1lbnQiOlt7IkNvbmRpdGlvbiI6eyJEYXRlTGVzc1RoYW4iOnsiQVdTOkVwb2NoVGltZSI6MTc2ODIzNzQ5MX19LCJSZXNvdXJjZSI6Imh0dHBzOi8vY2FzLWJyaWRnZS54ZXRodWIuaGYuY28veGV0LWJyaWRnZS11cy82NTVkZjI0Y2RlOTE5ZDQxNjIzNDFhMTkvMDllZDNlODZiZjY0ZDAxOTkxOTE5NGQ3NzZhYmFhNTNiMTRhY2FlNjcwMTEyOWJiMDlmNjE2OTA0MWI0M2Y5MioifV19&Signature=SsYstbIroSWXjARDLcHQyENDwkeq0l~Nsu9RvDA-x82YSnnd8dGa0wAlYiAS2STomKmZmDtTwti3RZ4Lha2dCSnNwqHJPIiF8jFFv4h5IDXm1VKzdh~14tmA1TNfpdwSdkWCpPxTgxY2kUeOJ-qldoY20Cp9K6G-GWanKYYRft~q9mlJy5E~l-CaXnRs1PBFRPj6sci-G0aCwXtjbBjUZCg2z4--e~uwLNKJHHVeDe3wUC~GNblRNwLm-EYTzINbfpm99g3t8wHRKAQJxiXZcVUsFqcULOLiVps2NPblzJNi9Y0SCgx3buEmtm~HkZ4IsjUTJj337y4MOpmZVSEvVg__&Key-Pair-Id=K2L8F4GPSG1IFC CORS Preflight Did Not Succeed\n...\nCross-Origin Request Blocked: The Same Origin Policy disallows reading the remote resource at https://cas-bridge.xethub.hf.co/xet-bridge-us/655df24cde919d4162341a19... \n```",
"Thanks @rajatarya, here is a link to a [Google Colab notebook](https://colab.research.google.com/drive/15soyg7g3CCdlBMDDcljeiVsq_yjeJREJ#scrollTo=bG5PfGTBK7wU) where the issue can be reproduced. The notebook tries to access a Hugging Face dataset on the front-end and it only works if we set up a proxy server to avoid the CORs issues. Otherwise we see this error in the console:\n\n```\noutputframe.html?vrz=colab-external_20260109-060046_RC02_854292615:1 Access to fetch at 'https://cas-bridge.xethub.hf.co/xet-bridge-us/695b406827b0975343f6a1a2/7f3468fcbc686cda54ec68dda14cc5f677402e9e0f540772f770b96b4a687916?X-Amz-Algorithm=AWS4-HMAC-SHA256&X-Amz-Content-Sha256=UNSIGNED-PAYLOAD&X-Amz-Credential=cas%2F20260112%2Fus-east-1%2Fs3%2Faws4_request&X-Amz-Date=20260112T215017Z&X-Amz-Expires=3600&X-Amz-Signature=eb126a50a8f9da1bc80110f0408f1e25b764d5d71d30502f13ece6469b487921&X-Amz-SignedHeaders=host&X-Xet-Cas-Uid=public&response-content-disposition=inline%3B+filename*%3DUTF-8%27%27chunk_0.parquet%3B+filename%3D%22chunk_0.parquet%22%3B&x-id=GetObject&Expires=1768258217&Policy=eyJTdGF0ZW1lbnQiOlt7IkNvbmRpdGlvbiI6eyJEYXRlTGVzc1RoYW4iOnsiQVdTOkVwb2NoVGltZSI6MTc2ODI1ODIxN319LCJSZXNvdXJjZSI6Imh0dHBzOi8vY2FzLWJyaWRnZS54ZXRodWIuaGYuY28veGV0LWJyaWRnZS11cy82OTViNDA2ODI3YjA5NzUzNDNmNmExYTIvN2YzNDY4ZmNiYzY4NmNkYTU0ZWM2OGRkYTE0Y2M1ZjY3NzQwMmU5ZTBmNTQwNzcyZjc3MGI5NmI0YTY4NzkxNioifV19&Signature=fZCXLc5VxfW8xKDcKK2R4CbPAVNAUgnMUjBrcrGNYebOaDMrsyjqs5bgKUcI9P67jYGLTowltxypbKrlA8UDfwKFtdT9GUtQXjAUNY%7ENjoKUacJXFuJtBeucdef5dnRu%7E4HC%7ECz2NJu69qCh0QNGNk9BH0D-83CptPhNuUHGZ%7EgT9F%7Ehe5RZR3bTSNg-6K6DSqx3JtxUu4-P5ZWSqwq4SuqAatm0019euel2wCciWc7HbYQ3b2XXkQAjLXvgLpuP-y-2JOkvG3SDJXoPamzH-wkKmBTdmCLxw%7ENHLi3F9w%7EfpSWZMn-61KR48g5E9LahtGPbRvti8Nvs-qES441DXA__&Key-Pair-Id=K2L8F4GPSG1IFC' (redirected from 
'https://huggingface.co/datasets/cornhundred/Celldega_Xenium_Prime_Ovarian_Cancer_FFPE_XRun_outs_row_groups_image_chunk/resolve/main/Xenium_Prime_Ovarian_Cancer_FFPE_XRrun_outs/transcripts/chunk_0.parquet') from origin 'https://fjy31o3wnuf-496ff2e9c6d22116-0-colab.googleusercontent.com' has been blocked by CORS policy: Response to preflight request doesn't pass access control check: It does not have HTTP ok status.\n```"
] | 2026-01-03T04:23:54
| 2026-01-12T21:51:41
| null |
NONE
| null | null |
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
|
{
"blocked_by": 0,
"total_blocked_by": 0,
"blocking": 0,
"total_blocking": 0
}
|
### Feature request
## Summary
Browser-based data tools need Range requests to read Parquet efficiently (footer + selected row groups). Downloads from the Hub redirect to cas-bridge.xethub.hf.co (Xet bridge). The redirected host fails CORS preflight for Range/HEAD workflows, blocking partial reads. ([Hugging Face](https://huggingface.co/blog/migrating-the-hub-to-xet)). See example [HuggingFace dataset](https://huggingface.co/datasets/cornhundred/Xenium_V1_human_Pancreas_FFPE_outs_row_groups/tree/main/Xenium_V1_human_Pancreas_FFPE_outs_row_groups)
## Current behavior
Plain GET works via redirect.
Range workflows fail with: “Response to preflight request doesn’t pass access control check: It does not have HTTP ok status.”
This blocks parquet-wasm and DuckDB-Wasm style readers which rely on HEAD + Range or non-safelisted Range patterns. ([GitHub](https://github.com/duckdb/duckdb-wasm/issues/1852))
## Expected behavior
OPTIONS to the final redirected host returns 200/204 (no redirect) with appropriate CORS headers. Preflight responses must be “ok” status. ([GitHub](https://github.com/whatwg/fetch/issues/1588))
GET with Range returns 206 Partial Content and includes CORS headers, plus exposes Content-Range, Accept-Ranges, and Content-Length so browser JS can consume them. ([MDN WebDocument](https://developer.mozilla.org/en-US/docs/Web/HTTP/Reference/Headers/Content-Range))
## Proposed CORS headers (public, anonymous files)
For responses from cas-bridge.xethub.hf.co (and any sibling Xet bridge hosts):
### Preflight (OPTIONS)
Access-Control-Allow-Origin: *
Access-Control-Allow-Methods: GET, HEAD, OPTIONS
Access-Control-Allow-Headers: Range, Content-Type (or echo Access-Control-Request-Headers)
Access-Control-Max-Age: 86400 (optional, reduces preflight spam)
### Actual (GET/HEAD, including 206)
Access-Control-Allow-Origin: *
Access-Control-Expose-Headers: Content-Range, Accept-Ranges, Content-Length
Ensure Accept-Ranges: bytes and Content-Range are present for range responses. ([MDN WebDocument](https://developer.mozilla.org/en-US/docs/Web/HTTP/Reference/Headers/Accept-Ranges))
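For context on the range-read pattern these headers enable, a minimal sketch (hypothetical helper names, not part of any HF client): a Parquet reader first fetches the last 8 bytes of the file, which hold the 4-byte footer length plus the `PAR1` magic, then issues further Range requests for the footer and the row groups it needs.

```python
def footer_range_header(file_size, tail=8):
    # Parquet files end with: 4-byte little-endian footer length + b"PAR1".
    # A client reads this tail first, then requests the footer itself,
    # then one range per row group it actually needs.
    start = max(file_size - tail, 0)
    return {"Range": f"bytes={start}-{file_size - 1}"}

def parse_content_range(value):
    # Parses a 206 response header like "bytes 4086398-4086405/4086406"
    # into (start, end, total). The browser can only read this header if
    # Content-Range is listed in Access-Control-Expose-Headers.
    unit_and_span, _, total = value.partition("/")
    span = unit_and_span.split(" ", 1)[1]
    start, _, end = span.partition("-")
    return int(start), int(end), int(total)

headers = footer_range_header(4086406)
# headers == {"Range": "bytes=4086398-4086405"}
```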
### Notes on credentials (optional)
If any endpoint requires credentials, wildcard * cannot be used and the server must echo Origin and add Vary: Origin. ([MDN WebDocument](https://developer.mozilla.org/en-US/docs/Web/HTTP/Reference/Headers/Access-Control-Allow-Origin))
## Impact
This unblocks efficient browser analytics and visualization on HF-hosted datasets using Parquet row groups, DuckDB-Wasm, parquet-wasm, and similar tooling. DuckDB-Wasm documentation explicitly notes that remote data access requires correct CORS on the hosting site. ([DuckDB](https://duckdb.org/docs/stable/clients/wasm/extensions.html))
## References worth linking in the issue thread
Hugging Face: redirect to cas-bridge.xethub.hf.co shown in Xet migration blog ([Hugging Face](https://huggingface.co/blog/migrating-the-hub-to-xet))
Fetch/CORS: preflight must be “ok” status (200/204) ([GitHub](https://github.com/whatwg/fetch/issues/1588))
Fetch/CORS: redirect + preflight is a known sharp edge ([GitHub](https://github.com/whatwg/fetch/issues/204))
MDN CORS guide: Range safelist caveat ([MDN WebDocument](https://developer.mozilla.org/en-US/docs/Web/HTTP/Guides/CORS))
MDN Range header: single-range is safelisted, multi-range may preflight ([MDN WebDocument](https://developer.mozilla.org/en-US/docs/Web/HTTP/Reference/Headers/Range))
MDN Expose-Headers: non-safelisted headers must be exposed ([MDN WebDocument](https://developer.mozilla.org/en-US/docs/Web/HTTP/Reference/Headers/Access-Control-Expose-Headers))
DuckDB-Wasm: remote HTTPFS requires correct CORS ([DuckDB](https://duckdb.org/docs/stable/clients/wasm/extensions.html))
DuckDB-Wasm issue: HEAD blocked by CORS breaks the pipeline ([GitHub](https://github.com/duckdb/duckdb-wasm/issues/1852))
pdf.js historical issues about Accept-Ranges/Content-Range exposure ([GitHub](https://github.com/mozilla/pdf.js/issues/3150))
## Recap
The request is standard: browser-based Parquet readers need byte ranges.
Redirect to cas-bridge.xethub.hf.co makes CORS enforcement happen on the Xet bridge host. ([Hugging Face](https://huggingface.co/blog/migrating-the-hub-to-xet))
Fix requires: OPTIONS returns 200/204 with CORS headers, and 206 responses include CORS + exposed headers. ([GitHub](https://github.com/whatwg/fetch/issues/1588))
Similar failures exist across pdf.js and DuckDB-Wasm ecosystems. ([GitHub](https://github.com/duckdb/duckdb-wasm/issues/1852))
### Motivation
I would like to be able to read subsets of large Parquet files using range requests using the parquet_wasm library on the front end. This is being used as part of a spatial data visualization project https://github.com/broadinstitute/celldega
### Your contribution
I would be happy to provide code to make front-end range requests as an example.
| null |
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/7931/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/7931/timeline
| null | null | null | null | false
|
https://api.github.com/repos/huggingface/datasets/issues/7930
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/7930/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/7930/comments
|
https://api.github.com/repos/huggingface/datasets/issues/7930/events
|
https://github.com/huggingface/datasets/pull/7930
| 3,777,628,848
|
PR_kwDODunzps67WYwc
| 7,930
|
Proposal: Protein 3D Structure Visualization for Dataset Viewer
|
{
"login": "behroozazarkhalili",
"id": 80390531,
"node_id": "MDQ6VXNlcjgwMzkwNTMx",
"avatar_url": "https://avatars.githubusercontent.com/u/80390531?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/behroozazarkhalili",
"html_url": "https://github.com/behroozazarkhalili",
"followers_url": "https://api.github.com/users/behroozazarkhalili/followers",
"following_url": "https://api.github.com/users/behroozazarkhalili/following{/other_user}",
"gists_url": "https://api.github.com/users/behroozazarkhalili/gists{/gist_id}",
"starred_url": "https://api.github.com/users/behroozazarkhalili/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/behroozazarkhalili/subscriptions",
"organizations_url": "https://api.github.com/users/behroozazarkhalili/orgs",
"repos_url": "https://api.github.com/users/behroozazarkhalili/repos",
"events_url": "https://api.github.com/users/behroozazarkhalili/events{/privacy}",
"received_events_url": "https://api.github.com/users/behroozazarkhalili/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
open
| false
| null |
[] | null |
[
"cc @georgia-hf - Following up on your question about protein visualization for the Dataset Viewer. This proposal recommends 3Dmol.js (~150KB gzipped) as a lightweight alternative to Mol* (~1.3MB gzipped).\n\nLooking forward to your feedback!",
"Exciting ! cc @cfahlgren1 @severo for the Viewer part\r\n\r\nFor the `datasets` part I'll leave my feedbacks in the PRs :)",
"I don't know the JS libraries, but indeed, the lighter the better, as we don't require advanced features.",
"From a quick look at the PDB and mmCIF PRs I noticed that the dataset has one row = one atom. However I humbly believe that such datasets would be more practical to use if one row = one structure. This way each row is independent, which is practical in ML to perform train/test splits or dataset shuffling.\r\n\r\nThis would also make it easier to add labels and metadata for each structure, similar to what we already for images. E.g. you could group them per folder named after a label, or you can have a metadata.parquet file to add custom metadata per structure.\r\n\r\nAnd this way in the Viewer it could show one 3D render per row.\r\n\r\nWhat do you think ?",
"@lhoestq @severo @georgia-hf I will be waiting for all your comments; then, I will start implementing the final plan. ",
"adding some remarks from @0gust1 (feel free to add comments here!):\r\n\r\n- NGLViewer works very well, and is used in production at [mabsilico](https://github.com/mabsilico)\r\n- Molstar is big, but modular, so: it must be possible to use it without requiring the whole 1.3MB. Beware that the API and engine might lack documentation, so... some deep dive in the code might be needed.\r\n"
] | 2026-01-03T03:30:01
| 2026-01-26T16:17:25
| null |
NONE
| null | null | null | null |
# Proposal: Protein 3D Structure Visualization for HuggingFace Dataset Viewer
## Executive Summary
This proposal outlines adding 3D protein structure visualization to the HuggingFace Dataset Viewer, enabling users to interactively view PDB and mmCIF molecular structures directly within the dataset preview interface.
---
## Data Type Support (Updated Architecture)
**Supported formats** (from recent PRs):
- **PDB** (PR #7926): `.pdb`, `.ent` extensions via `PdbFolder` builder
- **mmCIF** (PR #7925): `.cif`, `.mmcif` extensions via `MmcifFolder` builder
**New Implementation Pattern (One Row = One Structure)**:
Both PRs have been refactored to follow the **ImageFolder pattern**, where each row in the dataset contains one complete protein structure file. This is the recommended ML-friendly approach:
```python
>>> from datasets import load_dataset
>>> dataset = load_dataset("mmcif", data_dir="./structures")
>>> dataset["train"][0]
{'structure': 'data_1ABC\n_entry.id 1ABC\n_atom_site...'} # Complete mmCIF content
>>> dataset = load_dataset("pdb", data_dir="./pdbs")
>>> dataset["train"][0]
{'structure': 'HEADER PROTEIN 01-JAN-20 1ABC\nATOM...'} # Complete PDB content
```
**Key Components**:
- **ProteinStructure feature type**: New feature type supporting both PDB and mmCIF formats with lazy loading
- **PdbFolder builder** (PR #7926): Folder-based loader for PDB files with label and metadata support
- **MmcifFolder builder** (PR #7925): Folder-based loader for mmCIF files with label and metadata support
**What gets visualized**:
- 3D atomic coordinates (x, y, z)
- Chain structures
- Residue information
- Atom types and elements
- Secondary structure (helices, sheets)
**Not applicable** (1D sequence only):
- FASTA (PR #7923) - text sequences, no 3D coordinates
- FASTQ (PR #7924) - sequences with quality scores, no 3D coordinates
---
## Visualization Library Comparison
| Library | Bundle Size (minified) | Bundle Size (gzipped) | License | Pros | Cons |
|---------|------------------------|----------------------|---------|------|------|
| **3Dmol.js** | 512 KB | **~150 KB** | BSD-3 | Lightweight, easy integration, good docs | Fewer advanced features |
| **NGL Viewer** | 1.3 MB | ~350 KB | MIT | Excellent MMTF support, beautiful rendering | Moderate complexity |
| **Mol*** | 4.6 MB | ~1.3 MB | MIT | Industry standard, used by RCSB PDB, feature-rich | Heavy, complex |
| **PDBe Molstar** | 5.8 MB | ~1.6 MB | Apache 2.0 | EMBL-EBI maintained, simpler Mol* wrapper | Still very heavy |
*Bundle sizes verified by downloading actual distribution files from npm/CDN (January 2026)*
---
## Recommendation: 3Dmol.js
**Primary choice**: 3Dmol.js
**Rationale**:
1. **Bundle size**: ~150 KB gzipped - the lightest option by far, ideal for lazy loading
2. **Simple API**: Easy to integrate with React/Next.js
3. **BSD-3 License**: Compatible with HuggingFace licensing
4. **Active maintenance**: Regular updates, good community support
5. **Format support**: Native PDB and mmCIF parsing built-in
6. **Sufficient features**: Rotation, zoom, style switching (cartoon, stick, sphere)
**Why not Mol*?** As Georgia noted, Mol* is heavy (~1.3 MB gzipped). While it's the industry standard for RCSB PDB, it's overkill for a dataset preview where users just need to verify structure data looks correct.
**Alternative for power users**: If users need advanced features like density maps, ligand interactions, or sequence alignment overlay, consider PDBe Molstar as an optional "full viewer" mode.
---
## Summary
**Recommended approach**:
- Use **3Dmol.js** (~150 KB gzipped) with **lazy loading**
- Only loads when user views PDB/mmCIF datasets
- Simple integration, BSD-3 license, active community support
**Backend implementation** (Updated):
- PR #7925 (mmCIF): Uses **MmcifFolder** builder with **ProteinStructure** feature type
- PR #7926 (PDB): Uses **PdbFolder** builder with **ProteinStructure** feature type
- Both follow the **one-row-per-structure** pattern (like ImageFolder)
- Each row's `structure` column contains the complete file content ready for 3D rendering
---
## Next Steps
1. Get feedback on this proposal
2. Create proof-of-concept in a standalone demo if needed
3. Integrate into dataset-viewer once approach is approved
| null |
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/7930/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/7930/timeline
| null | null | false
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/7930",
"html_url": "https://github.com/huggingface/datasets/pull/7930",
"diff_url": "https://github.com/huggingface/datasets/pull/7930.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/7930.patch",
"merged_at": null
}
| true
|
https://api.github.com/repos/huggingface/datasets/issues/7929
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/7929/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/7929/comments
|
https://api.github.com/repos/huggingface/datasets/issues/7929/events
|
https://github.com/huggingface/datasets/pull/7929
| 3,776,098,655
|
PR_kwDODunzps67Rayd
| 7,929
|
Raise early for invalid `revision` in `load_dataset`
|
{
"login": "Scott-Simmons",
"id": 52365471,
"node_id": "MDQ6VXNlcjUyMzY1NDcx",
"avatar_url": "https://avatars.githubusercontent.com/u/52365471?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Scott-Simmons",
"html_url": "https://github.com/Scott-Simmons",
"followers_url": "https://api.github.com/users/Scott-Simmons/followers",
"following_url": "https://api.github.com/users/Scott-Simmons/following{/other_user}",
"gists_url": "https://api.github.com/users/Scott-Simmons/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Scott-Simmons/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Scott-Simmons/subscriptions",
"organizations_url": "https://api.github.com/users/Scott-Simmons/orgs",
"repos_url": "https://api.github.com/users/Scott-Simmons/repos",
"events_url": "https://api.github.com/users/Scott-Simmons/events{/privacy}",
"received_events_url": "https://api.github.com/users/Scott-Simmons/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null |
[
"Passes\r\n```sh\r\npytest -k \"LoadTest and test_load_dataset_invalid_revision_with_cache\"\r\n```\r\n\r\nFails\r\n```sh\r\ngit checkout cc2399019a3a547ebc31ec68a1ff99abd4ec93ce\r\npytest -k \"LoadTest and test_load_dataset_invalid_revision_with_cache\"\r\n```\r\n\r\nRan `make test`, but failures look unrelated to the PR diff (same tests fail on `main` too)\r\n\r\n```sh\r\nFAILED tests/test_distributed.py::test_torch_distributed_run[False] - TypeError: Passing coroutines is forbidden...\r\nFAILED tests/test_distributed.py::test_torch_distributed_run[True] - TypeError: Passing coroutines is forbidden...\r\nFAILED tests/test_distributed.py::test_torch_distributed_run_streaming_with_num_workers[2-2] - TypeError: Passing coroutines is forbidden...\r\nFAILED tests/test_distributed.py::test_torch_distributed_run_streaming_with_num_workers[3-2] - TypeError: Passing coroutines is forbidden...\r\n= 4 failed, 3077 passed, 18 skipped, 491 warnings in 556.45s (0:09:16) =\r\nmake: *** [Makefile:20: test] Error 1\r\n```",
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_7929). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update."
] | 2026-01-02T10:40:49
| 2026-01-09T11:08:44
| 2026-01-09T11:08:43
|
CONTRIBUTOR
| null | null | null | null |
Solves https://github.com/huggingface/datasets/issues/7928
Raise early for invalid revisions
|
{
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/7929/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/7929/timeline
| null | null | false
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/7929",
"html_url": "https://github.com/huggingface/datasets/pull/7929",
"diff_url": "https://github.com/huggingface/datasets/pull/7929.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/7929.patch",
"merged_at": "2026-01-09T11:08:43"
}
| true
|
https://api.github.com/repos/huggingface/datasets/issues/7928
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/7928/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/7928/comments
|
https://api.github.com/repos/huggingface/datasets/issues/7928/events
|
https://github.com/huggingface/datasets/issues/7928
| 3,775,842,185
|
I_kwDODunzps7hDseJ
| 7,928
|
`load_dataset` `revision` param not respected when fetching from cache
|
{
"login": "Scott-Simmons",
"id": 52365471,
"node_id": "MDQ6VXNlcjUyMzY1NDcx",
"avatar_url": "https://avatars.githubusercontent.com/u/52365471?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Scott-Simmons",
"html_url": "https://github.com/Scott-Simmons",
"followers_url": "https://api.github.com/users/Scott-Simmons/followers",
"following_url": "https://api.github.com/users/Scott-Simmons/following{/other_user}",
"gists_url": "https://api.github.com/users/Scott-Simmons/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Scott-Simmons/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Scott-Simmons/subscriptions",
"organizations_url": "https://api.github.com/users/Scott-Simmons/orgs",
"repos_url": "https://api.github.com/users/Scott-Simmons/repos",
"events_url": "https://api.github.com/users/Scott-Simmons/events{/privacy}",
"received_events_url": "https://api.github.com/users/Scott-Simmons/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
open
| false
| null |
[] | null |
[
"This might be better placed as a feature request not a bug, since the logging `Using the latest cached version of the dataset since sentientfutures/ahb couldn't be found on the Hugging Face Hub` is clear.",
"https://github.com/huggingface/datasets/pull/7929 This only solves the case of invalid revisions. Fetching a specific revision from the cache would be more work but I think this is a good start and solves issues like https://github.com/UKGovernmentBEIS/inspect_evals/pull/834#issuecomment-3704689637"
] | 2026-01-02T08:20:47
| 2026-01-07T07:50:40
| null |
CONTRIBUTOR
| null | null |
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
|
{
"blocked_by": 0,
"total_blocked_by": 0,
"blocking": 0,
"total_blocking": 0
}
|
### Describe the bug
`datasets.load_dataset`'s `revision` semantics are inconsistent when the dataset cannot be found on the Hugging Face Hub. When falling back to the latest cached version of the dataset, the `revision` argument is ignored, so long as any cached version of the dataset already exists in the HF cache.
### Steps to reproduce the bug
```python
import datasets

datasets.load_dataset(
    "sentientfutures/ahb",
    "dimensions",
    split="train",
    revision="main"
)

# would expect some error to raise here
datasets.load_dataset(
    "sentientfutures/ahb",
    "dimensions",
    split="train",
    revision="invalid_revision"
)
```
### Expected behavior
On the second call to `datasets.load_dataset` in the 'steps to reproduce the bug' example, expect something like:
```sh
raise DatasetNotFoundError(
datasets.exceptions.DatasetNotFoundError: Revision 'invalid_revision' doesn't exist for dataset 'sentientfutures/ahb' on the Hub.
```
### Environment info
- `datasets` version: 4.4.1
- Platform: Linux-6.2.0-39-generic-x86_64-with-glibc2.37
- Python version: 3.12.12
- `huggingface_hub` version: 0.36.0
- PyArrow version: 22.0.0
- Pandas version: 2.2.3
- `fsspec` version: 2025.9.0
| null |
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/7928/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/7928/timeline
| null | null | null | null | false
|
https://api.github.com/repos/huggingface/datasets/issues/7927
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/7927/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/7927/comments
|
https://api.github.com/repos/huggingface/datasets/issues/7927/events
|
https://github.com/huggingface/datasets/issues/7927
| 3,775,302,438
|
I_kwDODunzps7hBosm
| 7,927
|
Using Stateful Dataloader with Split Dataset By Node and DCP for DDP
|
{
"login": "conceptofmind",
"id": 25208228,
"node_id": "MDQ6VXNlcjI1MjA4MjI4",
"avatar_url": "https://avatars.githubusercontent.com/u/25208228?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/conceptofmind",
"html_url": "https://github.com/conceptofmind",
"followers_url": "https://api.github.com/users/conceptofmind/followers",
"following_url": "https://api.github.com/users/conceptofmind/following{/other_user}",
"gists_url": "https://api.github.com/users/conceptofmind/gists{/gist_id}",
"starred_url": "https://api.github.com/users/conceptofmind/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/conceptofmind/subscriptions",
"organizations_url": "https://api.github.com/users/conceptofmind/orgs",
"repos_url": "https://api.github.com/users/conceptofmind/repos",
"events_url": "https://api.github.com/users/conceptofmind/events{/privacy}",
"received_events_url": "https://api.github.com/users/conceptofmind/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
open
| false
| null |
[] | null |
[
"Does it need to be pickled?\n\n```python\n def load_state_dict(self, state_dict):\n hf_state = pickle.loads(state_dict[\"data\"])\n self.train_dataset.load_state_dict(hf_state)\n\n def state_dict(self):\n return {\"data\": pickle.dumps(self.train_dataset.state_dict())}\n```",
"Pickling seems to have resolved the issue but it is not clear at all to me why this is necessary",
"> Does it need to be pickled?\n> \n> def load_state_dict(self, state_dict):\n> hf_state = pickle.loads(state_dict[\"data\"])\n> self.train_dataset.load_state_dict(hf_state)\n> \n> def state_dict(self):\n> return {\"data\": pickle.dumps(self.train_dataset.state_dict())}\n\nHii, your pickling solution is the correct approach. \n \nBecause DCP saves each key in the state dict as a separate file. When you call self.train_dataset.state_dict(), it returns a deeply nested dict with keys like examples_iterable.examples_iterable.previous_state. DCP flattens these into separate entries, but on load the structure doesn't match what StatefulDataLoader expects, causing the \"Missing key\" error. \n \nBy pickling, you serialize the entire nested state into a single bytes value that DCP treats as one atomic piece of data. \nYour implementation is correct. This is not a bug but a format incompatibility between HuggingFace's nested state dicts and DCP's flat key structure. Pickling is the standard workaround for this. ",
"Ai response"
] | 2026-01-01T22:27:07
| 2026-02-01T06:13:30
| null |
NONE
| null | null |
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
|
{
"blocked_by": 0,
"total_blocked_by": 0,
"blocking": 0,
"total_blocking": 0
}
|
### Describe the bug
I am trying to determine how to save and load the StatefulDataLoader state with DCP and `split_dataset_by_node` for DDP.
Currently, I am running into an issue where resuming is slow:
```
Neither dataset nor iter(dataset) defines state_dict/load_state_dict so we are naively fast-forwarding your dataset by 5000 steps. For more efficient resumes, please implement `state_dict` and `load_state_dict` in your IterableDataset and/or iterator.
```
### Steps to reproduce the bug
Say we have a streaming dataset:
```python
class StreamingDataset(IterableDataset):
    def __init__(
        self,
        path: str,
        tokenizer: AutoTokenizer,
        name: Optional[str] = None,
        split: str = "train",
        max_length: int = 2048,
        ddp_rank: int = 0,
        ddp_world_size: int = 1,
    ):
        dataset = load_dataset(path, name, split=split, streaming=True)
        self.train_dataset = split_dataset_by_node(
            dataset=dataset, rank=ddp_rank, world_size=ddp_world_size
        )
        self.tokenizer = tokenizer
        self.max_length = max_length

    def __iter__(self):
        for sample in iter(self.train_dataset):
            tokenized = self.tokenizer(
                sample["text"],
                padding="max_length",
                truncation=True,
                max_length=self.max_length,
                return_special_tokens_mask=True,
            )
            yield tokenized
```
We load that dataset into the Stateful Dataloader:
```python
trainloader = StatefulDataLoader(
    dataset=train_dataset,
    batch_size=args.batch_size,
    collate_fn=data_collator,
)
```
We then have code for checkpointing and resuming the state using DCP:
```python
import os
from typing import Optional

import torch
import torch.distributed as dist
import torch.distributed.checkpoint as dcp
from torch.distributed.checkpoint.format_utils import dcp_to_torch_save
from torch.distributed.checkpoint.state_dict import get_state_dict, set_state_dict

from blitzbert.utils import print_rank_0


class Checkpoint:
    def __init__(
        self,
        model: torch.nn.Module,
        optimizer: torch.optim.Optimizer,
        trainloader,
        step: Optional[int] = None,
        epoch: Optional[int] = None,
    ):
        self.model = model
        self.optimizer = optimizer
        self.trainloader = trainloader
        self.step = step
        self.epoch = epoch

    def get_state_dict(self) -> dict:
        model_state_dict, optimizer_state_dict = get_state_dict(
            self.model, self.optimizer
        )
        return {
            "model": model_state_dict,
            "optim": optimizer_state_dict,
            "trainloader": self.trainloader.state_dict(),
            "step": self.step,
            "epoch": self.epoch,
        }


def save_checkpoint(
    args,
    model,
    optimizer,
    trainloader,
    step: Optional[int] = None,
    epoch: Optional[int] = None,
    final_checkpoint: bool = False,
):
    checkpointer = Checkpoint(
        model=model,
        optimizer=optimizer,
        trainloader=trainloader,
        step=step,
        epoch=epoch,
    )
    state_dict = checkpointer.get_state_dict()
    if final_checkpoint:
        print_rank_0("Saving final model")
        save_path = os.path.join(args.checkpoint_dir, "final_model")
        dcp.save(state_dict, checkpoint_id=save_path)
        dist.barrier()
        single_file_path = os.path.join(args.checkpoint_dir, "final_checkpoint.pth")
        dcp_to_torch_save(save_path, single_file_path)
    else:
        if step % args.checkpointing_steps == 0 and step != 0:
            print_rank_0(f"Saving model at step: {step}")
            save_path = os.path.join(args.checkpoint_dir, f"epoch_{epoch}_step_{step}")
            dcp.save(state_dict, checkpoint_id=save_path)
            dist.barrier()


def load_checkpoint(args, model, optimizer, trainloader):
    if not args.resume_from_checkpoint:
        return 0, 0
    checkpoint_path = args.resume_from_checkpoint
    print_rank_0(f"Resumed from checkpoint: {checkpoint_path}")
    checkpointer = Checkpoint(
        model=model,
        optimizer=optimizer,
        trainloader=trainloader,
    )
    state_dict = checkpointer.get_state_dict()
    dcp.load(
        state_dict=state_dict,
        checkpoint_id=checkpoint_path,
    )
    set_state_dict(
        model,
        optimizer,
        model_state_dict=state_dict["model"],
        optim_state_dict=state_dict["optim"],
    )
    trainloader.load_state_dict(state_dict["trainloader"])
    step = state_dict["step"]
    epoch = state_dict["epoch"]
    return step, epoch
```
and then loading the checkpoint:
```python
completed_steps, current_epoch = load_checkpoint(
    args=args, model=model, optimizer=optimizer, trainloader=trainloader
)
```
### Expected behavior
If I implement what the warning says:
```python
def state_dict(self):
    return self.train_dataset.state_dict()

def load_state_dict(self, state):
    self.train_dataset.load_state_dict(state)
```
I then get:
```
[rank0]: raise RuntimeError(f"Missing key in checkpoint state_dict: {fqn}.")
[rank0]: RuntimeError: Missing key in checkpoint state_dict: trainloader.dataset_state.examples_iterable.examples_iterable.previous_state.
```
How exactly should one be saving and resuming the Stateful Dataloader with Hugging Face datasets?
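One workaround worth noting (a sketch, assuming the "Missing key" error comes from DCP flattening the nested HF state dict into separate checkpoint entries): serialize the whole nested state into a single pickled blob, so DCP stores it as one atomic value whose internal structure it never inspects. The method bodies below are meant for the `StreamingDataset` wrapper above; this is illustrative, not an official API.

```python
import pickle

# Sketch: wrap the nested IterableDataset state in one pickled blob so
# DCP saves it as a single atomic entry instead of flattening keys like
# examples_iterable.examples_iterable.previous_state into separate files.
def state_dict(self):
    return {"data": pickle.dumps(self.train_dataset.state_dict())}

def load_state_dict(self, state_dict):
    self.train_dataset.load_state_dict(pickle.loads(state_dict["data"]))
```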
### Environment info
"datasets>=4.4.1",
| null |
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/7927/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/7927/timeline
| null | null | null | null | false
|
https://api.github.com/repos/huggingface/datasets/issues/7926
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/7926/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/7926/comments
|
https://api.github.com/repos/huggingface/datasets/issues/7926/events
|
https://github.com/huggingface/datasets/pull/7926
| 3,773,696,472
|
PR_kwDODunzps67Jxxz
| 7,926
|
Add lightweight PDB (Protein Data Bank) file support
|
{
"login": "behroozazarkhalili",
"id": 80390531,
"node_id": "MDQ6VXNlcjgwMzkwNTMx",
"avatar_url": "https://avatars.githubusercontent.com/u/80390531?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/behroozazarkhalili",
"html_url": "https://github.com/behroozazarkhalili",
"followers_url": "https://api.github.com/users/behroozazarkhalili/followers",
"following_url": "https://api.github.com/users/behroozazarkhalili/following{/other_user}",
"gists_url": "https://api.github.com/users/behroozazarkhalili/gists{/gist_id}",
"starred_url": "https://api.github.com/users/behroozazarkhalili/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/behroozazarkhalili/subscriptions",
"organizations_url": "https://api.github.com/users/behroozazarkhalili/orgs",
"repos_url": "https://api.github.com/users/behroozazarkhalili/repos",
"events_url": "https://api.github.com/users/behroozazarkhalili/events{/privacy}",
"received_events_url": "https://api.github.com/users/behroozazarkhalili/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
open
| false
| null |
[] | null |
[] | 2025-12-31T21:01:04
| 2026-01-09T19:22:17
| null |
NONE
| null | null | null | null |
## Summary
This PR adds support for loading PDB (Protein Data Bank) files with `load_dataset()`, following the **ImageFolder pattern** where **one row = one structure**.
Based on feedback from @lhoestq in #7930, this approach makes datasets more practical for ML workflows:
- Each row is independent, enabling train/test splits and shuffling
- Easy to add labels (folder-based) and metadata (metadata.jsonl)
- Compatible with Dataset Viewer (one 3D render per row)
### Architecture
Uses `FolderBasedBuilder` pattern (like `ImageFolder`, `AudioFolder`):
```python
class PdbFolder(FolderBasedBuilder):
    BASE_FEATURE = ProteinStructure
    BASE_COLUMN_NAME = "structure"
    EXTENSIONS = [".pdb", ".ent"]
```
### New `ProteinStructure` Feature Type
```python
# Arrow schema for lazy loading
pa.struct({"bytes": pa.binary(), "path": pa.string()})
# Decoded: returns structure file content as string
dataset = load_dataset("pdb", data_dir="structures/")
print(dataset[0]["structure"]) # Full PDB file content
```
### Supported Extensions
`.pdb`, `.ent`
### Usage
```python
from datasets import load_dataset

# Load from directory
dataset = load_dataset("pdb", data_dir="protein_structures/")

# Load with folder-based labels
# structures/
#   enzymes/
#     1abc.pdb
#   receptors/
#     2def.pdb
dataset = load_dataset("pdb", data_dir="structures/")
print(dataset[0])  # {"structure": "HEADER...", "label": "enzymes"}

# Load with metadata
# structures/
#   1abc.pdb
#   metadata.jsonl  # {"file_name": "1abc.pdb", "resolution": 2.5}
dataset = load_dataset("pdb", data_dir="structures/")
print(dataset[0])  # {"structure": "HEADER...", "resolution": 2.5}

# Drop labels or metadata
dataset = load_dataset("pdb", data_dir="structures/", drop_labels=True)
dataset = load_dataset("pdb", data_dir="structures/", drop_metadata=True)
```
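As a side note on the metadata convention used above: `metadata.jsonl` is plain JSON Lines, one object per structure keyed by `file_name`. A minimal sketch of producing such a file (the records here are illustrative, not from the PR):

```python
import json

# Illustrative records; file_name must match the structure files on disk
records = [
    {"file_name": "1abc.pdb", "resolution": 2.5},
    {"file_name": "2def.pdb", "resolution": 1.8},
]

# JSON Lines: one JSON object per line
jsonl_text = "\n".join(json.dumps(r) for r in records)
```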
### Test Results
All 28 PDB tests + 15 ProteinStructure feature tests pass.
### Related PRs
- #7925 - mmCIF support (same pattern)
- #7930 - Protein 3D visualization proposal
cc @lhoestq @georgia-hf
| null |
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/7926/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/7926/timeline
| null | null | false
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/7926",
"html_url": "https://github.com/huggingface/datasets/pull/7926",
"diff_url": "https://github.com/huggingface/datasets/pull/7926.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/7926.patch",
"merged_at": null
}
| true
|
https://api.github.com/repos/huggingface/datasets/issues/7925
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/7925/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/7925/comments
|
https://api.github.com/repos/huggingface/datasets/issues/7925/events
|
https://github.com/huggingface/datasets/pull/7925
| 3,773,577,850
|
PR_kwDODunzps67JW3g
| 7,925
|
feat: Add mmCIF file support for macromolecular structures
|
{
"login": "behroozazarkhalili",
"id": 80390531,
"node_id": "MDQ6VXNlcjgwMzkwNTMx",
"avatar_url": "https://avatars.githubusercontent.com/u/80390531?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/behroozazarkhalili",
"html_url": "https://github.com/behroozazarkhalili",
"followers_url": "https://api.github.com/users/behroozazarkhalili/followers",
"following_url": "https://api.github.com/users/behroozazarkhalili/following{/other_user}",
"gists_url": "https://api.github.com/users/behroozazarkhalili/gists{/gist_id}",
"starred_url": "https://api.github.com/users/behroozazarkhalili/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/behroozazarkhalili/subscriptions",
"organizations_url": "https://api.github.com/users/behroozazarkhalili/orgs",
"repos_url": "https://api.github.com/users/behroozazarkhalili/repos",
"events_url": "https://api.github.com/users/behroozazarkhalili/events{/privacy}",
"received_events_url": "https://api.github.com/users/behroozazarkhalili/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
open
| false
| null |
[] | null |
[] | 2025-12-31T20:11:32
| 2026-01-09T19:22:55
| null |
NONE
| null | null | null | null |
## Summary
This PR adds support for loading mmCIF (macromolecular Crystallographic Information File) files with `load_dataset()`, following the **ImageFolder pattern** where **one row = one structure**.
Based on feedback from @lhoestq in #7930, this approach makes datasets more practical for ML workflows:
- Each row is independent, enabling train/test splits and shuffling
- Easy to add labels (folder-based) and metadata (metadata.jsonl)
- Compatible with Dataset Viewer (one 3D render per row)
### Architecture
Uses `FolderBasedBuilder` pattern (like `ImageFolder`, `AudioFolder`):
```python
class MmcifFolder(FolderBasedBuilder):
    BASE_FEATURE = ProteinStructure
    BASE_COLUMN_NAME = "structure"
    EXTENSIONS = [".cif", ".mmcif"]
```
### New `ProteinStructure` Feature Type
```python
# Arrow schema for lazy loading
pa.struct({"bytes": pa.binary(), "path": pa.string()})
# Decoded: returns structure file content as string
dataset = load_dataset("mmcif", data_dir="structures/")
print(dataset[0]["structure"]) # Full mmCIF file content
```
### Supported Extensions
`.cif`, `.mmcif`
### Usage
```python
from datasets import load_dataset

# Load from directory
dataset = load_dataset("mmcif", data_dir="protein_structures/")

# Load with folder-based labels
# structures/
#   enzymes/
#     1abc.cif
#   receptors/
#     2def.cif
dataset = load_dataset("mmcif", data_dir="structures/")
print(dataset[0])  # {"structure": "data_...", "label": "enzymes"}

# Load with metadata
# structures/
#   1abc.cif
#   metadata.jsonl  # {"file_name": "1abc.cif", "resolution": 2.5}
dataset = load_dataset("mmcif", data_dir="structures/")
print(dataset[0])  # {"structure": "data_...", "resolution": 2.5}

# Drop labels or metadata
dataset = load_dataset("mmcif", data_dir="structures/", drop_labels=True)
dataset = load_dataset("mmcif", data_dir="structures/", drop_metadata=True)
```
### Test Results
All 24 mmCIF tests + 15 ProteinStructure feature tests pass.
### Related PRs
- #7926 - PDB support (same pattern)
- #7930 - Protein 3D visualization proposal
### References
- mmCIF specification: https://mmcif.wwpdb.org/
- PDB archive: https://www.rcsb.org/
cc @lhoestq @georgia-hf
| null |
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/7925/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/7925/timeline
| null | null | false
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/7925",
"html_url": "https://github.com/huggingface/datasets/pull/7925",
"diff_url": "https://github.com/huggingface/datasets/pull/7925.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/7925.patch",
"merged_at": null
}
| true
|
https://api.github.com/repos/huggingface/datasets/issues/7924
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/7924/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/7924/comments
|
https://api.github.com/repos/huggingface/datasets/issues/7924/events
|
https://github.com/huggingface/datasets/pull/7924
| 3,773,509,771
|
PR_kwDODunzps67JHNF
| 7,924
|
Add lightweight FASTQ file format support
|
{
"login": "behroozazarkhalili",
"id": 80390531,
"node_id": "MDQ6VXNlcjgwMzkwNTMx",
"avatar_url": "https://avatars.githubusercontent.com/u/80390531?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/behroozazarkhalili",
"html_url": "https://github.com/behroozazarkhalili",
"followers_url": "https://api.github.com/users/behroozazarkhalili/followers",
"following_url": "https://api.github.com/users/behroozazarkhalili/following{/other_user}",
"gists_url": "https://api.github.com/users/behroozazarkhalili/gists{/gist_id}",
"starred_url": "https://api.github.com/users/behroozazarkhalili/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/behroozazarkhalili/subscriptions",
"organizations_url": "https://api.github.com/users/behroozazarkhalili/orgs",
"repos_url": "https://api.github.com/users/behroozazarkhalili/repos",
"events_url": "https://api.github.com/users/behroozazarkhalili/events{/privacy}",
"received_events_url": "https://api.github.com/users/behroozazarkhalili/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
open
| false
| null |
[] | null |
[] | 2025-12-31T19:46:42
| 2026-01-10T01:24:56
| null |
NONE
| null | null | null | null |
## Summary
This PR adds support for loading FASTQ files directly with `load_dataset()`.
FASTQ is an extension of FASTA that includes quality scores for each base, widely used for storing output from high-throughput sequencing instruments.
### Key Features
- **Zero external dependencies** - Pure Python parser based on [readfq.py](https://github.com/lh3/readfq) by Heng Li
- **Quality score support** - Preserves per-base quality scores as ASCII-encoded strings
- **Streaming support** - Generator-based parsing for memory efficiency with large NGS files
- **Compression support** - Automatic detection of gzip, bzip2, and xz compressed files
- **Large sequence support** - Uses `large_string` for both sequence and quality columns
- **Parquet-safe batching** - Dual-threshold batching (batch_size + max_batch_bytes) prevents page size errors
### Columns
| Column | Type | Description |
|--------|------|-------------|
| `id` | string | Sequence identifier (first word after `@`) |
| `description` | string | Full description line (everything after id) |
| `sequence` | large_string | The nucleotide sequence |
| `quality` | large_string | ASCII-encoded quality scores (Phred+33 by default) |
### Supported Extensions
`.fq`, `.fastq` (and compressed variants: `.fq.gz`, `.fastq.gz`, `.fq.bz2`, `.fq.xz`)
### Usage
```python
from datasets import load_dataset
# Load FASTQ file
dataset = load_dataset("fastq", data_files="reads.fastq")
# Load gzipped file
dataset = load_dataset("fastq", data_files="reads.fq.gz")
# Filter columns
dataset = load_dataset("fastq", data_files="reads.fq", columns=["sequence", "quality"])
```
### Quality Score Format
Quality scores use Sanger/Illumina 1.8+ encoding (Phred+33):
- ASCII character `!` (33) = quality 0
- ASCII character `I` (73) = quality 40
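The Phred+33 mapping can be sketched in a couple of helpers (function names are illustrative, not part of this PR):

```python
def phred33_to_scores(quality: str) -> list:
    # Phred+33: quality score = ASCII code point minus 33
    return [ord(c) - 33 for c in quality]

def scores_to_phred33(scores) -> str:
    # Inverse mapping back to the ASCII-encoded quality string
    return "".join(chr(q + 33) for q in scores)
```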
### Testing
- 22 comprehensive tests covering basic loading, multi-line sequences, compression, batching, schema types, and edge cases
- All tests passing
- Linting clean
### References
- Follows pattern established in #7923 (FASTA support)
- Parser based on: https://github.com/lh3/readfq
- Addresses feedback from #7851
cc: @georgia-hf
| null |
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/7924/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/7924/timeline
| null | null | false
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/7924",
"html_url": "https://github.com/huggingface/datasets/pull/7924",
"diff_url": "https://github.com/huggingface/datasets/pull/7924.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/7924.patch",
"merged_at": null
}
| true
|
https://api.github.com/repos/huggingface/datasets/issues/7923
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/7923/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/7923/comments
|
https://api.github.com/repos/huggingface/datasets/issues/7923/events
|
https://github.com/huggingface/datasets/pull/7923
| 3,773,472,998
|
PR_kwDODunzps67I-y3
| 7,923
|
feat(fasta): add lightweight FASTA file format support
|
{
"login": "behroozazarkhalili",
"id": 80390531,
"node_id": "MDQ6VXNlcjgwMzkwNTMx",
"avatar_url": "https://avatars.githubusercontent.com/u/80390531?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/behroozazarkhalili",
"html_url": "https://github.com/behroozazarkhalili",
"followers_url": "https://api.github.com/users/behroozazarkhalili/followers",
"following_url": "https://api.github.com/users/behroozazarkhalili/following{/other_user}",
"gists_url": "https://api.github.com/users/behroozazarkhalili/gists{/gist_id}",
"starred_url": "https://api.github.com/users/behroozazarkhalili/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/behroozazarkhalili/subscriptions",
"organizations_url": "https://api.github.com/users/behroozazarkhalili/orgs",
"repos_url": "https://api.github.com/users/behroozazarkhalili/repos",
"events_url": "https://api.github.com/users/behroozazarkhalili/events{/privacy}",
"received_events_url": "https://api.github.com/users/behroozazarkhalili/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
open
| false
| null |
[] | null |
[] | 2025-12-31T19:33:00
| 2026-01-10T01:13:05
| null |
NONE
| null | null | null | null |
## Summary
This PR adds support for loading FASTA files directly with `load_dataset()`, addressing feedback from #7851.
FASTA is a text-based format for representing nucleotide sequences (DNA/RNA) or peptide sequences (proteins), widely used in bioinformatics.
## Key Features
- **Zero external dependencies** - Uses a lightweight pure Python parser based on [readfq.py](https://github.com/lh3/readfq) by Heng Li
- **Streaming support** - Generator-based parsing for memory efficiency with large genomic files
- **Compression support** - Automatic detection and handling of gzip, bzip2, and xz compressed files via magic bytes
- **Large sequence support** - Uses `large_string` Arrow type to handle viral genomes and long sequences (fixes UTF-8 overflow)
- **Adaptive batching** - `max_batch_bytes` parameter (default 256MB) prevents Parquet page size errors with very large sequences
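The magic-byte detection mentioned above can be sketched as follows (a simplified illustration of the approach, not the PR's exact code):

```python
from typing import Optional

# Well-known magic-byte prefixes for the supported compression formats
_MAGIC = {
    b"\x1f\x8b": "gzip",          # gzip
    b"BZh": "bz2",                # bzip2
    b"\xfd7zXZ\x00": "xz",        # xz
}

def detect_compression(first_bytes: bytes) -> Optional[str]:
    """Return the compression format name, or None for plain text."""
    for magic, name in _MAGIC.items():
        if first_bytes.startswith(magic):
            return name
    return None
```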
## Technical Decisions (Addressing #7851 Feedback)
| Concern | Solution |
|---------|----------|
| Long sequences → UTF-8 overflow (@apcamargo, @UriNeri) | Uses `pa.large_string()` for sequence column |
| BioPython is overkill (@apcamargo) | Pure Python parser based on Heng Li's readfq.py |
| Parquet page size limit i32::MAX (@UriNeri) | Adaptive dual-threshold batching with `max_batch_bytes` |
## Columns
| Column | Type | Description |
|--------|------|-------------|
| `id` | string | Sequence identifier (first word after `>`) |
| `description` | string | Full description line (everything after id) |
| `sequence` | large_string | The biological sequence (DNA/RNA/protein) |
## Supported Extensions
`.fa`, `.fasta`, `.fna`, `.ffn`, `.faa`, `.frn` (and compressed variants)
## Usage
```python
from datasets import load_dataset
# Load FASTA file
dataset = load_dataset("fasta", data_files="sequences.fasta")
# Load with column filtering
dataset = load_dataset("fasta", data_files="sequences.fa", columns=["id", "sequence"])
# Load gzipped file
dataset = load_dataset("fasta", data_files="sequences.fa.gz")
# Configure batching for very large genomes
dataset = load_dataset("fasta", data_files="genome.fasta", max_batch_bytes=128*1024*1024)
```
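For reference, the generator-based, readfq-style parsing described above can be sketched like this (a simplified illustration of the approach, not the PR's exact parser):

```python
def iter_fasta(lines):
    """Yield {id, description, sequence} records from FASTA-formatted lines."""
    header, chunks = None, []
    for line in lines:
        line = line.rstrip("\n")
        if line.startswith(">"):
            # Emit the previous record before starting a new one
            if header is not None:
                seq_id, _, desc = header.partition(" ")
                yield {"id": seq_id, "description": desc, "sequence": "".join(chunks)}
            header, chunks = line[1:], []
        elif line:
            chunks.append(line)  # multi-line sequences are concatenated
    if header is not None:
        seq_id, _, desc = header.partition(" ")
        yield {"id": seq_id, "description": desc, "sequence": "".join(chunks)}
```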
## Test Plan
- [x] Basic FASTA loading (3 sequences, multi-line)
- [x] Multiple extension support (.fa, .fasta, .fna, .ffn, .faa, .frn)
- [x] Compression formats (gzip, bz2, xz)
- [x] Long sequences with `large_string` type
- [x] Column filtering
- [x] Batch size configuration
- [x] Byte-based batching (`max_batch_bytes`)
- [x] Large genome handling (simulated 50KB sequences)
- [x] Empty description handling
- [x] Multiple files loading
- [x] Custom feature casting
All 22 tests passing.
cc: @georgia-hf
| null |
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/7923/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/7923/timeline
| null | null | false
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/7923",
"html_url": "https://github.com/huggingface/datasets/pull/7923",
"diff_url": "https://github.com/huggingface/datasets/pull/7923.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/7923.patch",
"merged_at": null
}
| true
|
https://api.github.com/repos/huggingface/datasets/issues/7922
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/7922/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/7922/comments
|
https://api.github.com/repos/huggingface/datasets/issues/7922/events
|
https://github.com/huggingface/datasets/issues/7922
| 3,772,247,021
|
I_kwDODunzps7g1-vt
| 7,922
|
Support Apache TsFile Datasets
|
{
"login": "qiaojialin",
"id": 7240743,
"node_id": "MDQ6VXNlcjcyNDA3NDM=",
"avatar_url": "https://avatars.githubusercontent.com/u/7240743?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/qiaojialin",
"html_url": "https://github.com/qiaojialin",
"followers_url": "https://api.github.com/users/qiaojialin/followers",
"following_url": "https://api.github.com/users/qiaojialin/following{/other_user}",
"gists_url": "https://api.github.com/users/qiaojialin/gists{/gist_id}",
"starred_url": "https://api.github.com/users/qiaojialin/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/qiaojialin/subscriptions",
"organizations_url": "https://api.github.com/users/qiaojialin/orgs",
"repos_url": "https://api.github.com/users/qiaojialin/repos",
"events_url": "https://api.github.com/users/qiaojialin/events{/privacy}",
"received_events_url": "https://api.github.com/users/qiaojialin/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 1935892871,
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement",
"name": "enhancement",
"color": "a2eeef",
"default": true,
"description": "New feature or request"
}
] |
open
| false
| null |
[] | null |
[
"A large quantity of industrial timeseries data has been stored as TsFile, and I have been constantly hearing about AI fellows complaining about the lack of data or the insufficiency of data quality.\n\nI like the ambition that uses TsFile as the bridge between AI research and industrial analysis requirements. This may help both sides improve their works with high-quality data and realtime data access.",
"It will be so convenient to have such a method to directly load tsfile into memory for further analysis.",
"Looking forward to see the tsfile become the part of the AI eco-systems.",
"Looking forward to the support for TsFile format!",
"Hey folks! I’ve added TsFile support by following the existing HDF5/Parquet patterns.\n\nThis includes:\n\nA TsFile builder with schema inference from file metadata\n\nTime-range filtering and column selection\n\nMemory-efficient reading using the tsfile library’s iterator API\n\n11 tests, all passing ✅\n\nI’ll be opening a PR shortly, would love any suggestions or feedback you might have!"
] | 2025-12-31T08:07:51
| 2026-01-05T08:23:21
| null |
NONE
| null | null |
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
|
{
"blocked_by": 0,
"total_blocked_by": 0,
"blocking": 0,
"total_blocking": 0
}
|
### Feature request
I would love to use Hugging Face datasets library to directly load datasets composed of .tsfile files, for example:
`ds = load_dataset("username/dataset-with-tsfile-files")`
This feature would allow researchers working on time-series tasks to seamlessly integrate datasets stored in the Apache TsFile format into the Hugging Face ecosystem.
### Motivation
[Apache TsFile](https://tsfile.apache.org/) is a mature Apache project and a dedicated file format designed for efficient time-series data storage and retrieval. The repository is [here](https://github.com/apache/tsfile).
It has been widely adopted in the IoT community and serves as the underlying storage format for projects like [Apache IoTDB](https://iotdb.apache.org/).
Apache TsFile has the following advantages in the time-series area:
- Time-series native schema. Time-series data is organized by device and sensor IDs.
- A complete multi-language API (Python, Java, C++, C) for reading and writing tsfile.
- Superior write throughput and query efficiency.
- High compression ratio through per-series encoding and compression schemes.
- Efficient dataset transformation. ETL-free file compaction and efficient random access to time-series chunks, enabling faster data loading and lower query latency.
These properties make TsFile highly suitable for time-series model training, especially where time-series random access and efficient I/O are critical.
More details can be found in the paper “[Apache TsFile: An IoT-native Time Series File Format (VLDB 2024)](https://www.vldb.org/pvldb/vol17/p4064-song.pdf)”.
Integrating TsFile support into datasets will benefit the broader machine learning community working on tasks such as forecasting and anomaly detection.
### Your contribution
As a member of the TsFile community, I recently initiated a [proposal](https://lists.apache.org/thread/119vc9nh03dz4583cx9fwt83fp8v68vy) to integrate TsFile with Hugging Face, which has received enthusiastic responses from the community.
We are willing to do the following contributions:
- Implement and contribute the PR that adds TsFile dataset support to Hugging Face datasets.
- Provide long-term maintenance for this integration.
- Any other needs for TsFile to support large-scale time-series datasets.
We are excited to contribute and continuously participate in the future evolution of TsFile and datasets to better support time-series data workloads.
| null |
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/7922/reactions",
"total_count": 24,
"+1": 6,
"-1": 0,
"laugh": 0,
"hooray": 4,
"confused": 0,
"heart": 4,
"rocket": 6,
"eyes": 4
}
|
https://api.github.com/repos/huggingface/datasets/issues/7922/timeline
| null | null | null | null | false
|
https://api.github.com/repos/huggingface/datasets/issues/7921
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/7921/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/7921/comments
|
https://api.github.com/repos/huggingface/datasets/issues/7921/events
|
https://github.com/huggingface/datasets/pull/7921
| 3,766,879,197
|
PR_kwDODunzps66zE_q
| 7,921
|
Add beginner-friendly quick installation verification tip in README
|
{
"login": "ashupaul2005-byte",
"id": 237550974,
"node_id": "U_kgDODii9fg",
"avatar_url": "https://avatars.githubusercontent.com/u/237550974?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ashupaul2005-byte",
"html_url": "https://github.com/ashupaul2005-byte",
"followers_url": "https://api.github.com/users/ashupaul2005-byte/followers",
"following_url": "https://api.github.com/users/ashupaul2005-byte/following{/other_user}",
"gists_url": "https://api.github.com/users/ashupaul2005-byte/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ashupaul2005-byte/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ashupaul2005-byte/subscriptions",
"organizations_url": "https://api.github.com/users/ashupaul2005-byte/orgs",
"repos_url": "https://api.github.com/users/ashupaul2005-byte/repos",
"events_url": "https://api.github.com/users/ashupaul2005-byte/events{/privacy}",
"received_events_url": "https://api.github.com/users/ashupaul2005-byte/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
open
| false
| null |
[] | null |
[] | 2025-12-29T09:22:27
| 2026-01-29T09:58:05
| null |
NONE
| null | null | null | null |
This PR adds a small beginner-friendly tip to help users quickly verify whether 🤗 Datasets is installed correctly by loading a simple dataset.
This improves the onboarding experience for first-time users and reduces confusion for beginners.
| null |
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/7921/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/7921/timeline
| null | null | false
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/7921",
"html_url": "https://github.com/huggingface/datasets/pull/7921",
"diff_url": "https://github.com/huggingface/datasets/pull/7921.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/7921.patch",
"merged_at": null
}
| true
|
https://api.github.com/repos/huggingface/datasets/issues/7920
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/7920/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/7920/comments
|
https://api.github.com/repos/huggingface/datasets/issues/7920/events
|
https://github.com/huggingface/datasets/pull/7920
| 3,766,070,566
|
PR_kwDODunzps66wgLx
| 7,920
|
Add progress_format support for machine-readable progress output
|
{
"login": "podarok",
"id": 563412,
"node_id": "MDQ6VXNlcjU2MzQxMg==",
"avatar_url": "https://avatars.githubusercontent.com/u/563412?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/podarok",
"html_url": "https://github.com/podarok",
"followers_url": "https://api.github.com/users/podarok/followers",
"following_url": "https://api.github.com/users/podarok/following{/other_user}",
"gists_url": "https://api.github.com/users/podarok/gists{/gist_id}",
"starred_url": "https://api.github.com/users/podarok/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/podarok/subscriptions",
"organizations_url": "https://api.github.com/users/podarok/orgs",
"repos_url": "https://api.github.com/users/podarok/repos",
"events_url": "https://api.github.com/users/podarok/events{/privacy}",
"received_events_url": "https://api.github.com/users/podarok/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
open
| false
| null |
[] | null |
[] | 2025-12-28T22:35:24
| 2025-12-28T22:35:24
| null |
NONE
| null | null | null | null |
## Summary
Adds `progress_format` support to `datasets`, enabling machine-readable JSON progress output similar to [huggingface/tokenizers#1921](https://github.com/huggingface/tokenizers/pull/1921).
## Motivation
When using `datasets` in automated pipelines or UI applications, it's useful to emit machine-readable progress instead of ANSI progress bars. This PR adds the same `progress_format` option that was implemented in tokenizers.
## Changes
### New Functions
- `set_progress_format(format: str)`: Set global progress format
- `get_progress_format() -> str`: Get current progress format
### Supported Formats
1. **"tqdm"** (default): Interactive progress bars
2. **"json"**: Machine-readable JSON lines to stderr
3. **"silent"**: No output
### JSON Format
When `progress_format="json"`, emits JSON every 5% progress change or completion:
```json
{"stage":"Processing","current":50,"total":100,"percent":50.0}
```
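For consumers of this format, a pipeline can pick these lines off stderr and parse them with the standard library. A minimal sketch (the field names follow the example above; `parse_progress_line` is a hypothetical helper, not part of `datasets`):

```python
import json

def parse_progress_line(line: str) -> str:
    """Format one JSON progress event into a human-readable summary."""
    event = json.loads(line)
    return f'{event["stage"]}: {event["current"]}/{event["total"]} ({event["percent"]:.1f}%)'

print(parse_progress_line('{"stage":"Processing","current":50,"total":100,"percent":50.0}'))
```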
## Usage Example
```python
from datasets import load_dataset
from datasets.utils import set_progress_format
# Enable JSON output
set_progress_format("json")
# Progress will now be emitted as JSON lines
dataset = load_dataset("Goader/kobza", split="train", streaming=True)
for sample in dataset:
process(sample)
```
## Implementation Details
- Suppresses visual output using `io.StringIO()` when format is "json"
- Keeps progress tracking active (unlike `disable=True`)
- Emits JSON to stderr every 5% progress change
- Exports new functions from `datasets.utils`
## Cross-Reference
This implementation mirrors the approach from:
- [huggingface/tokenizers#1921](https://github.com/huggingface/tokenizers/pull/1921)
## Testing
Tested with:
```python
from datasets.utils import set_progress_format, tqdm
set_progress_format('json')
for i in tqdm(range(100), desc='Test'):
process(i)
# Outputs: {"stage":"Test","current":10,"total":100,"percent":10.0}
```
## Checklist
- [x] New functions added to `datasets.utils.tqdm`
- [x] Functions exported from `datasets.utils.__init__`
- [x] JSON format emits to stderr
- [x] Visual output suppressed when format="json"
- [x] Progress tracking remains active
- [x] Cross-referenced with tokenizers#1921
| null |
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/7920/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/7920/timeline
| null | null | false
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/7920",
"html_url": "https://github.com/huggingface/datasets/pull/7920",
"diff_url": "https://github.com/huggingface/datasets/pull/7920.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/7920.patch",
"merged_at": null
}
| true
|
https://api.github.com/repos/huggingface/datasets/issues/7919
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/7919/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/7919/comments
|
https://api.github.com/repos/huggingface/datasets/issues/7919/events
|
https://github.com/huggingface/datasets/pull/7919
| 3,765,768,457
|
PR_kwDODunzps66vmQC
| 7,919
|
Fix load_from_disk progress bar with redirected stdout
|
{
"login": "omarfarhoud",
"id": 118056245,
"node_id": "U_kgDOBwllNQ",
"avatar_url": "https://avatars.githubusercontent.com/u/118056245?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/omarfarhoud",
"html_url": "https://github.com/omarfarhoud",
"followers_url": "https://api.github.com/users/omarfarhoud/followers",
"following_url": "https://api.github.com/users/omarfarhoud/following{/other_user}",
"gists_url": "https://api.github.com/users/omarfarhoud/gists{/gist_id}",
"starred_url": "https://api.github.com/users/omarfarhoud/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/omarfarhoud/subscriptions",
"organizations_url": "https://api.github.com/users/omarfarhoud/orgs",
"repos_url": "https://api.github.com/users/omarfarhoud/repos",
"events_url": "https://api.github.com/users/omarfarhoud/events{/privacy}",
"received_events_url": "https://api.github.com/users/omarfarhoud/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null |
[
"this seems to contradict the comment that says \r\n\r\n> set `disable=None` rather than `disable=False` by default to disable progress bar when no TTY attached\r\n\r\nI believe the right approach is to do the same as in https://github.com/huggingface/huggingface_hub/pull/2698",
"> this seems to contradict the comment that says\r\n> \r\n> > set `disable=None` rather than `disable=False` by default to disable progress bar when no TTY attached\r\n> \r\n> I believe the right approach is to do the same as in [huggingface/huggingface_hub#2698](https://github.com/huggingface/huggingface_hub/pull/2698)\r\n\r\nUpdated to check TQDM_POSITION=-1 to force-enable progress bars in cloud environments, \r\nfollowing the same pattern as huggingface_hub#2698.",
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_7919). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.",
"Moved the TQDM_POSITION check to the tqdm class in utils/tqdm.py so all progress bars \r\nin the codebase have consistent behavior. Thanks for the suggestion!",
"@lhoestq thanks again for the suggestion. I’ve applied it and everything should now be consistent across all tqdm usage. Happy to adjust anything else if needed."
] | 2025-12-28T15:39:31
| 2026-01-16T14:44:49
| 2026-01-16T14:44:49
|
CONTRIBUTOR
| null | null | null | null |
Fixes #7918
## Problem
When using `load_from_disk()` with `contextlib.redirect_stdout()`, the progress bar was not showing even for datasets with >16 files.
## Root Cause
The `disable` parameter was set to `None` which triggers TTY auto-detection. This fails when stdout is redirected, causing the progress bar to be hidden.
## Solution
Changed `disable=len(state["_data_files"]) <= 16 or None` to `disable=len(state["_data_files"]) <= 16` to force the progress bar to show for datasets with >16 files, regardless of stdout redirection.
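The difference can be seen in plain Python: `x <= 16 or None` evaluates to `None` (tqdm's TTY auto-detect sentinel) whenever `x > 16`, while the fixed expression always yields a plain bool. A minimal sketch of the two expressions:

```python
def old_disable(n_files: int):
    # Old behavior: True (hide bar) for small datasets,
    # None (tqdm falls back to TTY auto-detection) otherwise.
    return n_files <= 16 or None

def new_disable(n_files: int):
    # Fixed behavior: a plain bool, so redirected stdout no longer hides the bar.
    return n_files <= 16

print(old_disable(32))  # None: tqdm decides based on TTY detection
print(new_disable(32))  # False: progress bar always shown
```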
## Testing
Verified that progress bars now appear correctly both with and without stdout redirection for datasets with >16 shards.
|
{
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/7919/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/7919/timeline
| null | null | false
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/7919",
"html_url": "https://github.com/huggingface/datasets/pull/7919",
"diff_url": "https://github.com/huggingface/datasets/pull/7919.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/7919.patch",
"merged_at": "2026-01-16T14:44:49"
}
| true
|
https://api.github.com/repos/huggingface/datasets/issues/7918
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/7918/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/7918/comments
|
https://api.github.com/repos/huggingface/datasets/issues/7918/events
|
https://github.com/huggingface/datasets/issues/7918
| 3,765,489,462
|
I_kwDODunzps7gcM82
| 7,918
|
datasets.load_from_disk doesn't show progress bar
|
{
"login": "Tommigun1980",
"id": 60286968,
"node_id": "MDQ6VXNlcjYwMjg2OTY4",
"avatar_url": "https://avatars.githubusercontent.com/u/60286968?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Tommigun1980",
"html_url": "https://github.com/Tommigun1980",
"followers_url": "https://api.github.com/users/Tommigun1980/followers",
"following_url": "https://api.github.com/users/Tommigun1980/following{/other_user}",
"gists_url": "https://api.github.com/users/Tommigun1980/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Tommigun1980/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Tommigun1980/subscriptions",
"organizations_url": "https://api.github.com/users/Tommigun1980/orgs",
"repos_url": "https://api.github.com/users/Tommigun1980/repos",
"events_url": "https://api.github.com/users/Tommigun1980/events{/privacy}",
"received_events_url": "https://api.github.com/users/Tommigun1980/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null |
[
"#self-assign"
] | 2025-12-28T09:14:41
| 2026-01-16T14:44:50
| 2026-01-16T14:44:50
|
NONE
| null | null |
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
|
{
"blocked_by": 0,
"total_blocked_by": 0,
"blocking": 0,
"total_blocking": 0
}
|
### Describe the bug
This is the inverse of the bug at [https://github.com/huggingface/datasets/issues/7030](https://github.com/huggingface/datasets/issues/7030), i.e. that `datasets.load_from_disk(path)` displays no progress bar. My dataset has > 16 files in it.
I am redirecting stdout as I capture the log; could this have something to do with it? All other progress bars work fine; only the HF dataset progress bars are affected.
### Steps to reproduce the bug
```py
with contextlib.redirect_stdout(log_file), contextlib.redirect_stderr(log_file):
datasets.load_from_disk(path)
```
### Expected behavior
The progress bar should show when loading a dataset.
### Environment info
Python 3.13.9
Datasets 4.4.1
macOS Tahoe 26.2
|
{
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/7918/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/7918/timeline
| null |
completed
| null | null | false
|
https://api.github.com/repos/huggingface/datasets/issues/7917
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/7917/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/7917/comments
|
https://api.github.com/repos/huggingface/datasets/issues/7917/events
|
https://github.com/huggingface/datasets/issues/7917
| 3,764,913,807
|
I_kwDODunzps7gaAaP
| 7,917
|
IterableDataset supports automatic sharding
|
{
"login": "howitry",
"id": 61858900,
"node_id": "MDQ6VXNlcjYxODU4OTAw",
"avatar_url": "https://avatars.githubusercontent.com/u/61858900?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/howitry",
"html_url": "https://github.com/howitry",
"followers_url": "https://api.github.com/users/howitry/followers",
"following_url": "https://api.github.com/users/howitry/following{/other_user}",
"gists_url": "https://api.github.com/users/howitry/gists{/gist_id}",
"starred_url": "https://api.github.com/users/howitry/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/howitry/subscriptions",
"organizations_url": "https://api.github.com/users/howitry/orgs",
"repos_url": "https://api.github.com/users/howitry/repos",
"events_url": "https://api.github.com/users/howitry/events{/privacy}",
"received_events_url": "https://api.github.com/users/howitry/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
open
| false
| null |
[] | null |
[
"You can already use `.shard()` instead like this:\n\n```python\ndataset = dataset.shard(index=rank, num_shards=world_size)\n```\n\nnote that it requires that `dataset.num_shards >= world_size`, and that it may result in nodes having the same number of shards +- 1",
"> You can already use `.shard()` instead like this:\n> \n> dataset = dataset.shard(index=rank, num_shards=world_size)\n> note that it requires that `dataset.num_shards >= world_size`, and that it may result in nodes having the same number of shards +- 1\n\nThis means I have to ensure that the initial num_shards is greater than the number of GPUs I use each time, which seems inflexible. Is there a way to dynamically divide the data into multiple shards based on the number of GPUs used each time? For example:\n```\ndataset = load_dataset(*, stream=True) # dataset.num_shards()=1\nnum_shards=world_size*dataloader_num_workers\ndataset = dataset.dynamically_shard(num_shards=num_shards, num_samples=num_samples) #We may need to know the total number of samples (num_samples) in advance.\n```\n\n",
"> Is there a way to dynamically divide the data into multiple shards based on the number of GPUs used each time?\n\nNo it's not possible without either\n\n1. doing data skipping, which degrades the data loading performance significantly (every node has to download the same data and skip most samples)\n2. or divide the original files further, which requires additional logic for every file format\n\nI would be interested in exploring 2 though, maybe if we start with Parquet support. Right now it fails because `ArrowExamplesIterable` doesn't know how to shard more than num_shards. We could have instead a `ReshardableArrowExamplesIterable` that would pass the right arguments to `_generate_tables()` in parquet.py to only read the data requested for a specific node",
"> ReshardableArrowExamplesIterable\n\nOkay, my datasets are all on my local disk, so I haven't considered the overhead of data download. Are there any tutorials on creating custom iterable datasets? For example, a custom `iterabledataset.__iter__` function can be used to skip data, and it can inherit operations like `iterabledataset.map`."
] | 2025-12-27T16:48:29
| 2025-12-29T16:06:52
| null |
NONE
| null | null |
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
|
{
"blocked_by": 0,
"total_blocked_by": 0,
"blocking": 0,
"total_blocking": 0
}
|
### Feature request
Add sharding support to the streaming IterableDataset, allowing users to adjust the number of shards according to their training resources. For example:
```
dataset = load_dataset(*, streaming=True)
dataset = dataset.shard(num_shards=num_shards, num_samples=num_samples)  # We may need to know the total number of samples (num_samples) in advance.
```
### Motivation
When performing large-scale pre-training in a distributed environment, large datasets may only be loaded in a streaming manner. To improve training efficiency, my current approach is as follows:
```
file_type="parquet"
dataset_path="./*.parquet"
dataset = load_dataset(file_type, data_files=dataset_path, streaming=True)
dataset = split_dataset_by_node(dataset, rank=rank, world_size=world_size)
```
I split a large file into N = world_size * dataloader_num_workers files and placed them under dataset_path. This ensures that each GPU processes different shards. However, this approach has some issues. If the number of GPUs used to train the model changes next time, I need to split the large file again to ensure that IterableDataset.num_shards = world_size * dataloader_num_workers.
I'd like to know if there's a better approach, such as directly loading the large dataset in a streaming manner and then sharding the IterableDataset based on the number of GPUs and num_workers, similar to the approach in Example 1 of https://docs.pytorch.org/docs/stable/data.html#torch.utils.data.IterableDataset @lhoestq
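For reference, the contiguous per-worker split from that PyTorch example can be sketched in plain Python (assuming the total sample count is known in advance; `worker_range` is an illustrative helper, not an existing API):

```python
import math

def worker_range(start: int, end: int, worker_id: int, num_workers: int):
    """Contiguous per-worker split of [start, end), mirroring Example 1 of
    the torch.utils.data.IterableDataset docs."""
    per_worker = math.ceil((end - start) / num_workers)
    iter_start = start + worker_id * per_worker
    iter_end = min(iter_start + per_worker, end)
    return iter_start, iter_end

# 100 samples split across 4 workers:
print([worker_range(0, 100, w, 4) for w in range(4)])
```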
| null |
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/7917/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/7917/timeline
| null | null | null | null | false
|
https://api.github.com/repos/huggingface/datasets/issues/7916
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/7916/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/7916/comments
|
https://api.github.com/repos/huggingface/datasets/issues/7916/events
|
https://github.com/huggingface/datasets/issues/7916
| 3,764,901,707
|
I_kwDODunzps7gZ9dL
| 7,916
|
No description provided.
|
{
"login": "howitry",
"id": 61858900,
"node_id": "MDQ6VXNlcjYxODU4OTAw",
"avatar_url": "https://avatars.githubusercontent.com/u/61858900?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/howitry",
"html_url": "https://github.com/howitry",
"followers_url": "https://api.github.com/users/howitry/followers",
"following_url": "https://api.github.com/users/howitry/following{/other_user}",
"gists_url": "https://api.github.com/users/howitry/gists{/gist_id}",
"starred_url": "https://api.github.com/users/howitry/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/howitry/subscriptions",
"organizations_url": "https://api.github.com/users/howitry/orgs",
"repos_url": "https://api.github.com/users/howitry/repos",
"events_url": "https://api.github.com/users/howitry/events{/privacy}",
"received_events_url": "https://api.github.com/users/howitry/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 1935892871,
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement",
"name": "enhancement",
"color": "a2eeef",
"default": true,
"description": "New feature or request"
}
] |
closed
| false
| null |
[] | null |
[] | 2025-12-27T16:33:11
| 2025-12-27T16:45:22
| 2025-12-27T16:45:22
|
NONE
| null | null |
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
|
{
"blocked_by": 0,
"total_blocked_by": 0,
"blocking": 0,
"total_blocking": 0
}
| null |
{
"login": "howitry",
"id": 61858900,
"node_id": "MDQ6VXNlcjYxODU4OTAw",
"avatar_url": "https://avatars.githubusercontent.com/u/61858900?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/howitry",
"html_url": "https://github.com/howitry",
"followers_url": "https://api.github.com/users/howitry/followers",
"following_url": "https://api.github.com/users/howitry/following{/other_user}",
"gists_url": "https://api.github.com/users/howitry/gists{/gist_id}",
"starred_url": "https://api.github.com/users/howitry/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/howitry/subscriptions",
"organizations_url": "https://api.github.com/users/howitry/orgs",
"repos_url": "https://api.github.com/users/howitry/repos",
"events_url": "https://api.github.com/users/howitry/events{/privacy}",
"received_events_url": "https://api.github.com/users/howitry/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/7916/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/7916/timeline
| null |
completed
| null | null | false
|
https://api.github.com/repos/huggingface/datasets/issues/7915
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/7915/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/7915/comments
|
https://api.github.com/repos/huggingface/datasets/issues/7915/events
|
https://github.com/huggingface/datasets/issues/7915
| 3,762,042,396
|
I_kwDODunzps7gPDYc
| 7,915
|
GDPval dataset Word docs corrupted
|
{
"login": "alexheat",
"id": 12248575,
"node_id": "MDQ6VXNlcjEyMjQ4NTc1",
"avatar_url": "https://avatars.githubusercontent.com/u/12248575?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/alexheat",
"html_url": "https://github.com/alexheat",
"followers_url": "https://api.github.com/users/alexheat/followers",
"following_url": "https://api.github.com/users/alexheat/following{/other_user}",
"gists_url": "https://api.github.com/users/alexheat/gists{/gist_id}",
"starred_url": "https://api.github.com/users/alexheat/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/alexheat/subscriptions",
"organizations_url": "https://api.github.com/users/alexheat/orgs",
"repos_url": "https://api.github.com/users/alexheat/repos",
"events_url": "https://api.github.com/users/alexheat/events{/privacy}",
"received_events_url": "https://api.github.com/users/alexheat/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
open
| false
| null |
[] | null |
[
"tentatively tagging @simonpfish ^\n\n(if it's an option you could enable PRs/Discussions on your dataset on HF)"
] | 2025-12-25T13:56:55
| 2025-12-26T09:06:13
| null |
NONE
| null | null |
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
|
{
"blocked_by": 0,
"total_blocked_by": 0,
"blocking": 0,
"total_blocking": 0
}
|
The [openai/gdpval](https://huggingface.co/datasets/openai/gdpval) dataset on Hugging Face contains Word .docx files with two types of corruption that cause Microsoft Word to display an "unreadable content" error.
### Root Causes
1. **Corrupted settings.xml**: The `word/settings.xml` file uses incorrect namespace prefixes (`ns0:`, `ns1:`, etc.) instead of the proper prefixes (`w:`, `mc:`, `m:`, etc.)
2. **Malformed TargetMode attributes**: Some files have `TargetMode="External"` attributes missing their closing `/>` tag in hyperlink relationships
Both issues cause Word to reject the files even though the XML structure is technically valid.
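Root cause #1 can be illustrated with a minimal standard-library sketch that rewrites `word/settings.xml` inside the .docx zip container. This is only a sketch under the assumptions above (the prefix mapping is hypothetical and a real fix would also repair the prefix declarations and the TargetMode relationships from root cause #2):

```python
import io
import zipfile

# Hypothetical mapping from the mangled prefixes to the expected OOXML ones.
PREFIX_MAP = {"ns0:": "w:", "ns1:": "mc:", "ns2:": "m:"}

def fix_settings_xml(docx_bytes: bytes) -> bytes:
    """Copy a .docx archive, replacing mangled namespace prefixes
    in word/settings.xml and leaving all other entries untouched."""
    src = zipfile.ZipFile(io.BytesIO(docx_bytes))
    out_buf = io.BytesIO()
    with zipfile.ZipFile(out_buf, "w", zipfile.ZIP_DEFLATED) as out:
        for item in src.infolist():
            data = src.read(item.filename)
            if item.filename == "word/settings.xml":
                text = data.decode("utf-8")
                for bad, good in PREFIX_MAP.items():
                    text = text.replace(bad, good)
                data = text.encode("utf-8")
            out.writestr(item, data)
    return out_buf.getvalue()
```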
I have a fix for the issue here https://github.com/alexheat/gdpval-docx-fix
| null |
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/7915/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/7915/timeline
| null | null | null | null | false
|
https://api.github.com/repos/huggingface/datasets/issues/7914
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/7914/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/7914/comments
|
https://api.github.com/repos/huggingface/datasets/issues/7914/events
|
https://github.com/huggingface/datasets/issues/7914
| 3,760,894,100
|
I_kwDODunzps7gKrCU
| 7,914
|
[ROCm] please install 'torchcodec'
|
{
"login": "AndreasKaratzas",
"id": 42451412,
"node_id": "MDQ6VXNlcjQyNDUxNDEy",
"avatar_url": "https://avatars.githubusercontent.com/u/42451412?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/AndreasKaratzas",
"html_url": "https://github.com/AndreasKaratzas",
"followers_url": "https://api.github.com/users/AndreasKaratzas/followers",
"following_url": "https://api.github.com/users/AndreasKaratzas/following{/other_user}",
"gists_url": "https://api.github.com/users/AndreasKaratzas/gists{/gist_id}",
"starred_url": "https://api.github.com/users/AndreasKaratzas/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/AndreasKaratzas/subscriptions",
"organizations_url": "https://api.github.com/users/AndreasKaratzas/orgs",
"repos_url": "https://api.github.com/users/AndreasKaratzas/repos",
"events_url": "https://api.github.com/users/AndreasKaratzas/events{/privacy}",
"received_events_url": "https://api.github.com/users/AndreasKaratzas/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
open
| false
| null |
[] | null |
[
"I was able to install torchcodec by building it from source and have put together a PR: https://github.com/vllm-project/vllm/pull/31323\n\nStill, I think it would make this framework more robust to add at least one more widely used fallback lib, should the torchcodec installation fail or the library not be found."
] | 2025-12-24T19:39:17
| 2025-12-28T07:25:42
| null |
NONE
| null | null |
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
|
{
"blocked_by": 0,
"total_blocked_by": 0,
"blocking": 0,
"total_blocking": 0
}
|
### Describe the bug
The datasets library is widely used by many Python packages and is naturally a requirement on many platforms, including vLLM for ROCm. During audio dataset tests, an exception is triggered:
```python
def decode_example(
self, value: dict, token_per_repo_id: Optional[dict[str, Union[str, bool, None]]] = None
) -> "AudioDecoder":
"""Decode example audio file into audio data.
Args:
value (`dict`):
A dictionary with keys:
- `path`: String with relative audio file path.
- `bytes`: Bytes of the audio file.
token_per_repo_id (`dict`, *optional*):
To access and decode
audio files from private repositories on the Hub, you can pass
a dictionary repo_id (`str`) -> token (`bool` or `str`)
Returns:
`torchcodec.decoders.AudioDecoder`
"""
if config.TORCHCODEC_AVAILABLE:
from ._torchcodec import AudioDecoder
else:
> raise ImportError("To support decoding audio data, please install 'torchcodec'.")
E ImportError: To support decoding audio data, please install 'torchcodec'.
```
At the same time, `torchcodec` cannot be installed on ROCm, because its GPU acceleration uses NVIDIA's NVDEC hardware decoder, which is NVIDIA-specific. Code paths that reach this block therefore fail on ROCm. Can you add an alternative package as a fallback instead of raising an ImportError?
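A minimal sketch of the requested fallback, assuming `soundfile` as the alternative backend (the function name and selection logic here are illustrative, not the actual `datasets` internals):

```python
import importlib.util


def pick_audio_backend() -> str:
    # Prefer torchcodec when it is importable; otherwise fall back to
    # soundfile, and raise only if neither backend is present.
    for backend in ("torchcodec", "soundfile"):
        if importlib.util.find_spec(backend) is not None:
            return backend
    raise ImportError(
        "To decode audio, install 'torchcodec' or the fallback 'soundfile'."
    )
```

On ROCm machines, where `torchcodec` cannot be built against NVDEC, this would transparently select the CPU-based fallback instead of failing.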
### Steps to reproduce the bug
On a machine with MI300/MI325/MI355:
```bash
pytest -s -v tests/entrypoints/openai/correctness/test_transcription_api_correctness.py::test_wer_correctness[12.74498-D4nt3/esb-datasets-earnings22-validation-tiny-filtered-openai/whisper-large-v3]
```
### Expected behavior
```log
_________________________________________________ test_wer_correctness[12.74498-D4nt3/esb-datasets-earnings22-validation-tiny-filtered-openai/whisper-large-v3] ________________________________________[383/535$
model_name = 'openai/whisper-large-v3', dataset_repo = 'D4nt3/esb-datasets-earnings22-validation-tiny-filtered', expected_wer = 12.74498, n_examples = -1, max_concurrent_request = None
@pytest.mark.parametrize("model_name", ["openai/whisper-large-v3"])
# Original dataset is 20GB+ in size, hence we use a pre-filtered slice.
@pytest.mark.parametrize(
"dataset_repo", ["D4nt3/esb-datasets-earnings22-validation-tiny-filtered"]
)
# NOTE: Expected WER measured with equivalent hf.transformers args:
# whisper-large-v3 + esb-datasets-earnings22-validation-tiny-filtered.
@pytest.mark.parametrize("expected_wer", [12.744980])
def test_wer_correctness(
model_name, dataset_repo, expected_wer, n_examples=-1, max_concurrent_request=None
):
# TODO refactor to use `ASRDataset`
with RemoteOpenAIServer(model_name, ["--enforce-eager"]) as remote_server:
> dataset = load_hf_dataset(dataset_repo)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
tests/entrypoints/openai/correctness/test_transcription_api_correctness.py:160:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
tests/entrypoints/openai/correctness/test_transcription_api_correctness.py:111: in load_hf_dataset
if "duration_ms" not in dataset[0]:
^^^^^^^^^^
/usr/local/lib/python3.12/dist-packages/datasets/arrow_dataset.py:2876: in __getitem__
return self._getitem(key)
^^^^^^^^^^^^^^^^^^
/usr/local/lib/python3.12/dist-packages/datasets/arrow_dataset.py:2858: in _getitem
formatted_output = format_table(
/usr/local/lib/python3.12/dist-packages/datasets/formatting/formatting.py:658: in format_table
return formatter(pa_table, query_type=query_type)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
/usr/local/lib/python3.12/dist-packages/datasets/formatting/formatting.py:411: in __call__
return self.format_row(pa_table)
^^^^^^^^^^^^^^^^^^^^^^^^^
/usr/local/lib/python3.12/dist-packages/datasets/formatting/formatting.py:460: in format_row
row = self.python_features_decoder.decode_row(row)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
/usr/local/lib/python3.12/dist-packages/datasets/formatting/formatting.py:224: in decode_row
return self.features.decode_example(row, token_per_repo_id=self.token_per_repo_id) if self.features else row
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
/usr/local/lib/python3.12/dist-packages/datasets/features/features.py:2111: in decode_example
column_name: decode_nested_example(feature, value, token_per_repo_id=token_per_repo_id)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
/usr/local/lib/python3.12/dist-packages/datasets/features/features.py:1419: in decode_nested_example
return schema.decode_example(obj, token_per_repo_id=token_per_repo_id) if obj is not None else None
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
```
### Environment info
- `datasets` version: 4.4.2
- Platform: Linux-5.15.0-161-generic-x86_64-with-glibc2.35
- Python version: 3.12.12
- `huggingface_hub` version: 0.36.0
- PyArrow version: 22.0.0
- Pandas version: 2.3.3
- `fsspec` version: 2025.10.0
| null |
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/7914/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/7914/timeline
| null | null | null | null | false
|
https://api.github.com/repos/huggingface/datasets/issues/7913
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/7913/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/7913/comments
|
https://api.github.com/repos/huggingface/datasets/issues/7913/events
|
https://github.com/huggingface/datasets/pull/7913
| 3,758,884,376
|
PR_kwDODunzps66aEsF
| 7,913
|
Add lance format support
|
{
"login": "eddyxu",
"id": 17097,
"node_id": "MDQ6VXNlcjE3MDk3",
"avatar_url": "https://avatars.githubusercontent.com/u/17097?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/eddyxu",
"html_url": "https://github.com/eddyxu",
"followers_url": "https://api.github.com/users/eddyxu/followers",
"following_url": "https://api.github.com/users/eddyxu/following{/other_user}",
"gists_url": "https://api.github.com/users/eddyxu/gists{/gist_id}",
"starred_url": "https://api.github.com/users/eddyxu/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/eddyxu/subscriptions",
"organizations_url": "https://api.github.com/users/eddyxu/orgs",
"repos_url": "https://api.github.com/users/eddyxu/repos",
"events_url": "https://api.github.com/users/eddyxu/events{/privacy}",
"received_events_url": "https://api.github.com/users/eddyxu/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null |
[
"Mentioned https://github.com/huggingface/datasets/issues/7863 as well",
"@pdames for vis",
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_7913). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.",
"Cool ! I notice the current implementation doesn't support streaming because of the symlink hack.\r\n\r\nI believe you can do something like this instead:\r\n\r\n```python\r\ndef _generate_tables(self, paths: list[str]):\r\n for path in paths:\r\n ds = lance.dataset(path)\r\n for frag_idx, fragment in enumerate(ds.get_fragments()):\r\n for batch_idx, batch in enumerate(\r\n fragment.to_batches(columns=self.config.columns, batch_size=self.config.batch_size)\r\n ):\r\n table = pa.Table.from_batches([batch])\r\n table = self._cast_table(table)\r\n yield Key(frag_idx, batch_idx), table\r\n```\r\n\r\nnote that path can be a local one, but also a `hf://` URI",
"@lhoestq Take another look? ",
"I took the liberty to make a few changes :)\r\n\r\nNow I believe we should be good:\r\n- both local and streaming work fine\r\n- both dataset and single files work fine\r\n- all files are properly downloaded now than all files and metadata files are included in config.data_files\r\n- sharding is supported:\r\n - dataset: one shard = one fragment\r\n - single files: one shard = one file \r\n- streaming dataset resuming works fine thanks to Key()\r\n- the two hacks are visible and with TODOs to remove them when possible\r\n 1. remove the revision in HF uris since only \"main\" is supported\r\n 2. write proper _version/* files since lance doesn't work if they are symlinks\r\n\r\nI think this PR is ready, just let me know what you think before we merge 🚀 \r\n\r\nThe next steps are:\r\n- open a PR in this repository to document Lance support in `datasets`\r\n- open a PR in https://github.com/huggingface/hub-docs to add `pylance` to the list of integrated library on HF, and have some documentation on how to use it with datasets on HF (here is an example [PR](https://github.com/huggingface/hub-docs/pull/1892))\r\n- open a PR in https://github.com/huggingface/huggingface.js to add Lance as a supported dataset library on the HF website (here is an example [PR](https://github.com/huggingface/huggingface.js/pull/1870))\r\n\r\nFeel free to start some drafts (I noticed there are great examples in your HF account now !), I'll be happy to review :)\r\n\r\nAnd once Lance is available in huggingface.js and docs are ready we'll be ready to enable the Dataset Viewer and Lance code snippets on HF !"
] | 2025-12-24T00:52:20
| 2026-01-09T10:48:29
| 2026-01-09T10:48:29
|
CONTRIBUTOR
| null | null | null | null |
Add lance format as one of the `packaged_modules`.
```py
import datasets
ds = datasets.load_dataset("org/lance_repo", split="train")
# Or
ds = datasets.load_dataset("./local/data.lance")
```
|
{
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/7913/reactions",
"total_count": 5,
"+1": 2,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 3,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/7913/timeline
| null | null | false
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/7913",
"html_url": "https://github.com/huggingface/datasets/pull/7913",
"diff_url": "https://github.com/huggingface/datasets/pull/7913.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/7913.patch",
"merged_at": "2026-01-09T10:48:29"
}
| true
|
https://api.github.com/repos/huggingface/datasets/issues/7912
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/7912/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/7912/comments
|
https://api.github.com/repos/huggingface/datasets/issues/7912/events
|
https://github.com/huggingface/datasets/pull/7912
| 3,755,023,829
|
PR_kwDODunzps66NQzG
| 7,912
|
fix low but large example indexerror
|
{
"login": "CloseChoice",
"id": 31857876,
"node_id": "MDQ6VXNlcjMxODU3ODc2",
"avatar_url": "https://avatars.githubusercontent.com/u/31857876?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/CloseChoice",
"html_url": "https://github.com/CloseChoice",
"followers_url": "https://api.github.com/users/CloseChoice/followers",
"following_url": "https://api.github.com/users/CloseChoice/following{/other_user}",
"gists_url": "https://api.github.com/users/CloseChoice/gists{/gist_id}",
"starred_url": "https://api.github.com/users/CloseChoice/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/CloseChoice/subscriptions",
"organizations_url": "https://api.github.com/users/CloseChoice/orgs",
"repos_url": "https://api.github.com/users/CloseChoice/repos",
"events_url": "https://api.github.com/users/CloseChoice/events{/privacy}",
"received_events_url": "https://api.github.com/users/CloseChoice/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null |
[
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_7912). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update."
] | 2025-12-22T19:53:59
| 2026-01-09T13:23:52
| 2026-01-09T13:23:51
|
CONTRIBUTOR
| null | null | null | null |
Fixes #7911.
This PR implements the approach outlined in the corresponding issue: if examples are large, the number of shards should never exceed the number of samples. This is an absolute edge case, but it can happen with image data.
|
{
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/7912/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/7912/timeline
| null | null | false
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/7912",
"html_url": "https://github.com/huggingface/datasets/pull/7912",
"diff_url": "https://github.com/huggingface/datasets/pull/7912.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/7912.patch",
"merged_at": "2026-01-09T13:23:51"
}
| true
|
https://api.github.com/repos/huggingface/datasets/issues/7911
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/7911/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/7911/comments
|
https://api.github.com/repos/huggingface/datasets/issues/7911/events
|
https://github.com/huggingface/datasets/issues/7911
| 3,753,447,559
|
I_kwDODunzps7fuRCH
| 7,911
|
IndexError when saving few large examples to disk
|
{
"login": "CloseChoice",
"id": 31857876,
"node_id": "MDQ6VXNlcjMxODU3ODc2",
"avatar_url": "https://avatars.githubusercontent.com/u/31857876?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/CloseChoice",
"html_url": "https://github.com/CloseChoice",
"followers_url": "https://api.github.com/users/CloseChoice/followers",
"following_url": "https://api.github.com/users/CloseChoice/following{/other_user}",
"gists_url": "https://api.github.com/users/CloseChoice/gists{/gist_id}",
"starred_url": "https://api.github.com/users/CloseChoice/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/CloseChoice/subscriptions",
"organizations_url": "https://api.github.com/users/CloseChoice/orgs",
"repos_url": "https://api.github.com/users/CloseChoice/repos",
"events_url": "https://api.github.com/users/CloseChoice/events{/privacy}",
"received_events_url": "https://api.github.com/users/CloseChoice/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null |
[] | 2025-12-22T11:33:19
| 2026-01-09T13:23:53
| 2026-01-09T13:23:52
|
CONTRIBUTOR
| null | null |
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
|
{
"blocked_by": 0,
"total_blocked_by": 0,
"blocking": 0,
"total_blocking": 0
}
|
### Describe the bug
I ran into this issue when processing a file (900MB) with just one example, but simplified it into a quicker reproducer below. The problem is that, if `num_shards` is not explicitly set, we calculate it from https://github.com/huggingface/datasets/blob/main/src/datasets/utils/py_utils.py#L96 with the default `config.MAX_SHARD_SIZE`, which is 500MB. If a single example is larger than this, we run into an IndexError because more shards are requested than there are examples.
An easy workaround is:
`dataset.save_to_disk(output_path, max_shard_size="1GB")` or `dataset.save_to_disk(output_path, num_shards=1)`.
I believe this should be fixed; it can happen in edge cases with image data, especially when testing single partitions. The fix would be simple: `num_shards = min(num_examples, <previously_calculated_num_shards>)`.
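The proposed clamp can be sketched as follows (a hypothetical standalone helper, not the actual `py_utils` code):

```python
def clamped_num_shards(dataset_nbytes: int, num_examples: int, max_shard_size: int) -> int:
    # Current behavior: ceil-divide the total byte size by the shard size cap.
    num_shards = max(1, -(-dataset_nbytes // max_shard_size))
    # Fix: never create more shards than there are examples, otherwise
    # shard() is asked for row indices past the end of the dataset.
    return min(num_shards, max(num_examples, 1))
```

With the reproducer below (one ~2MB example, `max_shard_size="1MB"`), the unclamped computation yields 2 shards for a 1-row dataset, which is exactly what triggers the IndexError.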
### Steps to reproduce the bug
```python
from datasets import Dataset
target_size = 2 * 1024 * 1024 # 2 MB in bytes
base_text = (
"This is a sample sentence that will be repeated many times to create a large dataset. "
* 100
)
large_text = ""
while len(large_text.encode("utf-8")) < target_size:
large_text += base_text
actual_size = len(large_text.encode("utf-8"))
size_mb = actual_size / (1024 * 1024)
data = {"text": [large_text], "label": [0], "id": [1]}
dataset = Dataset.from_dict(data)
output_path = "./sample_dataset"
# make sure this is split into 2 shards
dataset.save_to_disk(output_path, max_shard_size="1MB")
```
This results in:
```bash
Saving the dataset (1/3 shards): 100%|████████████████████████████████████████████████████████████████████████████████| 1/1 [00:00<00:00, 162.96 examples/s]
Traceback (most recent call last):
File "/home/tpitters/programming/toy-mmu/create_dataset.py", line 27, in <module>
dataset.save_to_disk(output_path, max_shard_size="1MB")
~~~~~~~~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/tpitters/programming/toy-mmu/.venv/lib/python3.13/site-packages/datasets/arrow_dataset.py", line 1640, in save_to_disk
for kwargs in kwargs_per_job:
^^^^^^^^^^^^^^
File "/home/tpitters/programming/toy-mmu/.venv/lib/python3.13/site-packages/datasets/arrow_dataset.py", line 1617, in <genexpr>
"shard": self.shard(num_shards=num_shards, index=shard_idx, contiguous=True),
~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/tpitters/programming/toy-mmu/.venv/lib/python3.13/site-packages/datasets/arrow_dataset.py", line 4987, in shard
return self.select(
~~~~~~~~~~~^
indices=indices,
^^^^^^^^^^^^^^^^
...<2 lines>...
writer_batch_size=writer_batch_size,
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
)
^
File "/home/tpitters/programming/toy-mmu/.venv/lib/python3.13/site-packages/datasets/arrow_dataset.py", line 562, in wrapper
out: Union["Dataset", "DatasetDict"] = func(self, *args, **kwargs)
~~~~^^^^^^^^^^^^^^^^^^^^^^^
File "/home/tpitters/programming/toy-mmu/.venv/lib/python3.13/site-packages/datasets/fingerprint.py", line 442, in wrapper
out = func(dataset, *args, **kwargs)
File "/home/tpitters/programming/toy-mmu/.venv/lib/python3.13/site-packages/datasets/arrow_dataset.py", line 4104, in select
return self._select_contiguous(start, length, new_fingerprint=new_fingerprint)
~~~~~~~~~~~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/tpitters/programming/toy-mmu/.venv/lib/python3.13/site-packages/datasets/arrow_dataset.py", line 562, in wrapper
out: Union["Dataset", "DatasetDict"] = func(self, *args, **kwargs)
~~~~^^^^^^^^^^^^^^^^^^^^^^^
File "/home/tpitters/programming/toy-mmu/.venv/lib/python3.13/site-packages/datasets/fingerprint.py", line 442, in wrapper
out = func(dataset, *args, **kwargs)
File "/home/tpitters/programming/toy-mmu/.venv/lib/python3.13/site-packages/datasets/arrow_dataset.py", line 4164, in _select_contiguous
_check_valid_indices_value(start, len(self))
~~~~~~~~~~~~~~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^
File "/home/tpitters/programming/toy-mmu/.venv/lib/python3.13/site-packages/datasets/arrow_dataset.py", line 624, in _check_valid_indices_value
raise IndexError(f"Index {index} out of range for dataset of size {size}.")
IndexError: Index 1 out of range for dataset of size 1.
```
### Expected behavior
should pass
### Environment info
datasets==4.4.2
|
{
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/7911/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/7911/timeline
| null |
completed
| null | null | false
|
https://api.github.com/repos/huggingface/datasets/issues/7910
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/7910/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/7910/comments
|
https://api.github.com/repos/huggingface/datasets/issues/7910/events
|
https://github.com/huggingface/datasets/pull/7910
| 3,749,894,414
|
PR_kwDODunzps658oGv
| 7,910
|
Enhance cast_column() with cast_kwargs parameter
|
{
"login": "Moenupa",
"id": 49304833,
"node_id": "MDQ6VXNlcjQ5MzA0ODMz",
"avatar_url": "https://avatars.githubusercontent.com/u/49304833?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Moenupa",
"html_url": "https://github.com/Moenupa",
"followers_url": "https://api.github.com/users/Moenupa/followers",
"following_url": "https://api.github.com/users/Moenupa/following{/other_user}",
"gists_url": "https://api.github.com/users/Moenupa/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Moenupa/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Moenupa/subscriptions",
"organizations_url": "https://api.github.com/users/Moenupa/orgs",
"repos_url": "https://api.github.com/users/Moenupa/repos",
"events_url": "https://api.github.com/users/Moenupa/events{/privacy}",
"received_events_url": "https://api.github.com/users/Moenupa/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
open
| false
| null |
[] | null |
[] | 2025-12-20T10:09:11
| 2026-02-08T10:28:57
| null |
NONE
| null | null | null | null |
Fixes #7909, #7766.
| null |
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/7910/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/7910/timeline
| null | null | false
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/7910",
"html_url": "https://github.com/huggingface/datasets/pull/7910",
"diff_url": "https://github.com/huggingface/datasets/pull/7910.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/7910.patch",
"merged_at": null
}
| true
|
https://api.github.com/repos/huggingface/datasets/issues/7909
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/7909/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/7909/comments
|
https://api.github.com/repos/huggingface/datasets/issues/7909/events
|
https://github.com/huggingface/datasets/issues/7909
| 3,749,885,131
|
I_kwDODunzps7fgrTL
| 7,909
|
Support cast_kwargs in cast_columns
|
{
"login": "Moenupa",
"id": 49304833,
"node_id": "MDQ6VXNlcjQ5MzA0ODMz",
"avatar_url": "https://avatars.githubusercontent.com/u/49304833?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Moenupa",
"html_url": "https://github.com/Moenupa",
"followers_url": "https://api.github.com/users/Moenupa/followers",
"following_url": "https://api.github.com/users/Moenupa/following{/other_user}",
"gists_url": "https://api.github.com/users/Moenupa/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Moenupa/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Moenupa/subscriptions",
"organizations_url": "https://api.github.com/users/Moenupa/orgs",
"repos_url": "https://api.github.com/users/Moenupa/repos",
"events_url": "https://api.github.com/users/Moenupa/events{/privacy}",
"received_events_url": "https://api.github.com/users/Moenupa/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 1935892871,
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement",
"name": "enhancement",
"color": "a2eeef",
"default": true,
"description": "New feature or request"
}
] |
open
| false
| null |
[] | null |
[] | 2025-12-20T10:02:07
| 2025-12-20T10:28:01
| null |
NONE
| null | null |
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
|
{
"blocked_by": 0,
"total_blocked_by": 0,
"blocking": 0,
"total_blocking": 0
}
|
### Feature request
expose `cast(**cast_kwargs)` to `cast_column()`
https://github.com/huggingface/datasets/blob/0feb65dd8733191dd2d1e74215b422fc5939a56a/src/datasets/arrow_dataset.py#L2205
### Motivation
`cast_column()` wraps the `cast()` function without exposing any of its arguments. For large multi-modal datasets, e.g.
```py
# a dataset with list[{"bytes"}: b'', ...], much more than one image
load_dataset("MLLM-CL/VTCBench").cast_column("images", List(Image(decode=False)))
```
This fails due to #6206 and #7167: the default batch size of `1000` in `cast()` is too large and causes `pyarrow.lib.ArrowInvalid: offset overflow while concatenating arrays`.
https://github.com/huggingface/datasets/blob/0feb65dd8733191dd2d1e74215b422fc5939a56a/src/datasets/arrow_dataset.py#L2164-L2205
### Your contribution
#7910
| null |
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/7909/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/7909/timeline
| null | null | null | null | false
|
https://api.github.com/repos/huggingface/datasets/issues/7908
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/7908/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/7908/comments
|
https://api.github.com/repos/huggingface/datasets/issues/7908/events
|
https://github.com/huggingface/datasets/pull/7908
| 3,747,829,610
|
PR_kwDODunzps651xlf
| 7,908
|
set dev version
|
{
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null |
[
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_7908). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update."
] | 2025-12-19T15:06:21
| 2025-12-19T15:11:05
| 2025-12-19T15:06:29
|
MEMBER
| null | null | null | null | null |
{
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/7908/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/7908/timeline
| null | null | false
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/7908",
"html_url": "https://github.com/huggingface/datasets/pull/7908",
"diff_url": "https://github.com/huggingface/datasets/pull/7908.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/7908.patch",
"merged_at": "2025-12-19T15:06:29"
}
| true
|
https://api.github.com/repos/huggingface/datasets/issues/7907
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/7907/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/7907/comments
|
https://api.github.com/repos/huggingface/datasets/issues/7907/events
|
https://github.com/huggingface/datasets/pull/7907
| 3,747,818,613
|
PR_kwDODunzps651vMp
| 7,907
|
release: 4.4.2
|
{
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null |
[
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_7907). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update."
] | 2025-12-19T15:02:23
| 2025-12-19T15:06:46
| 2025-12-19T15:03:22
|
MEMBER
| null | null | null | null | null |
{
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/7907/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/7907/timeline
| null | null | false
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/7907",
"html_url": "https://github.com/huggingface/datasets/pull/7907",
"diff_url": "https://github.com/huggingface/datasets/pull/7907.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/7907.patch",
"merged_at": "2025-12-19T15:03:22"
}
| true
|
https://api.github.com/repos/huggingface/datasets/issues/7906
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/7906/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/7906/comments
|
https://api.github.com/repos/huggingface/datasets/issues/7906/events
|
https://github.com/huggingface/datasets/pull/7906
| 3,747,764,992
|
PR_kwDODunzps651jiI
| 7,906
|
Don't save original_shard_lengths by default for backward compat
|
{
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null |
[
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_7906). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update."
] | 2025-12-19T14:44:09
| 2025-12-19T14:57:25
| 2025-12-19T14:57:23
|
MEMBER
| null | null | null | null |
following #7897
but let users enable it with `datasets.config.SAVE_ORIGINAL_SHARD_LENGTHS = True`
this is useful for the Dataset Viewer to know where each row comes from after converting to parquet
|
{
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/7906/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/7906/timeline
| null | null | false
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/7906",
"html_url": "https://github.com/huggingface/datasets/pull/7906",
"diff_url": "https://github.com/huggingface/datasets/pull/7906.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/7906.patch",
"merged_at": "2025-12-19T14:57:23"
}
| true
|
https://api.github.com/repos/huggingface/datasets/issues/7905
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/7905/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/7905/comments
|
https://api.github.com/repos/huggingface/datasets/issues/7905/events
|
https://github.com/huggingface/datasets/issues/7905
| 3,734,233,245
|
I_kwDODunzps7ek-Cd
| 7,905
|
Unbounded network usage when opening Data Studio
|
{
"login": "alizaredornica-sys",
"id": 225014457,
"node_id": "U_kgDODWlyuQ",
"avatar_url": "https://avatars.githubusercontent.com/u/225014457?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/alizaredornica-sys",
"html_url": "https://github.com/alizaredornica-sys",
"followers_url": "https://api.github.com/users/alizaredornica-sys/followers",
"following_url": "https://api.github.com/users/alizaredornica-sys/following{/other_user}",
"gists_url": "https://api.github.com/users/alizaredornica-sys/gists{/gist_id}",
"starred_url": "https://api.github.com/users/alizaredornica-sys/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/alizaredornica-sys/subscriptions",
"organizations_url": "https://api.github.com/users/alizaredornica-sys/orgs",
"repos_url": "https://api.github.com/users/alizaredornica-sys/repos",
"events_url": "https://api.github.com/users/alizaredornica-sys/events{/privacy}",
"received_events_url": "https://api.github.com/users/alizaredornica-sys/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null |
[
"cc @cfahlgren1",
"Thanks for reporting! Looking into this!",
"This should be fixed. Thank you for your patience! 🤗"
] | 2025-12-16T10:45:02
| 2026-01-06T15:04:43
| 2026-01-06T15:04:43
|
NONE
| null | null |
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
|
{
"blocked_by": 0,
"total_blocked_by": 0,
"blocking": 0,
"total_blocking": 0
}
|
### Describe the bug
Opening the Data Studio tab on a dataset page triggers continuous and unbounded network traffic. This issue occurs across multiple browsers and continues even without user interaction.
### Steps to reproduce the bug
https://huggingface.co/datasets/slone/nllb-200-10M-sample/viewer
### Expected behavior
Data Studio should load a limited, finite amount of data and stop further network activity unless explicitly requested by the user.
### Environment info
- OS: Windows 10
- Browsers: Chrome, Firefox, Edge
- Device: Desktop
- Network: Standard broadband connection
|
{
"login": "cfahlgren1",
"id": 13546028,
"node_id": "MDQ6VXNlcjEzNTQ2MDI4",
"avatar_url": "https://avatars.githubusercontent.com/u/13546028?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/cfahlgren1",
"html_url": "https://github.com/cfahlgren1",
"followers_url": "https://api.github.com/users/cfahlgren1/followers",
"following_url": "https://api.github.com/users/cfahlgren1/following{/other_user}",
"gists_url": "https://api.github.com/users/cfahlgren1/gists{/gist_id}",
"starred_url": "https://api.github.com/users/cfahlgren1/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/cfahlgren1/subscriptions",
"organizations_url": "https://api.github.com/users/cfahlgren1/orgs",
"repos_url": "https://api.github.com/users/cfahlgren1/repos",
"events_url": "https://api.github.com/users/cfahlgren1/events{/privacy}",
"received_events_url": "https://api.github.com/users/cfahlgren1/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/7905/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/7905/timeline
| null |
completed
| null | null | false
|
https://api.github.com/repos/huggingface/datasets/issues/7904
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/7904/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/7904/comments
|
https://api.github.com/repos/huggingface/datasets/issues/7904/events
|
https://github.com/huggingface/datasets/issues/7904
| 3,727,978,498
|
I_kwDODunzps7eNHAC
| 7,904
|
Request: Review pending neuroimaging PRs (#7886 BIDS loader, #7887 lazy loading)
|
{
"login": "The-Obstacle-Is-The-Way",
"id": 175985783,
"node_id": "U_kgDOCn1Udw",
"avatar_url": "https://avatars.githubusercontent.com/u/175985783?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/The-Obstacle-Is-The-Way",
"html_url": "https://github.com/The-Obstacle-Is-The-Way",
"followers_url": "https://api.github.com/users/The-Obstacle-Is-The-Way/followers",
"following_url": "https://api.github.com/users/The-Obstacle-Is-The-Way/following{/other_user}",
"gists_url": "https://api.github.com/users/The-Obstacle-Is-The-Way/gists{/gist_id}",
"starred_url": "https://api.github.com/users/The-Obstacle-Is-The-Way/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/The-Obstacle-Is-The-Way/subscriptions",
"organizations_url": "https://api.github.com/users/The-Obstacle-Is-The-Way/orgs",
"repos_url": "https://api.github.com/users/The-Obstacle-Is-The-Way/repos",
"events_url": "https://api.github.com/users/The-Obstacle-Is-The-Way/events{/privacy}",
"received_events_url": "https://api.github.com/users/The-Obstacle-Is-The-Way/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
open
| false
| null |
[] | null |
[
"Hi ! sure I'll be happy to take a look, sorry for the delay :)"
] | 2025-12-14T20:34:31
| 2025-12-15T11:25:29
| null |
CONTRIBUTOR
| null | null |
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
|
{
"blocked_by": 0,
"total_blocked_by": 0,
"blocking": 0,
"total_blocking": 0
}
|
## Summary
I'm building production neuroimaging pipelines that depend on `datasets` and would benefit greatly from two pending PRs being reviewed/merged.
## Pending PRs
| PR | Description | Status | Open Since |
|----|-------------|--------|------------|
| [#7886](https://github.com/huggingface/datasets/pull/7886) | BIDS dataset loader | Open | Nov 29 |
| [#7887](https://github.com/huggingface/datasets/pull/7887) | Lazy loading for NIfTI | Open | Nov 29 |
## Use Case
The neuroimaging community uses the BIDS (Brain Imaging Data Structure) standard for organizing MRI/fMRI data. These PRs would enable:
1. **#7886**: `load_dataset('bids', data_dir='/path/to/bids')` - Load local BIDS directories directly
2. **#7887**: Memory-efficient NIfTI handling (single 4D fMRI file can be 1-2GB)
## Current Workaround
Without these, users must either:
- Upload to Hub first, then consume (works but slow iteration)
- Hand-roll BIDS parsing (duplicates effort)
## Request
Could a maintainer review these PRs? Happy to address any feedback. The BIDS loader has tests passing and was end-to-end tested with real OpenNeuro data.
Thank you for the great work on `Nifti()` support - these PRs build on that foundation.
## Related
- Contributes to #7804 (Support scientific data formats)
- Built on @TobiasPitters's Nifti feature work
| null |
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/7904/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/7904/timeline
| null | null | null | null | false
|
https://api.github.com/repos/huggingface/datasets/issues/7903
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/7903/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/7903/comments
|
https://api.github.com/repos/huggingface/datasets/issues/7903/events
|
https://github.com/huggingface/datasets/pull/7903
| 3,723,395,305
|
PR_kwDODunzps64kO0d
| 7,903
|
Docs: add minimal usage example to dataset card guidelines
|
{
"login": "an-enigma",
"id": 44645629,
"node_id": "MDQ6VXNlcjQ0NjQ1NjI5",
"avatar_url": "https://avatars.githubusercontent.com/u/44645629?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/an-enigma",
"html_url": "https://github.com/an-enigma",
"followers_url": "https://api.github.com/users/an-enigma/followers",
"following_url": "https://api.github.com/users/an-enigma/following{/other_user}",
"gists_url": "https://api.github.com/users/an-enigma/gists{/gist_id}",
"starred_url": "https://api.github.com/users/an-enigma/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/an-enigma/subscriptions",
"organizations_url": "https://api.github.com/users/an-enigma/orgs",
"repos_url": "https://api.github.com/users/an-enigma/repos",
"events_url": "https://api.github.com/users/an-enigma/events{/privacy}",
"received_events_url": "https://api.github.com/users/an-enigma/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
open
| false
| null |
[] | null |
[] | 2025-12-12T13:16:46
| 2025-12-12T13:16:46
| null |
NONE
| null | null | null | null |
Adds a short, minimal `load_dataset` example to the dataset card documentation to help first-time users quickly load and inspect datasets.
| null |
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/7903/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/7903/timeline
| null | null | false
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/7903",
"html_url": "https://github.com/huggingface/datasets/pull/7903",
"diff_url": "https://github.com/huggingface/datasets/pull/7903.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/7903.patch",
"merged_at": null
}
| true
|
https://api.github.com/repos/huggingface/datasets/issues/7902
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/7902/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/7902/comments
|
https://api.github.com/repos/huggingface/datasets/issues/7902/events
|
https://github.com/huggingface/datasets/issues/7902
| 3,723,281,150
|
I_kwDODunzps7d7ML-
| 7,902
|
The child process retrieves the dataset directly from the main process instead of executing `memory_mapped_arrow_table_from_file`.
|
{
"login": "HQF2017",
"id": 32055029,
"node_id": "MDQ6VXNlcjMyMDU1MDI5",
"avatar_url": "https://avatars.githubusercontent.com/u/32055029?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/HQF2017",
"html_url": "https://github.com/HQF2017",
"followers_url": "https://api.github.com/users/HQF2017/followers",
"following_url": "https://api.github.com/users/HQF2017/following{/other_user}",
"gists_url": "https://api.github.com/users/HQF2017/gists{/gist_id}",
"starred_url": "https://api.github.com/users/HQF2017/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/HQF2017/subscriptions",
"organizations_url": "https://api.github.com/users/HQF2017/orgs",
"repos_url": "https://api.github.com/users/HQF2017/repos",
"events_url": "https://api.github.com/users/HQF2017/events{/privacy}",
"received_events_url": "https://api.github.com/users/HQF2017/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 1935892871,
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement",
"name": "enhancement",
"color": "a2eeef",
"default": true,
"description": "New feature or request"
}
] |
open
| false
| null |
[] | null |
[
"Memory mapping is actually the way for processes to share memory efficiently and without copy. It is efficient when on are using a local disk, and it's discouraged to use it on remote disk for the reasons you observed.\n\nWhat you can do instead is save the dataset as Parquet on your remote storage (or on Hugging Face Datasets which offers fast uploads thanks to Xet), and then your can reload it in streaming mode. Streaming mode is ideal to use a dataset that is hosted in a remote storage"
] | 2025-12-12T12:37:44
| 2025-12-15T11:48:16
| null |
NONE
| null | null |
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
|
{
"blocked_by": 0,
"total_blocked_by": 0,
"blocking": 0,
"total_blocking": 0
}
|
### Feature request
The child process retrieves the dataset directly from the main process instead of executing `memory_mapped_arrow_table_from_file`.
### Motivation
Because my local disk space is insufficient, I can only store a dataset on a remote Ceph server and process it using datasets.
I used the [data-juicer](https://github.com/datajuicer/data-juicer) framework as an outer layer, which uses `datasets` internally, but it doesn't support streaming datasets. I then encountered a problem: for each load, map, and filter operation, I had to wait for a large number of child processes to execute `memory_mapped_arrow_table_from_file`. Since the actual file was on the remote Ceph server, this operation was limited by network I/O.
I don't know whether this is a problem with my usage or simply how `datasets` is currently designed. However, I think that if the instances obtained from `datasets.load_dataset` were passed directly to the child processes, instead of each one re-executing `memory_mapped_arrow_table_from_file`, it might solve my problem. Or does `datasets` already support this, and I just didn't know it?
### Your contribution
...
| null |
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/7902/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/7902/timeline
| null | null | null | null | false
|
https://api.github.com/repos/huggingface/datasets/issues/7901
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/7901/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/7901/comments
|
https://api.github.com/repos/huggingface/datasets/issues/7901/events
|
https://github.com/huggingface/datasets/issues/7901
| 3,722,243,543
|
I_kwDODunzps7d3O3X
| 7,901
|
ShuffledDataSourcesArrowExamplesIterable cannot properly resume from checkpoint
|
{
"login": "howitry",
"id": 61858900,
"node_id": "MDQ6VXNlcjYxODU4OTAw",
"avatar_url": "https://avatars.githubusercontent.com/u/61858900?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/howitry",
"html_url": "https://github.com/howitry",
"followers_url": "https://api.github.com/users/howitry/followers",
"following_url": "https://api.github.com/users/howitry/following{/other_user}",
"gists_url": "https://api.github.com/users/howitry/gists{/gist_id}",
"starred_url": "https://api.github.com/users/howitry/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/howitry/subscriptions",
"organizations_url": "https://api.github.com/users/howitry/orgs",
"repos_url": "https://api.github.com/users/howitry/repos",
"events_url": "https://api.github.com/users/howitry/events{/privacy}",
"received_events_url": "https://api.github.com/users/howitry/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
open
| false
| null |
[] | null |
[
"Hi ! As you can read in the logs, the shuffle buffer content is lost when resuming a shuffled dataset. The default size is 1000 examples, but you can tweak it\n\ne.g. if you run your code with this\n\n```diff\n- ds = Dataset.from_dict({\"a\": range(12)}).to_iterable_dataset(num_shards=1)\n- ds = ds.shuffle(seed=42)\n+ ds = Dataset.from_dict({\"a\": range(100)}).to_iterable_dataset(num_shards=1)\n+ ds = ds.shuffle(seed=42, buffer_size=10)\n```\n\nthen you get\n\n```\n{'a': 0}\n{'a': 7}\n{'a': 6}\ncheckpoint\nrestart from checkpoint\nLoading a state dict of a shuffle buffer of a dataset without the buffer content.The shuffle buffer will be refilled before starting to yield new examples.\n{'a': 17}\n{'a': 15}\n{'a': 24}\n{'a': 19}\n{'a': 21}\n{'a': 23}\n...\n```\n\nwhere you only lose 10 rows ([1, 2, 3, 4, 5, 8, 9, 10, 11, 12])",
"> Hi ! As you can read in the logs, the shuffle buffer content is lost when resuming a shuffled dataset. The default size is 1000 examples, but you can tweak it\n> \n> e.g. if you run your code with this\n> \n> - ds = Dataset.from_dict({\"a\": range(12)}).to_iterable_dataset(num_shards=1)\n> - ds = ds.shuffle(seed=42)\n> + ds = Dataset.from_dict({\"a\": range(100)}).to_iterable_dataset(num_shards=1)\n> + ds = ds.shuffle(seed=42, buffer_size=10)\n> then you get\n> \n> ```\n> {'a': 0}\n> {'a': 7}\n> {'a': 6}\n> checkpoint\n> restart from checkpoint\n> Loading a state dict of a shuffle buffer of a dataset without the buffer content.The shuffle buffer will be refilled before starting to yield new examples.\n> {'a': 17}\n> {'a': 15}\n> {'a': 24}\n> {'a': 19}\n> {'a': 21}\n> {'a': 23}\n> ...\n> ```\n> \n> where you only lose 10 rows ([1, 2, 3, 4, 5, 8, 9, 10, 11, 12])\n\nThank you for your answer. So, when ShuffledDataSourcesArrowExamplesIterable resumes training, it will definitely discard unused data in buffer_size?",
"Yes correct. This is because the state_dict doesn't save the content of the buffer, so when resuming the buffer starts empty and the examples that were in the buffer are lost."
] | 2025-12-12T06:57:32
| 2025-12-16T19:34:46
| null |
NONE
| null | null |
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
|
{
"blocked_by": 0,
"total_blocked_by": 0,
"blocking": 0,
"total_blocking": 0
}
|
### Describe the bug
ShuffledDataSourcesArrowExamplesIterable cannot properly resume from checkpoint
### Steps to reproduce the bug
1. The reproducible code is as follows:
```python
from datasets import Dataset, concatenate_datasets, interleave_datasets
ds = Dataset.from_dict({"a": range(12)}).to_iterable_dataset(num_shards=1)
ds = ds.shuffle(seed=42)
for idx, example in enumerate(ds):
print(example)
if idx == 2: #The checkpoint can be loaded correctly only when idx <= 1.
state_dict = ds.state_dict()
print("checkpoint")
break
print("state_dict: ",state_dict)
ds.load_state_dict(state_dict)
print(f"restart from checkpoint")
for example in ds:
print(example)
```
2. The error message is as follows:
```
{'a': 0}
{'a': 7}
{'a': 6}
checkpoint
state_dict: {'examples_iterable': {'examples_iterable': {'examples_iterable': {'shard_idx': 1, 'shard_example_idx': 0, 'type': 'ShuffledDataSourcesArrowExamplesIterable'}, 'previous_state': {'shard_idx': 0, 'shard_example_idx': 0, 'type': 'ShuffledDataSourcesArrowExamplesIterable'}, 'batch_idx': 12, 'num_chunks_since_previous_state': 12, 'cropped_chunk_length': 0, 'type': 'RebatchedArrowExamplesIterable'}, 'previous_state': {'examples_iterable': {'shard_idx': 1, 'shard_example_idx': 0, 'type': 'ShuffledDataSourcesArrowExamplesIterable'}, 'previous_state': {'shard_idx': 0, 'shard_example_idx': 0, 'type': 'ShuffledDataSourcesArrowExamplesIterable'}, 'batch_idx': 12, 'num_chunks_since_previous_state': 12, 'cropped_chunk_length': 0, 'type': 'RebatchedArrowExamplesIterable'}, 'batch_idx': 3, 'num_chunks_since_previous_state': 2, 'cropped_chunk_length': 0, 'type': 'RebatchedArrowExamplesIterable'}, 'epoch': 0}
restart from checkpoint
Loading a state dict of a shuffle buffer of a dataset without the buffer content.The shuffle buffer will be refilled before starting to yield new examples.
```
### Expected behavior
I want a correct resume from any checkpoint, but currently the checkpoint can only be loaded correctly when idx <= 1.
### Environment info
datasets Version: 4.4.1
@lhoestq
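
For context, a toy model of a shuffle buffer (an illustration only, not the actual `datasets` implementation) shows why the buffered-but-not-yet-yielded examples cannot be restored from a checkpoint that records only the source position:

```python
import random

def shuffle_buffer(source, buffer_size, seed):
    """Toy model of an iterable-dataset shuffle buffer.

    Items first fill a fixed-size buffer; each new item then evicts
    (yields) a randomly chosen buffered item and takes its slot.
    """
    rng = random.Random(seed)
    buffer = []
    for item in source:
        if len(buffer) < buffer_size:
            buffer.append(item)
        else:
            i = rng.randrange(buffer_size)
            yield buffer[i]
            buffer[i] = item
    # Source exhausted: flush the remaining buffered items.
    rng.shuffle(buffer)
    yield from buffer

# With buffer_size=4, every yielded item depends on 4 examples held in the
# buffer. A checkpoint that stores only the source position (not the buffer
# contents) cannot restore those pending examples, so they are skipped when
# resuming -- which matches the "buffer will be refilled" warning above.
out = list(shuffle_buffer(range(12), buffer_size=4, seed=42))
```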
| null |
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/7901/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/7901/timeline
| null | null | null | null | false
|
https://api.github.com/repos/huggingface/datasets/issues/7900
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/7900/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/7900/comments
|
https://api.github.com/repos/huggingface/datasets/issues/7900/events
|
https://github.com/huggingface/datasets/issues/7900
| 3,711,751,590
|
I_kwDODunzps7dPNWm
| 7,900
|
`Permission denied` when sharing cache between users
|
{
"login": "qthequartermasterman",
"id": 19497738,
"node_id": "MDQ6VXNlcjE5NDk3NzM4",
"avatar_url": "https://avatars.githubusercontent.com/u/19497738?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/qthequartermasterman",
"html_url": "https://github.com/qthequartermasterman",
"followers_url": "https://api.github.com/users/qthequartermasterman/followers",
"following_url": "https://api.github.com/users/qthequartermasterman/following{/other_user}",
"gists_url": "https://api.github.com/users/qthequartermasterman/gists{/gist_id}",
"starred_url": "https://api.github.com/users/qthequartermasterman/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/qthequartermasterman/subscriptions",
"organizations_url": "https://api.github.com/users/qthequartermasterman/orgs",
"repos_url": "https://api.github.com/users/qthequartermasterman/repos",
"events_url": "https://api.github.com/users/qthequartermasterman/events{/privacy}",
"received_events_url": "https://api.github.com/users/qthequartermasterman/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
open
| false
| null |
[] | null |
[
"I remember a fix from last year to use the current umask for filelock 3.10.0, which filelock version are you using ? can you try another version ?",
"I believe we are using `filelock==3.19.1`. Do you have a recommended version to use?",
"Our test suite has been testing all versions over time for years now, including 3.19.1. And we didn't get failures for our tests.\n\nI'm not sure but this could be an actual permission error unrelated to filelock then"
] | 2025-12-09T16:41:47
| 2026-01-09T11:08:36
| null |
NONE
| null | null |
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
|
{
"blocked_by": 0,
"total_blocked_by": 0,
"blocking": 0,
"total_blocking": 0
}
|
### Describe the bug
We want to use `datasets` and `transformers` on a shared machine. Right now, each user has a separate HF_HOME in their home directory. To reduce duplicates of the datasets, we want to share that cache. While experimenting, we are running into `Permission denied` errors.
It looks like this was supported in the past (see #6589)?
Is there a correct way to share caches across users?
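
For illustration, a minimal sketch of the suspected failure mode (this is an assumption: the lock file is created by the first user with a non-group-writable mode such as `0o644`, so a second user in the same group cannot open it for writing):

```python
import os
import stat
import tempfile

with tempfile.TemporaryDirectory() as d:
    lock = os.path.join(d, "demo.lock")
    # First user creates the lock file. The requested mode 0o644 is
    # filtered through the umask, which can only clear bits, so the
    # group-write bit is never set no matter what the umask is.
    fd = os.open(lock, os.O_CREAT | os.O_RDWR, 0o644)
    os.close(fd)
    mode = stat.S_IMODE(os.stat(lock).st_mode)
    assert not (mode & stat.S_IWGRP)  # a second group member cannot write
```

If that is what happens here, a group-shared cache directory typically also needs a group-write umask (e.g. `umask 002`) and the setgid bit on directories so that new files inherit the shared group.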
### Steps to reproduce the bug
1. Create a directory `/models/hf_hub_shared_experiment` with read/write permissions for two different users
2. For each user run the script below
```python
import os
os.environ["HF_HOME"] = "/models/hf_hub_shared_experiment"
os.environ["HF_DATASETS_CACHE"] = "/models/hf_hub_shared_experiment/data"
import datasets
import transformers
DATASET = "tatsu-lab/alpaca"
MODEL = "meta-llama/Llama-3.2-1B-Instruct"
model = transformers.AutoModelForCausalLM.from_pretrained(MODEL)
tokenizer = transformers.AutoTokenizer.from_pretrained(MODEL)
dataset = datasets.load_dataset(DATASET)
```
The first user is able to download and use the model and dataset. The second user gets these errors:
```
$ python ./experiment_with_shared.py
Could not cache non-existence of file. Will ignore error and continue. Error: [Errno 13] Permission denied: '/models/hf_hub_shared_experiment/hub/models--meta-llama--Llama-3.2-1B-Instruct/.no_exist/9213176726f574b556790deb65791e0c5aa438b6/custom_generate/generate.py'
Could not cache non-existence of file. Will ignore error and continue. Error: [Errno 13] Permission denied: '/models/hf_hub_shared_experiment/hub/datasets--tatsu-lab--alpaca/.no_exist/dce01c9b08f87459cf36a430d809084718273017/alpaca.py'
Could not cache non-existence of file. Will ignore error and continue. Error: [Errno 13] Permission denied: '/models/hf_hub_shared_experiment/hub/datasets--tatsu-lab--alpaca/.no_exist/dce01c9b08f87459cf36a430d809084718273017/.huggingface.yaml'
Could not cache non-existence of file. Will ignore error and continue. Error: [Errno 13] Permission denied: '/models/hf_hub_shared_experiment/hub/datasets--tatsu-lab--alpaca/.no_exist/dce01c9b08f87459cf36a430d809084718273017/dataset_infos.json'
Traceback (most recent call last):
File "/home/user2/.venv/experiment_with_shared.py", line 17, in <module>
dataset = datasets.load_dataset(DATASET)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/user2/.venv/lib/python3.12/site-packages/datasets/load.py", line 1397, in load_dataset
builder_instance = load_dataset_builder(
^^^^^^^^^^^^^^^^^^^^^
File "/home/user2/.venv/lib/python3.12/site-packages/datasets/load.py", line 1171, in load_dataset_builder
builder_instance: DatasetBuilder = builder_cls(
^^^^^^^^^^^^
File "/home/user2/.venv/lib/python3.12/site-packages/datasets/builder.py", line 390, in __init__
with FileLock(lock_path):
File "/home/user2/.venv/lib/python3.12/site-packages/filelock/_api.py", line 377, in __enter__
self.acquire()
File "/home/user2/.venv/lib/python3.12/site-packages/filelock/_api.py", line 333, in acquire
self._acquire()
File "/home/user2/.venv/lib/python3.12/site-packages/filelock/_unix.py", line 45, in _acquire
fd = os.open(self.lock_file, open_flags, self._context.mode)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
PermissionError: [Errno 13] Permission denied: '/models/hf_hub_shared_experiment/data/_models_hf_hub_shared_experiment_data_tatsu-lab___alpaca_default_0.0.0_dce01c9b08f87459cf36a430d809084718273017.lock'
```
### Expected behavior
The second user should be able to read the shared cache files.
### Environment info
$ datasets-cli env
- `datasets` version: 4.4.1
- Platform: Linux-6.8.0-88-generic-x86_64-with-glibc2.39
- Python version: 3.12.3
- `huggingface_hub` version: 0.36.0
- PyArrow version: 22.0.0
- Pandas version: 2.3.3
- `fsspec` version: 2025.10.0
| null |
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/7900/reactions",
"total_count": 3,
"+1": 2,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 1
}
|
https://api.github.com/repos/huggingface/datasets/issues/7900/timeline
| null | null | null | null | false
|