| url (string) | repository_url (string) | labels_url (string) | comments_url (string) | events_url (string) | html_url (string) | id (int64) | node_id (string) | number (int64) | title (string) | user (dict) | labels (list) | state (string) | locked (bool) | assignee (dict) | assignees (list) | milestone (dict) | comments (list) | created_at (timestamp[ns, tz=UTC]) | updated_at (timestamp[ns, tz=UTC]) | closed_at (timestamp[ns, tz=UTC]) | author_association (string) | type (float64) | active_lock_reason (float64) | draft (float64) | pull_request (dict) | body (string) | closed_by (dict) | reactions (dict) | timeline_url (string) | performed_via_github_app (float64) | state_reason (string) | sub_issues_summary (dict) | issue_dependencies_summary (dict) | is_pull_request (bool) |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
https://api.github.com/repos/huggingface/datasets/issues/7945
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/7945/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/7945/comments
|
https://api.github.com/repos/huggingface/datasets/issues/7945/events
|
https://github.com/huggingface/datasets/pull/7945
| 3,814,399,493
|
PR_kwDODunzps69ODBk
| 7,945
|
set dev version
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq",
"user_view_type": "public"
}
|
[] |
closed
| false
| null |
[] | null |
[
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_7945). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update."
] | 2026-01-14T18:34:50
| 2026-01-14T18:37:30
| 2026-01-14T18:34:56
|
MEMBER
| null | null | 0
|
{
"diff_url": "https://github.com/huggingface/datasets/pull/7945.diff",
"html_url": "https://github.com/huggingface/datasets/pull/7945",
"merged_at": "2026-01-14T18:34:56Z",
"patch_url": "https://github.com/huggingface/datasets/pull/7945.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/7945"
}
| null |
{
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq",
"user_view_type": "public"
}
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7945/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/7945/timeline
| null | null | null | null | true
|
https://api.github.com/repos/huggingface/datasets/issues/7944
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/7944/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/7944/comments
|
https://api.github.com/repos/huggingface/datasets/issues/7944/events
|
https://github.com/huggingface/datasets/pull/7944
| 3,814,369,524
|
PR_kwDODunzps69N8bB
| 7,944
|
Release: 4.5.0
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq",
"user_view_type": "public"
}
|
[] |
closed
| false
| null |
[] | null |
[
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_7944). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update."
] | 2026-01-14T18:27:17
| 2026-01-14T18:30:23
| 2026-01-14T18:28:25
|
MEMBER
| null | null | 0
|
{
"diff_url": "https://github.com/huggingface/datasets/pull/7944.diff",
"html_url": "https://github.com/huggingface/datasets/pull/7944",
"merged_at": "2026-01-14T18:28:25Z",
"patch_url": "https://github.com/huggingface/datasets/pull/7944.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/7944"
}
| null |
{
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq",
"user_view_type": "public"
}
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7944/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/7944/timeline
| null | null | null | null | true
|
https://api.github.com/repos/huggingface/datasets/issues/7943
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/7943/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/7943/comments
|
https://api.github.com/repos/huggingface/datasets/issues/7943/events
|
https://github.com/huggingface/datasets/pull/7943
| 3,809,778,662
|
PR_kwDODunzps68-rLO
| 7,943
|
Add _generate_shards
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq",
"user_view_type": "public"
}
|
[] |
closed
| false
| null |
[] | null |
[
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_7943). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update."
] | 2026-01-13T17:10:03
| 2026-01-14T16:46:53
| 2026-01-14T16:46:51
|
MEMBER
| null | null | 0
|
{
"diff_url": "https://github.com/huggingface/datasets/pull/7943.diff",
"html_url": "https://github.com/huggingface/datasets/pull/7943",
"merged_at": "2026-01-14T16:46:50Z",
"patch_url": "https://github.com/huggingface/datasets/pull/7943.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/7943"
}
|
Useful to list a dataset's shards:
```python
from datasets import load_dataset_builder, StreamingDownloadManager

dlm = StreamingDownloadManager()

def get_shards(dataset_name, *args, **kwargs):
    b = load_dataset_builder(dataset_name, *args, **kwargs)
    splits = b._split_generators(dlm)
    return list(b._generate_shards(**splits[0].gen_kwargs))

print(get_shards("username/dataset_name"))
# ['hf://datasets/...', ...]
```
I'll use this in combination with https://github.com/huggingface/datasets/pull/7897 in the Dataset Viewer for an API endpoint that maps {dataset, config, split, offset, limit} -> [{fileUri, offset, limit}]. This will be useful for editing datasets, since you can get a row's location inside a dataset. cc @cfahlgren1
This will be similar to https://github.com/huggingface/dataset-viewer/pull/3276 but works for any dataset format: csv, json, webdataset, images etc.
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq",
"user_view_type": "public"
}
|
{
"+1": 1,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 1,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7943/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/7943/timeline
| null | null | null | null | true
|
https://api.github.com/repos/huggingface/datasets/issues/7942
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/7942/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/7942/comments
|
https://api.github.com/repos/huggingface/datasets/issues/7942/events
|
https://github.com/huggingface/datasets/pull/7942
| 3,808,890,451
|
PR_kwDODunzps687sR_
| 7,942
|
add _OverridableIOWrapper
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq",
"user_view_type": "public"
}
|
[] |
closed
| false
| null |
[] | null |
[
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_7942). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update."
] | 2026-01-13T13:37:09
| 2026-01-13T13:40:21
| 2026-01-13T13:38:02
|
MEMBER
| null | null | 0
|
{
"diff_url": "https://github.com/huggingface/datasets/pull/7942.diff",
"html_url": "https://github.com/huggingface/datasets/pull/7942",
"merged_at": "2026-01-13T13:38:02Z",
"patch_url": "https://github.com/huggingface/datasets/pull/7942.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/7942"
}
|
fix https://github.com/huggingface/datasets/issues/7936
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq",
"user_view_type": "public"
}
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7942/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/7942/timeline
| null | null | null | null | true
|
https://api.github.com/repos/huggingface/datasets/issues/7941
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/7941/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/7941/comments
|
https://api.github.com/repos/huggingface/datasets/issues/7941/events
|
https://github.com/huggingface/datasets/pull/7941
| 3,807,800,603
|
PR_kwDODunzps684EZa
| 7,941
|
Remove Python 3.7 and Python 2 code paths from _dill.py
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/4872288?v=4",
"events_url": "https://api.github.com/users/tboerstad/events{/privacy}",
"followers_url": "https://api.github.com/users/tboerstad/followers",
"following_url": "https://api.github.com/users/tboerstad/following{/other_user}",
"gists_url": "https://api.github.com/users/tboerstad/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/tboerstad",
"id": 4872288,
"login": "tboerstad",
"node_id": "MDQ6VXNlcjQ4NzIyODg=",
"organizations_url": "https://api.github.com/users/tboerstad/orgs",
"received_events_url": "https://api.github.com/users/tboerstad/received_events",
"repos_url": "https://api.github.com/users/tboerstad/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/tboerstad/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/tboerstad/subscriptions",
"type": "User",
"url": "https://api.github.com/users/tboerstad",
"user_view_type": "public"
}
|
[] |
open
| false
| null |
[] | null |
[] | 2026-01-13T08:44:31
| 2026-01-13T08:44:31
| null |
NONE
| null | null | 0
|
{
"diff_url": "https://github.com/huggingface/datasets/pull/7941.diff",
"html_url": "https://github.com/huggingface/datasets/pull/7941",
"merged_at": null,
"patch_url": "https://github.com/huggingface/datasets/pull/7941.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/7941"
}
|
This PR simplifies the pickle handling code to only support Python 3.9+.
Datasets requires Python 3.9+ (since PR #7474).
There are some dill-specific code branches checking for earlier versions of Python which can be removed.
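A minimal sketch of the kind of dead branch being removed, assuming the 3.9+ floor stated above (illustrative, not the actual `_dill.py` diff):
```python
import sys

# With Python 3.9+ required, version guards like this can never be true on a
# supported interpreter, so the branch and its body are safe to delete.
if sys.version_info < (3, 8):
    raise RuntimeError("unreachable: datasets requires Python 3.9+")
```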
| null |
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7941/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/7941/timeline
| null | null | null | null | true
|
https://api.github.com/repos/huggingface/datasets/issues/7940
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/7940/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/7940/comments
|
https://api.github.com/repos/huggingface/datasets/issues/7940/events
|
https://github.com/huggingface/datasets/pull/7940
| 3,807,386,503
|
PR_kwDODunzps682sBj
| 7,940
|
Improve readability and documentation of indexing integration tests
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/115862867?v=4",
"events_url": "https://api.github.com/users/DeeptiAgarwal16/events{/privacy}",
"followers_url": "https://api.github.com/users/DeeptiAgarwal16/followers",
"following_url": "https://api.github.com/users/DeeptiAgarwal16/following{/other_user}",
"gists_url": "https://api.github.com/users/DeeptiAgarwal16/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/DeeptiAgarwal16",
"id": 115862867,
"login": "DeeptiAgarwal16",
"node_id": "U_kgDOBuftUw",
"organizations_url": "https://api.github.com/users/DeeptiAgarwal16/orgs",
"received_events_url": "https://api.github.com/users/DeeptiAgarwal16/received_events",
"repos_url": "https://api.github.com/users/DeeptiAgarwal16/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/DeeptiAgarwal16/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/DeeptiAgarwal16/subscriptions",
"type": "User",
"url": "https://api.github.com/users/DeeptiAgarwal16",
"user_view_type": "public"
}
|
[] |
open
| false
| null |
[] | null |
[] | 2026-01-13T06:42:07
| 2026-01-13T06:42:07
| null |
NONE
| null | null | 0
|
{
"diff_url": "https://github.com/huggingface/datasets/pull/7940.diff",
"html_url": "https://github.com/huggingface/datasets/pull/7940",
"merged_at": null,
"patch_url": "https://github.com/huggingface/datasets/pull/7940.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/7940"
}
|
### Summary
This PR improves the readability and maintainability of the indexing integration tests by adding clear, detailed comments throughout the test suite.
### Motivation
The indexing tests cover multiple backends (FAISS and Elasticsearch) and involve non-trivial workflows such as vector creation, indexing, querying, and serialization. Adding explanatory comments helps new contributors and reviewers understand the intent of each test case more easily.
### What’s Changed
- Added descriptive docstrings to test classes and methods
- Included inline comments explaining:
- Dataset construction
- Index creation and configuration
- Search and batch search behavior
- Error handling and validation logic
- Serialization and cleanup steps
- No functional or behavioral changes
### Scope
- Documentation and readability improvements only
- No changes to test logic, APIs, or expected behavior
### Impact
This change lowers the barrier for new contributors, improves code comprehension, and makes future maintenance easier without affecting test coverage or performance.
### Checklist
- [x] No functional changes introduced
- [x] Tests pass locally
- [x] Follows existing project style and conventions
| null |
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7940/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/7940/timeline
| null | null | null | null | true
|
https://api.github.com/repos/huggingface/datasets/issues/7939
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/7939/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/7939/comments
|
https://api.github.com/repos/huggingface/datasets/issues/7939/events
|
https://github.com/huggingface/datasets/issues/7939
| 3,806,889,870
|
I_kwDODunzps7i6IeO
| 7,939
|
datasets.load_from_disk progress bar optional manual control
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/60286968?v=4",
"events_url": "https://api.github.com/users/Tommigun1980/events{/privacy}",
"followers_url": "https://api.github.com/users/Tommigun1980/followers",
"following_url": "https://api.github.com/users/Tommigun1980/following{/other_user}",
"gists_url": "https://api.github.com/users/Tommigun1980/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/Tommigun1980",
"id": 60286968,
"login": "Tommigun1980",
"node_id": "MDQ6VXNlcjYwMjg2OTY4",
"organizations_url": "https://api.github.com/users/Tommigun1980/orgs",
"received_events_url": "https://api.github.com/users/Tommigun1980/received_events",
"repos_url": "https://api.github.com/users/Tommigun1980/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/Tommigun1980/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Tommigun1980/subscriptions",
"type": "User",
"url": "https://api.github.com/users/Tommigun1980",
"user_view_type": "public"
}
|
[
{
"color": "a2eeef",
"default": true,
"description": "New feature or request",
"id": 1935892871,
"name": "enhancement",
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement"
}
] |
open
| false
| null |
[] | null |
[] | 2026-01-13T03:19:13
| 2026-01-13T03:19:13
| null |
NONE
| null | null | null | null |
### Feature request
This is tangentially related to [https://github.com/huggingface/datasets/issues/7918](https://github.com/huggingface/datasets/issues/7918).
When loading a dataset with > 16 files a progress bar is shown (unless stdout is redirected or [https://github.com/huggingface/datasets/pull/7919](https://github.com/huggingface/datasets/pull/7919) is merged).
However, if you use multiple processes with data sharding, where each process loads the dataset, you get multiple copies of the progress bar (all fighting each other). It would be greatly appreciated if `datasets.load_from_disk` accepted an argument controlling whether to show a progress bar; the default could be `None`, which would retain the current behaviour (i.e. show if the dataset has > 16 files), but the user could also force the progress bar on or off as needed. Essentially, expose the currently hardcoded progress-bar visibility flag as a method argument so that the user can control it (see the workaround sketch at the end of this issue).
### Motivation
The progress bar could be forced off in all processes but one, avoiding progress-bar fighting and log spam.
It could also be manually forced on or off for other use cases.
### Your contribution
Possibly do a PR if this is accepted.
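In the meantime, a minimal workaround sketch using the existing global switch, assuming the launcher exposes a process index via the `RANK` environment variable (as `torchrun` does):
```python
import os

import datasets

# Silence all datasets progress bars on every process except rank 0, so only
# one bar renders. RANK is an assumption about the launcher, not the library.
if int(os.environ.get("RANK", "0")) != 0:
    datasets.disable_progress_bars()

ds = datasets.load_from_disk("/path/to/dataset")  # hypothetical path
```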
| null |
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7939/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/7939/timeline
| null | null |
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
|
{
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
}
| false
|
https://api.github.com/repos/huggingface/datasets/issues/7938
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/7938/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/7938/comments
|
https://api.github.com/repos/huggingface/datasets/issues/7938/events
|
https://github.com/huggingface/datasets/pull/7938
| 3,804,486,642
|
PR_kwDODunzps68tYX1
| 7,938
|
Fix method to retrieve attributes from file object
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq",
"user_view_type": "public"
}
|
[] |
closed
| false
| null |
[] | null |
[
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_7938). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update."
] | 2026-01-12T14:08:31
| 2026-01-12T14:13:12
| 2026-01-12T14:10:12
|
MEMBER
| null | null | 0
|
{
"diff_url": "https://github.com/huggingface/datasets/pull/7938.diff",
"html_url": "https://github.com/huggingface/datasets/pull/7938",
"merged_at": "2026-01-12T14:10:11Z",
"patch_url": "https://github.com/huggingface/datasets/pull/7938.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/7938"
}
|
fix https://github.com/huggingface/datasets/issues/7936
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq",
"user_view_type": "public"
}
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7938/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/7938/timeline
| null | null | null | null | true
|
https://api.github.com/repos/huggingface/datasets/issues/7937
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/7937/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/7937/comments
|
https://api.github.com/repos/huggingface/datasets/issues/7937/events
|
https://github.com/huggingface/datasets/pull/7937
| 3,803,185,984
|
PR_kwDODunzps68pBId
| 7,937
|
Fix duplicate log messages by disabling log propagation by default
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/4872288?v=4",
"events_url": "https://api.github.com/users/tboerstad/events{/privacy}",
"followers_url": "https://api.github.com/users/tboerstad/followers",
"following_url": "https://api.github.com/users/tboerstad/following{/other_user}",
"gists_url": "https://api.github.com/users/tboerstad/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/tboerstad",
"id": 4872288,
"login": "tboerstad",
"node_id": "MDQ6VXNlcjQ4NzIyODg=",
"organizations_url": "https://api.github.com/users/tboerstad/orgs",
"received_events_url": "https://api.github.com/users/tboerstad/received_events",
"repos_url": "https://api.github.com/users/tboerstad/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/tboerstad/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/tboerstad/subscriptions",
"type": "User",
"url": "https://api.github.com/users/tboerstad",
"user_view_type": "public"
}
|
[] |
open
| false
| null |
[] | null |
[] | 2026-01-12T08:03:18
| 2026-01-13T08:10:24
| null |
NONE
| null | null | 0
|
{
"diff_url": "https://github.com/huggingface/datasets/pull/7937.diff",
"html_url": "https://github.com/huggingface/datasets/pull/7937",
"merged_at": null,
"patch_url": "https://github.com/huggingface/datasets/pull/7937.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/7937"
}
|
This PR fixes an issue where applications that configure logging see duplicate messages from `datasets`:
```python
import logging
logging.basicConfig(level=logging.WARNING)
from datasets.utils.logging import get_logger
get_logger("datasets.load").warning("This appears twice")
```
Outputs:
```
This appears twice
WARNING:datasets.load:This appears twice
```
This non-standard behaviour breaks applications' default logging setup. The docstring for `disable_propagation()` incorrectly says: [Note that log propagation is disabled by default](https://github.com/huggingface/datasets/blob/6a1bc355a0ca2c8f9f5c10698215212f0f14e7b7/src/datasets/utils/logging.py#L161C1-L162C1)
Perhaps this was copied over from `transformers`, which disables log propagation by default unless it's running under CI: [library_root_logger.propagate = is_ci](https://github.com/huggingface/transformers/blob/37974267efefe020168ff27081fbab8bbce04720/src/transformers/utils/logging.py#L103)
To restore the old behaviour, users can do:
```python
import datasets
datasets.logging.enable_propagation()
```
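For reference, a minimal stdlib-level sketch of what an application can do today to get the same deduplication, without waiting for this PR:
```python
import logging

logging.basicConfig(level=logging.WARNING)

# Stop records from the datasets logger hierarchy from also bubbling up to the
# root handler, which is what produces the second copy of each line.
logging.getLogger("datasets").propagate = False
```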
| null |
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7937/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/7937/timeline
| null | null | null | null | true
|
https://api.github.com/repos/huggingface/datasets/issues/7936
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/7936/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/7936/comments
|
https://api.github.com/repos/huggingface/datasets/issues/7936/events
|
https://github.com/huggingface/datasets/issues/7936
| 3,795,750,271
|
I_kwDODunzps7iPo1_
| 7,936
|
_add_retries_to_file_obj_read_method makes file_obj invalid for pyarrow
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/73142299?v=4",
"events_url": "https://api.github.com/users/li-yi-dong/events{/privacy}",
"followers_url": "https://api.github.com/users/li-yi-dong/followers",
"following_url": "https://api.github.com/users/li-yi-dong/following{/other_user}",
"gists_url": "https://api.github.com/users/li-yi-dong/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/li-yi-dong",
"id": 73142299,
"login": "li-yi-dong",
"node_id": "MDQ6VXNlcjczMTQyMjk5",
"organizations_url": "https://api.github.com/users/li-yi-dong/orgs",
"received_events_url": "https://api.github.com/users/li-yi-dong/received_events",
"repos_url": "https://api.github.com/users/li-yi-dong/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/li-yi-dong/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/li-yi-dong/subscriptions",
"type": "User",
"url": "https://api.github.com/users/li-yi-dong",
"user_view_type": "public"
}
|
[] |
closed
| false
| null |
[] | null |
[
"hmm not sure how to fix this, I believe `file_obj.__getattr__ = lambda _, attr: getattr(orig_file_obj, attr)` would make all the methods point to the original file_obj",
"> hmm not sure how to fix this, I believe `file_obj.__getattr__ = lambda _, attr: getattr(orig_file_obj, attr)` would make all the methods point to the original file_obj\n\nCould you verify by executing\n```python\nfrom datasets.utils.file_utils import xopen\nf = xopen('hdfs://xxxx.parquet', 'rb')\nf.readable()\n```\nIf it's indeed a bug, I think all data files that using pyarrow would break.",
"Just found the issue and merged a quick fix, feel free to install `datasets` from source and let me know if it works !",
"> Just found the issue and merged a quick fix, feel free to install `datasets` from source and let me know if it works !\n\nIt still not working 🥹\n\n<img width=\"1216\" height=\"348\" alt=\"Image\" src=\"https://github.com/user-attachments/assets/a68e8f3d-2491-4616-9777-951c02c88580\" />\n\n<img width=\"1780\" height=\"962\" alt=\"Image\" src=\"https://github.com/user-attachments/assets/9ae8f799-0d24-40ac-8cae-6f5a77d84dec\" />",
"Arf sorry ! I opened https://github.com/huggingface/datasets/pull/7942, hopefully it's alright now ^^' feel free to try it out",
"It works. Thx a lot.",
"Yay !"
] | 2026-01-09T07:05:25
| 2026-01-14T13:47:50
| 2026-01-13T13:38:03
|
NONE
| null | null | null | null |
### Describe the bug
I'm trying to use `load_dataset` to construct a dataset that reads parquet data from HDFS in streaming mode, like
```python
ds = load_dataset(
    "parquet",
    data_files={
        "train": "hdfs://xxx/train*.parquet",
        "test": "hdfs://xxx/test*.parquet"
    },
    streaming=True,
)
```
I encountered an error
<img width="1784" height="662" alt="Image" src="https://github.com/user-attachments/assets/14f25602-ef37-4a84-83fc-dac426451163" />
In file src/datasets/packaged_modules/parquet/parquet.py,
```python
with open(file, "rb") as f:
self.info.features = datasets.Features.from_arrow_schema(pq.read_schema(f))
```
The `open` is replaced with `xopen` in src/datasets/utils/file_utils.py.
In the function `_add_retries_to_file_obj_read_method`, the original file object is replaced by an `io.RawIOBase()` instance. Even though it tries to proxy all methods back to the original file object, the result is still unusable for pyarrow.
```python
try:
    file_obj.read = read_with_retries
except AttributeError:  # read-only attribute
    orig_file_obj = file_obj
    file_obj = io.RawIOBase()
    file_obj.read = read_with_retries
    file_obj.__getattr__ = lambda _, attr: getattr(orig_file_obj, attr)
return file_obj
```
For example, the original `file_obj.readable()` is `True`, while the new `file_obj.readable()` is `False`.
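A minimal sketch of why the proxy fails (illustrative; not the actual `_OverridableIOWrapper` from PR #7942): special-method lookup happens on the type, so assigning `__getattr__` on a `RawIOBase` instance is silently ignored, and methods that `io.RawIOBase` already defines, such as `readable()`, are found on the class and never fall through to `__getattr__` at all.
```python
import io

class IOProxy(io.RawIOBase):
    def __init__(self, wrapped):
        self._wrapped = wrapped

    # Anything IOBase already defines must be overridden explicitly.
    def readable(self):
        return self._wrapped.readable()

    def read(self, *args, **kwargs):
        return self._wrapped.read(*args, **kwargs)

    def __getattr__(self, attr):
        # Only reached for attributes RawIOBase does not define.
        return getattr(self._wrapped, attr)
```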
### Steps to reproduce the bug
```python
from datasets.utils.file_utils import xopen
f = xopen('hdfs://xxxx.parquet', 'rb')
f.readable()
```
### Expected behavior
Not sure
### Environment info
Datasets 4.4.2
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq",
"user_view_type": "public"
}
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7936/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/7936/timeline
| null |
completed
|
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
|
{
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
}
| false
|
https://api.github.com/repos/huggingface/datasets/issues/7935
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/7935/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/7935/comments
|
https://api.github.com/repos/huggingface/datasets/issues/7935/events
|
https://github.com/huggingface/datasets/pull/7935
| 3,795,376,274
|
PR_kwDODunzps68QDVY
| 7,935
|
Bug fix: Add HDFS hostname to protocol prefix
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/73142299?v=4",
"events_url": "https://api.github.com/users/li-yi-dong/events{/privacy}",
"followers_url": "https://api.github.com/users/li-yi-dong/followers",
"following_url": "https://api.github.com/users/li-yi-dong/following{/other_user}",
"gists_url": "https://api.github.com/users/li-yi-dong/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/li-yi-dong",
"id": 73142299,
"login": "li-yi-dong",
"node_id": "MDQ6VXNlcjczMTQyMjk5",
"organizations_url": "https://api.github.com/users/li-yi-dong/orgs",
"received_events_url": "https://api.github.com/users/li-yi-dong/received_events",
"repos_url": "https://api.github.com/users/li-yi-dong/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/li-yi-dong/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/li-yi-dong/subscriptions",
"type": "User",
"url": "https://api.github.com/users/li-yi-dong",
"user_view_type": "public"
}
|
[] |
open
| false
| null |
[] | null |
[
"Hi ! is it related to https://github.com/huggingface/datasets/issues/7934 ?\r\n\r\nIt's not clear to me why the protocol would need this, given hostname should be present in `pattern` already\r\n\r\n```python\r\nresolve_pattern(\"hdfs://hostname/user/xxx\", ...)\r\n```",
"> Hi ! is it related to #7934 ?\r\n> \r\n> It's not clear to me why the protocol would need this, given hostname should be present in `pattern` already\r\n> \r\n> ```python\r\n> resolve_pattern(\"hdfs://hostname/user/xxx\", ...)\r\n> ```\r\n\r\nIt's related to #7934 in a subttle way. In my use case, I need to specify the hdfs hostname. In theory, I can do it by\r\n```python\r\nds = load_dataset(\r\n \"parquet\",\r\n data_files={\r\n \"train\": \"hdfs://hostname/xxx*.parquet\",\r\n },\r\n streaming=True,\r\n)\r\n```\r\nor\r\n```python\r\nds = load_dataset(\r\n \"parquet\",\r\n data_files={\r\n \"train\": \"hdfs:///xxx*.parquet\",\r\n },\r\n streaming=True,\r\n storage_options={\r\n \"host\": \"hostname\"\r\n }\r\n)\r\n```\r\nNone of them work.\r\nThe first one does not work due to what this PR trying to fix, and the second one due to #7934.\r\n\r\nYes, `resolve_pattern` would be called like `resolve_pattern(\"hdfs://hostname/user/xxx\", ...)`, but its out put would be like `hdfs:///user/xxx`, no hostname in it. This output would be passed to later file operation like `fsspec.open()`. It needs the hostname in the url to find the HDFS cluster correctly.",
"@lhoestq \r\nHi! Is there any concern?🙃",
"I see, I think the path forward is to fix https://github.com/huggingface/datasets/issues/7934 which sounds like an actual xPath bug, while resolve_pattern dropping the hostname comes from fsspec HDFS implementation that we should probably try to follow",
"Fixing #7934 alone can solve my problem. \r\n\r\nBut I don't think fsspec intends to drop the hostname. Function `resolve_pattern` here is supposed to convert a pattern to absolute file paths, and keeping the protocol intouched. `fs.glob` just returns the absolute paths to files, of which no hostname should in the result. The problem is how the function `resolve_pattern` reconstructs the whole path, ignoring the HDFS hostname in the protocol.\r\n\r\nFrom another point of view, in `resolve_pattern` `fs.glob` is call with `hdfs://hostname/user/xxx` but latter `fs.open` is called with `hdfs:///user/xxx`, which is inconsistent.\r\n\r\n"
] | 2026-01-09T03:59:45
| 2026-01-15T04:06:17
| null |
NONE
| null | null | 0
|
{
"diff_url": "https://github.com/huggingface/datasets/pull/7935.diff",
"html_url": "https://github.com/huggingface/datasets/pull/7935",
"merged_at": null,
"patch_url": "https://github.com/huggingface/datasets/pull/7935.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/7935"
}
|
For an HDFS URL with a hostname like `hdfs://hostname/user/xxx`, the function `resolve_pattern` drops the hostname and outputs `hdfs:///user/xxx`. This may break later file operations by making them connect to the wrong HDFS cluster.
| null |
{
"+1": 1,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 1,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7935/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/7935/timeline
| null | null | null | null | true
|
https://api.github.com/repos/huggingface/datasets/issues/7934
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/7934/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/7934/comments
|
https://api.github.com/repos/huggingface/datasets/issues/7934/events
|
https://github.com/huggingface/datasets/issues/7934
| 3,792,642,445
|
I_kwDODunzps7iDyGN
| 7,934
|
xPath cannot handle hdfs:///xxxx properly
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/73142299?v=4",
"events_url": "https://api.github.com/users/li-yi-dong/events{/privacy}",
"followers_url": "https://api.github.com/users/li-yi-dong/followers",
"following_url": "https://api.github.com/users/li-yi-dong/following{/other_user}",
"gists_url": "https://api.github.com/users/li-yi-dong/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/li-yi-dong",
"id": 73142299,
"login": "li-yi-dong",
"node_id": "MDQ6VXNlcjczMTQyMjk5",
"organizations_url": "https://api.github.com/users/li-yi-dong/orgs",
"received_events_url": "https://api.github.com/users/li-yi-dong/received_events",
"repos_url": "https://api.github.com/users/li-yi-dong/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/li-yi-dong/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/li-yi-dong/subscriptions",
"type": "User",
"url": "https://api.github.com/users/li-yi-dong",
"user_view_type": "public"
}
|
[] |
open
| false
| null |
[] | null |
[] | 2026-01-08T12:14:11
| 2026-01-08T12:14:11
| null |
NONE
| null | null | null | null |
### Describe the bug
`_as_str('hdfs:///xxxx')` returns `hdfs://xxxx`, removing one `/` and making the path invalid.
For the use case like
```python
ds = load_dataset(
    "parquet",
    data_files={
        "train": "hdfs:///user/path/to/data/train*.parquet",
    },
    streaming=True,
    storage_options={
        "host": "hostname",
    },
)
```
would get
```
File "/usr/local/lib/python3.11/site-packages/datasets/load.py", line 1511, in load_dataset
return builder_instance.as_streaming_dataset(split=split)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.11/site-packages/datasets/builder.py", line 1193, in as_streaming_dataset
splits_generators = {sg.name: sg for sg in self._split_generators(dl_manager)}
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.11/site-packages/datasets/packaged_modules/parquet/parquet.py", line 123, in _split_generators
with open(file, "rb") as f:
^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.11/site-packages/datasets/streaming.py", line 73, in wrapper
return function(*args, download_config=download_config, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.11/site-packages/datasets/utils/file_utils.py", line 963, in xopen
file_obj = fsspec.open(file, mode=mode, *args, **kwargs).open()
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.11/site-packages/fsspec/core.py", line 508, in open
out = open_files(
^^^^^^^^^^^
File "/usr/local/lib/python3.11/site-packages/fsspec/core.py", line 295, in open_files
fs, fs_token, paths = get_fs_token_paths(
^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.11/site-packages/fsspec/core.py", line 672, in get_fs_token_paths
chain = _un_chain(urlpath0, storage_options or {})
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.11/site-packages/fsspec/core.py", line 365, in _un_chain
kw = dict(
^^^^^
TypeError: dict() got multiple values for keyword argument 'host'
```
This is because the file passed to `fsspec.open` is `hdfs://user/path/to/data/trainxxx.parquet`, so fsspec takes `user` as the hostname, as the sketch below illustrates.
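A minimal illustration with the standard library, using a hypothetical path, of why the third slash matters:
```python
from urllib.parse import urlsplit

# With three slashes, the authority (hostname) is empty, so the host must come
# from storage_options and the path keeps its leading slash.
good = urlsplit("hdfs:///user/path/to/data/train.parquet")
print(good.netloc, good.path)  # -> '' /user/path/to/data/train.parquet

# With one slash dropped, "user" is parsed as the hostname and the path is wrong.
bad = urlsplit("hdfs://user/path/to/data/train.parquet")
print(bad.netloc, bad.path)    # -> user /path/to/data/train.parquet
```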
### Steps to reproduce the bug
<img width="992" height="148" alt="Image" src="https://github.com/user-attachments/assets/98e1dac2-e81b-4727-bf7a-55faaf0c8168" />
### Expected behavior
Keep all three /
### Environment info
datasets 4.4.2
| null |
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7934/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/7934/timeline
| null | null |
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
|
{
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
}
| false
|
https://api.github.com/repos/huggingface/datasets/issues/7933
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/7933/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/7933/comments
|
https://api.github.com/repos/huggingface/datasets/issues/7933/events
|
https://github.com/huggingface/datasets/pull/7933
| 3,780,607,384
|
PR_kwDODunzps67fNaP
| 7,933
|
feat: Add Apache TsFile format support
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/186699478?v=4",
"events_url": "https://api.github.com/users/sinanshamsudheen/events{/privacy}",
"followers_url": "https://api.github.com/users/sinanshamsudheen/followers",
"following_url": "https://api.github.com/users/sinanshamsudheen/following{/other_user}",
"gists_url": "https://api.github.com/users/sinanshamsudheen/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/sinanshamsudheen",
"id": 186699478,
"login": "sinanshamsudheen",
"node_id": "U_kgDOCyDO1g",
"organizations_url": "https://api.github.com/users/sinanshamsudheen/orgs",
"received_events_url": "https://api.github.com/users/sinanshamsudheen/received_events",
"repos_url": "https://api.github.com/users/sinanshamsudheen/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/sinanshamsudheen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sinanshamsudheen/subscriptions",
"type": "User",
"url": "https://api.github.com/users/sinanshamsudheen",
"user_view_type": "public"
}
|
[] |
open
| false
| null |
[] | null |
[
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_7933). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update."
] | 2026-01-05T08:28:12
| 2026-01-06T10:26:51
| null |
NONE
| null | null | 0
|
{
"diff_url": "https://github.com/huggingface/datasets/pull/7933.diff",
"html_url": "https://github.com/huggingface/datasets/pull/7933",
"merged_at": null,
"patch_url": "https://github.com/huggingface/datasets/pull/7933.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/7933"
}
|
# Add Apache TsFile format support
Adds support for loading `.tsfile` datasets. Closes #7922.
## What's TsFile?
[Apache TsFile](https://tsfile.apache.org/) is a columnar time-series format popular in IoT. The TsFile community requested this integration and offered to help maintain it.
## What I did
Created a new `TsFile` builder in `packaged_modules/tsfile/` following the same pattern as HDF5. Registered the module and added `.tsfile` extension mapping. Also added `tsfile>=2.0.0` as an optional dependency.
The builder uses `tsfile.to_dataframe()` with iterator mode for memory-efficient reading, then converts to PyArrow tables. Schema is inferred automatically from file metadata.
## Config options
- `batch_size` - rows per batch (default 10000)
- `table_name` - which table to read (for multi-table files)
- `columns` - filter specific columns
- `start_time` / `end_time` - time-range filtering
## Usage
```python
from datasets import load_dataset
ds = load_dataset("tsfile", data_files=["data.tsfile"], split="train")
# with filtering
ds = load_dataset("tsfile", data_files=["data.tsfile"],
columns=["temperature"], start_time=1609459200000)
```
## Tests
Added 11 tests covering config validation, basic loading, data integrity, feature inference, and error handling. All passing.
| null |
{
"+1": 4,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 4,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7933/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/7933/timeline
| null | null | null | null | true
|
https://api.github.com/repos/huggingface/datasets/issues/7932
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/7932/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/7932/comments
|
https://api.github.com/repos/huggingface/datasets/issues/7932/events
|
https://github.com/huggingface/datasets/pull/7932
| 3,777,725,050
|
PR_kwDODunzps67WqhL
| 7,932
|
Fix duplicate keyword conflict in load_dataset_builder
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/110705207?v=4",
"events_url": "https://api.github.com/users/Ashish570raj/events{/privacy}",
"followers_url": "https://api.github.com/users/Ashish570raj/followers",
"following_url": "https://api.github.com/users/Ashish570raj/following{/other_user}",
"gists_url": "https://api.github.com/users/Ashish570raj/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/Ashish570raj",
"id": 110705207,
"login": "Ashish570raj",
"node_id": "U_kgDOBpk6Nw",
"organizations_url": "https://api.github.com/users/Ashish570raj/orgs",
"received_events_url": "https://api.github.com/users/Ashish570raj/received_events",
"repos_url": "https://api.github.com/users/Ashish570raj/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/Ashish570raj/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Ashish570raj/subscriptions",
"type": "User",
"url": "https://api.github.com/users/Ashish570raj",
"user_view_type": "public"
}
|
[] |
open
| false
| null |
[] | null |
[
"Hi HuggingFace team\r\nThis PR fixes issue #4910 by safely merging builder_kwargs and config_kwargs to avoid duplicate keyword errors. \r\nA regression test is included to ensure this does not happen again. \r\n\r\nPlease let me know if you’d like any changes. Thanks!\r\n"
] | 2026-01-03T05:49:06
| 2026-01-03T05:52:02
| null |
NONE
| null | null | 0
|
{
"diff_url": "https://github.com/huggingface/datasets/pull/7932.diff",
"html_url": "https://github.com/huggingface/datasets/pull/7932",
"merged_at": null,
"patch_url": "https://github.com/huggingface/datasets/pull/7932.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/7932"
}
|
Fixes #4910
This PR fixes a bug where passing the same keyword in `builder_kwargs` and `config_kwargs` caused a `TypeError` in `load_dataset_builder`. The kwargs are now merged safely so that `config_kwargs` override `builder_kwargs` without duplication. A regression test is added to prevent this from happening again.
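A minimal sketch of the merge strategy described above (illustrative, not the PR's actual diff; `builder_cls` is a hypothetical stand-in):
```python
def merge_builder_kwargs(builder_kwargs: dict, config_kwargs: dict) -> dict:
    # Later entries win, so config_kwargs override builder_kwargs instead of
    # the same key reaching the builder twice and raising
    # "TypeError: got multiple values for keyword argument".
    return {**builder_kwargs, **config_kwargs}

# builder_instance = builder_cls(**merge_builder_kwargs(builder_kwargs, config_kwargs))
```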
| null |
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7932/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/7932/timeline
| null | null | null | null | true
|
https://api.github.com/repos/huggingface/datasets/issues/7931
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/7931/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/7931/comments
|
https://api.github.com/repos/huggingface/datasets/issues/7931/events
|
https://github.com/huggingface/datasets/issues/7931
| 3,777,662,799
|
I_kwDODunzps7hKo9P
| 7,931
|
Enable CORS + HTTP Range support for browser partial reads on cas-bridge.xethub.hf.co (Parquet row-group access)
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/8352840?v=4",
"events_url": "https://api.github.com/users/cornhundred/events{/privacy}",
"followers_url": "https://api.github.com/users/cornhundred/followers",
"following_url": "https://api.github.com/users/cornhundred/following{/other_user}",
"gists_url": "https://api.github.com/users/cornhundred/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/cornhundred",
"id": 8352840,
"login": "cornhundred",
"node_id": "MDQ6VXNlcjgzNTI4NDA=",
"organizations_url": "https://api.github.com/users/cornhundred/orgs",
"received_events_url": "https://api.github.com/users/cornhundred/received_events",
"repos_url": "https://api.github.com/users/cornhundred/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/cornhundred/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/cornhundred/subscriptions",
"type": "User",
"url": "https://api.github.com/users/cornhundred",
"user_view_type": "public"
}
|
[
{
"color": "a2eeef",
"default": true,
"description": "New feature or request",
"id": 1935892871,
"name": "enhancement",
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement"
}
] |
open
| false
| null |
[] | null |
[
"Cc @assafvayner maybe ? or @cfahlgren1 @severo if you've already encountered this ?",
"OK, reproduced with hyparquet on https://huggingface.co/spaces/hyperparam/hyperparam, see https://huggingface.co/spaces/hyperparam/hyperparam?url=https%3A%2F%2Fhuggingface.co%2Fdatasets%2Ffacebook%2Fresearch-plan-gen%2Fblob%2Frefs%2Fconvert%2Fparquet%2Farxiv%2Ftest%2F0000.parquet for example\n\nError message:\n\n```\nCross-Origin Request Blocked: The Same Origin Policy disallows reading the remote resource at https://cas-bridge.xethub.hf.co/xet-bridge-us/695138b5329f4825326ac6c8/8e12ca920d225791200b59843ddb8e469b1c3c59f92cc2a9ffdec88b16ca00f6?X-Amz-Algorithm=AWS4-HMAC-SHA256&X-Amz-Content-Sha256=UNSIGNED-PAYLOAD&X-Amz-Credential=...&X-Amz-Date=20260109T124456Z&X-Amz-Expires=3600&X-Amz-Signature=...\n```\n\nNote that it works in https://hyparam.github.io/demos/hyparquet/?key=https%3A%2F%2Fhuggingface.co%2Fdatasets%2Ffacebook%2Fresearch-plan-gen%2Fresolve%2Frefs%252Fconvert%252Fparquet%2Farxiv%2Ftest%2F0000.parquet, which uses a more recent version of hyparquet.\n\nSo, my guess is:\n\n- HEAD is forbidden in the S3 bucket (clients use HEAD to get the Parquet file size),\n- a possible fix, on client side, is to use GET with a 0-byte length: https://github.com/hyparam/hyparquet/pull/137.\n\nOn HF side, we should allow HEAD on signed URLs",
"cc @coyotte508 too, just in case",
"Possibly, the solution is to add `HEAD` in `AllowedMethods` in the S3 bucket configuration, as here:\n\n```json\n[\n {\n \"AllowedHeaders\": [\n \"*\"\n ],\n \"AllowedMethods\": [\n \"HEAD\",\n \"GET\"\n ],\n \"AllowedOrigins\": [\n \"*\"\n ],\n \"ExposeHeaders\": [\n \"Content-Range\",\n \"ETag\",\n \"x-amz-checksum-crc32\"\n ]\n }\n]\n```",
"cc @assafvayner @rajatarya @Hugoch for viz / fix xet-side\n\na very annoying \"feature\" with S3 is that presigned GET / HEAD urls aren't compatible with each other, eg a presigned GET can't do HEAD calls, which led to a host of issues and hacks on our side. We even escalated to AWS a few times, without success.\n\nNote that Cloudfront is ok, we can use a presigend GET url for HEAD calls with cloudfront, just not with S3 directly.\n\nSince the CAS bridge is not an actual S3 bucket, I would love for it to be able to respond to HEAD requests with a presigned GET url.\n\n(note: maybe it's just a CORS issue and not a presigned URL issue 🤷 )",
"Thanks for reporting this @cornhundred - we (Xet team) will take on the changes to add appropriate CORS headers to our Bridge service to enable this use case.\n\n@severo : Do the repro steps still work for you? I don't see any errors when I go to https://huggingface.co/spaces/hyperparam/hyperparam?url=https%3A%2F%2Fhuggingface.co%2Fdatasets%2Ffacebook%2Fresearch-plan-gen%2Fblob%2Frefs%2Fconvert%2Fparquet%2Farxiv%2Ftest%2F0000.parquet\n\nEither @cornhundred or @severo can you give me repro steps so we can be sure we've got this fixed correctly?",
"And I've filed https://linear.app/xet/issue/XET-815/bridge-add-cors-headers-to-support-parquet-range-reads to track this issue on the Xet side. I'll keep updating this GH issue with progress, but this way we won't lose track of this.",
"It looks like the bug has been fixed indeed. The HEAD request returns 200 and the response is used by the JS client.",
"> It looks like the bug has been fixed indeed. The HEAD request returns 200 and the response is used by the JS client.\n\nWell now I'm confused, because I'm pretty sure we didn't change/deploy anything on the Xet side related to this.",
"hmm, me too. I cannot reproduce the issue.\n\nHere is a screenshot of the HEAD request, which was an error 3 days ago:\n\n<img width=\"1839\" height=\"1718\" alt=\"Image\" src=\"https://github.com/user-attachments/assets/a2e49839-8ca5-405b-a7a4-c26489e9b417\" />\n\nThe response headers:\n\n```\nHTTP/1.1 200 OK\nContent-Length: 4086406\nConnection: keep-alive\nContent-Disposition: inline; filename*=UTF-8''0000.parquet; filename=\"0000.parquet\";\nCache-Control: public, max-age=31536000\nETag: \"8e12ca920d225791200b59843ddb8e469b1c3c59f92cc2a9ffdec88b16ca00f6\"\naccess-control-allow-origin: *\naccess-control-allow-headers: Content-Range, Content-Type, Content-Disposition, ETag\naccess-control-expose-headers: Accept-Ranges, Content-Range, Content-Type, Content-Disposition, ETag, X-Cache\nAccept-Ranges: bytes\nx-request-id: 01KEHD46KW533MGPAK7BXHY336\nDate: Fri, 09 Jan 2026 12:53:56 GMT\nX-Cache: Hit from cloudfront\nVia: 1.1 07cb86faf6a141962da4e2d7c85db038.cloudfront.net (CloudFront)\nX-Amz-Cf-Pop: CDG52-P1\nX-Amz-Cf-Id: jRjoc9PHoiv60CnDZfF6ZXc7Kwhbxc5DxtytvFpUT-EomFPDN1OjFw==\nAge: 270414\nContent-Security-Policy: default-src 'none'; sandbox\n```",
"Ah, this could no longer repro because now Cloudfront has cached this request - so the HEAD request to Cloudfront responds as expected.\n\nThe original issue is on Xet Bridge service (cas-bridge.xethub.hf.co) - maybe the issue remains that Bridge service doesn't have the appropriate CORS headers to support this request.",
"Also, I just tried with one of my private datasets. Not sure if it's related, on this URL I get an error, not with HEAD, but with the OPTIONS request.\n\n```\nXHR OPTIONS https://cas-bridge.xethub.hf.co/xet-bridge-us/655df24cde919d4162341a19/09ed3e86bf64d019919194d776abaa53b14acae6701129bb09f6169041b43f92?X-Amz-Algorithm=AWS4-HMAC-SHA256&X-Amz-Content-Sha256=UNSIGNED-PAYLOAD&X-Amz-Credential=cas/20260112/us-east-1/s3/aws4_request&X-Amz-Date=20260112T160451Z&X-Amz-Expires=3600&X-Amz-Signature=4562a3c72667a4bedc056c87e75a2810b52b682ebe18f7ff5b6c8d4ab081cc38&X-Amz-SignedHeaders=host&X-Xet-Cas-Uid=60a76b174e24361791fe822d&response-content-disposition=inline;+filename*=UTF-8''0000.parquet;+filename=\"0000.parquet\";&x-id=GetObject&Expires=1768237491&Policy=eyJTdGF0ZW1lbnQiOlt7IkNvbmRpdGlvbiI6eyJEYXRlTGVzc1RoYW4iOnsiQVdTOkVwb2NoVGltZSI6MTc2ODIzNzQ5MX19LCJSZXNvdXJjZSI6Imh0dHBzOi8vY2FzLWJyaWRnZS54ZXRodWIuaGYuY28veGV0LWJyaWRnZS11cy82NTVkZjI0Y2RlOTE5ZDQxNjIzNDFhMTkvMDllZDNlODZiZjY0ZDAxOTkxOTE5NGQ3NzZhYmFhNTNiMTRhY2FlNjcwMTEyOWJiMDlmNjE2OTA0MWI0M2Y5MioifV19&Signature=SsYstbIroSWXjARDLcHQyENDwkeq0l~Nsu9RvDA-x82YSnnd8dGa0wAlYiAS2STomKmZmDtTwti3RZ4Lha2dCSnNwqHJPIiF8jFFv4h5IDXm1VKzdh~14tmA1TNfpdwSdkWCpPxTgxY2kUeOJ-qldoY20Cp9K6G-GWanKYYRft~q9mlJy5E~l-CaXnRs1PBFRPj6sci-G0aCwXtjbBjUZCg2z4--e~uwLNKJHHVeDe3wUC~GNblRNwLm-EYTzINbfpm99g3t8wHRKAQJxiXZcVUsFqcULOLiVps2NPblzJNi9Y0SCgx3buEmtm~HkZ4IsjUTJj337y4MOpmZVSEvVg__&Key-Pair-Id=K2L8F4GPSG1IFC CORS Preflight Did Not Succeed\n...\nCross-Origin Request Blocked: The Same Origin Policy disallows reading the remote resource at https://cas-bridge.xethub.hf.co/xet-bridge-us/655df24cde919d4162341a19... \n```",
"Thanks @rajatarya, here is a link to a [Google Colab notebook](https://colab.research.google.com/drive/15soyg7g3CCdlBMDDcljeiVsq_yjeJREJ#scrollTo=bG5PfGTBK7wU) where the issue can be reproduced. The notebook tries to access a Hugging Face dataset on the front-end and it only works if we set up a proxy server to avoid the CORs issues. Otherwise we see this error in the console:\n\n```\noutputframe.html?vrz=colab-external_20260109-060046_RC02_854292615:1 Access to fetch at 'https://cas-bridge.xethub.hf.co/xet-bridge-us/695b406827b0975343f6a1a2/7f3468fcbc686cda54ec68dda14cc5f677402e9e0f540772f770b96b4a687916?X-Amz-Algorithm=AWS4-HMAC-SHA256&X-Amz-Content-Sha256=UNSIGNED-PAYLOAD&X-Amz-Credential=cas%2F20260112%2Fus-east-1%2Fs3%2Faws4_request&X-Amz-Date=20260112T215017Z&X-Amz-Expires=3600&X-Amz-Signature=eb126a50a8f9da1bc80110f0408f1e25b764d5d71d30502f13ece6469b487921&X-Amz-SignedHeaders=host&X-Xet-Cas-Uid=public&response-content-disposition=inline%3B+filename*%3DUTF-8%27%27chunk_0.parquet%3B+filename%3D%22chunk_0.parquet%22%3B&x-id=GetObject&Expires=1768258217&Policy=eyJTdGF0ZW1lbnQiOlt7IkNvbmRpdGlvbiI6eyJEYXRlTGVzc1RoYW4iOnsiQVdTOkVwb2NoVGltZSI6MTc2ODI1ODIxN319LCJSZXNvdXJjZSI6Imh0dHBzOi8vY2FzLWJyaWRnZS54ZXRodWIuaGYuY28veGV0LWJyaWRnZS11cy82OTViNDA2ODI3YjA5NzUzNDNmNmExYTIvN2YzNDY4ZmNiYzY4NmNkYTU0ZWM2OGRkYTE0Y2M1ZjY3NzQwMmU5ZTBmNTQwNzcyZjc3MGI5NmI0YTY4NzkxNioifV19&Signature=fZCXLc5VxfW8xKDcKK2R4CbPAVNAUgnMUjBrcrGNYebOaDMrsyjqs5bgKUcI9P67jYGLTowltxypbKrlA8UDfwKFtdT9GUtQXjAUNY%7ENjoKUacJXFuJtBeucdef5dnRu%7E4HC%7ECz2NJu69qCh0QNGNk9BH0D-83CptPhNuUHGZ%7EgT9F%7Ehe5RZR3bTSNg-6K6DSqx3JtxUu4-P5ZWSqwq4SuqAatm0019euel2wCciWc7HbYQ3b2XXkQAjLXvgLpuP-y-2JOkvG3SDJXoPamzH-wkKmBTdmCLxw%7ENHLi3F9w%7EfpSWZMn-61KR48g5E9LahtGPbRvti8Nvs-qES441DXA__&Key-Pair-Id=K2L8F4GPSG1IFC' (redirected from 'https://huggingface.co/datasets/cornhundred/Celldega_Xenium_Prime_Ovarian_Cancer_FFPE_XRun_outs_row_groups_image_chunk/resolve/main/Xenium_Prime_Ovarian_Cancer_FFPE_XRrun_outs/transcripts/chunk_0.parquet') from origin 'https://fjy31o3wnuf-496ff2e9c6d22116-0-colab.googleusercontent.com' has been blocked by CORS policy: Response to preflight request doesn't pass access control check: It does not have HTTP ok status.\n```"
] | 2026-01-03T04:23:54
| 2026-01-12T21:51:41
| null |
NONE
| null | null | null | null |
### Feature request
## Summary
Browser-based data tools need Range requests to read Parquet efficiently (footer + selected row groups). Downloads from the Hub redirect to cas-bridge.xethub.hf.co (Xet bridge). The redirected host fails CORS preflight for Range/HEAD workflows, blocking partial reads. ([Hugging Face](https://huggingface.co/blog/migrating-the-hub-to-xet)). See example [HuggingFace dataset](https://huggingface.co/datasets/cornhundred/Xenium_V1_human_Pancreas_FFPE_outs_row_groups/tree/main/Xenium_V1_human_Pancreas_FFPE_outs_row_groups)
## Current behavior
Plain GET works via redirect.
Range workflows fail with: “Response to preflight request doesn’t pass access control check: It does not have HTTP ok status.”
This blocks parquet-wasm and DuckDB-Wasm-style readers, which rely on HEAD + Range or non-safelisted Range patterns. ([GitHub](https://github.com/duckdb/duckdb-wasm/issues/1852))
## Expected behavior
OPTIONS to the final redirected host returns 200/204 (no redirect) with appropriate CORS headers; preflight responses must have an "ok" status. ([GitHub](https://github.com/whatwg/fetch/issues/1588))
GET with Range returns 206 Partial Content and includes CORS headers, plus exposes Content-Range, Accept-Ranges, and Content-Length so browser JS can consume them. ([MDN WebDocument](https://developer.mozilla.org/en-US/docs/Web/HTTP/Reference/Headers/Content-Range))
## Proposed CORS headers (public, anonymous files)
For responses from cas-bridge.xethub.hf.co (and any sibling Xet bridge hosts):
### Preflight (OPTIONS)
- `Access-Control-Allow-Origin: *`
- `Access-Control-Allow-Methods: GET, HEAD, OPTIONS`
- `Access-Control-Allow-Headers: Range, Content-Type` (or echo `Access-Control-Request-Headers`)
- `Access-Control-Max-Age: 86400` (optional, reduces preflight spam)
### Actual (GET/HEAD, including 206)
- `Access-Control-Allow-Origin: *`
- `Access-Control-Expose-Headers: Content-Range, Accept-Ranges, Content-Length`

Ensure `Accept-Ranges: bytes` and `Content-Range` are present for range responses. ([MDN WebDocument](https://developer.mozilla.org/en-US/docs/Web/HTTP/Reference/Headers/Accept-Ranges))
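A minimal sketch for verifying these headers end-to-end (assumes Python's `requests`; the URL is a placeholder for a signed cas-bridge URL):

```python
# Sketch: check preflight and Range behavior against the redirected host.
import requests

url = "https://cas-bridge.xethub.hf.co/xet-bridge-us/<signed-path>"  # placeholder

# 1. Preflight must answer 200/204 with CORS headers, without redirecting.
pre = requests.options(url, allow_redirects=False, headers={
    "Origin": "https://example.org",
    "Access-Control-Request-Method": "GET",
    "Access-Control-Request-Headers": "range",
})
assert pre.status_code in (200, 204), pre.status_code

# 2. A Range GET must answer 206 and expose Content-Range to browser JS.
got = requests.get(url, headers={"Range": "bytes=0-7", "Origin": "https://example.org"})
assert got.status_code == 206, got.status_code
assert "Content-Range" in got.headers
```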
### Notes on credentials (optional)
If any endpoint requires credentials, wildcard * cannot be used and the server must echo Origin and add Vary: Origin. ([MDN WebDocument](https://developer.mozilla.org/en-US/docs/Web/HTTP/Reference/Headers/Access-Control-Allow-Origin))
## Impact
This unblocks efficient browser analytics and visualization on HF-hosted datasets using Parquet row groups, DuckDB-Wasm, parquet-wasm, and similar tooling. DuckDB-Wasm documentation explicitly notes that remote data access requires correct CORS on the hosting site. ([DuckDB](https://duckdb.org/docs/stable/clients/wasm/extensions.html))
## High-quality references worth linking in the issue thread
- Hugging Face: redirect to cas-bridge.xethub.hf.co shown in the Xet migration blog ([Hugging Face](https://huggingface.co/blog/migrating-the-hub-to-xet))
- Fetch/CORS: preflight must have an "ok" status (200/204) ([GitHub](https://github.com/whatwg/fetch/issues/1588))
- Fetch/CORS: redirect + preflight is a known sharp edge ([GitHub](https://github.com/whatwg/fetch/issues/204))
- MDN CORS guide: Range safelist caveat ([MDN WebDocument](https://developer.mozilla.org/en-US/docs/Web/HTTP/Guides/CORS))
- MDN Range header: single-range is safelisted, multi-range may preflight ([MDN WebDocument](https://developer.mozilla.org/en-US/docs/Web/HTTP/Reference/Headers/Range))
- MDN Expose-Headers: non-safelisted headers must be exposed ([MDN WebDocument](https://developer.mozilla.org/en-US/docs/Web/HTTP/Reference/Headers/Access-Control-Expose-Headers))
- DuckDB-Wasm: remote HTTPFS requires correct CORS ([DuckDB](https://duckdb.org/docs/stable/clients/wasm/extensions.html))
- DuckDB-Wasm issue: HEAD blocked by CORS breaks the pipeline ([GitHub](https://github.com/duckdb/duckdb-wasm/issues/1852))
- pdf.js historical issues about Accept-Ranges/Content-Range exposure ([GitHub](https://github.com/mozilla/pdf.js/issues/3150))
## Key points
- The request is standard: browser Parquet readers need byte ranges.
- The redirect to cas-bridge.xethub.hf.co means CORS enforcement happens on the Xet bridge host. ([Hugging Face](https://huggingface.co/blog/migrating-the-hub-to-xet))
- The fix requires OPTIONS to return 200/204 with CORS headers, and 206 responses to include CORS plus the exposed headers. ([GitHub](https://github.com/whatwg/fetch/issues/1588))
- Similar failures exist across the pdf.js and DuckDB-Wasm ecosystems. ([GitHub](https://github.com/duckdb/duckdb-wasm/issues/1852))
### Motivation
I would like to be able to read subsets of large Parquet files via range requests with the parquet_wasm library on the front end. This is being used as part of a spatial data visualization project: https://github.com/broadinstitute/celldega
### Your contribution
I would be happy to provide code to make front-end range requests as an example.
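As a starting point, a rough sketch of the read pattern such a reader performs; it is written with Python's `requests` for illustration (browser readers do the same via `fetch`), and the URL is only an example:

```python
# Sketch: HEAD + Range workflow for reading a Parquet footer remotely.
import struct
import requests

url = "https://huggingface.co/datasets/<user>/<dataset>/resolve/main/data.parquet"  # example

# HEAD (or a 0-byte-range GET) to learn the total file size.
size = int(requests.head(url, allow_redirects=True).headers["Content-Length"])

# A Parquet file ends with: footer length (4 bytes, little-endian) + b"PAR1".
tail = requests.get(url, headers={"Range": f"bytes={size - 8}-{size - 1}"}).content
footer_len = struct.unpack("<I", tail[:4])[0]
assert tail[4:] == b"PAR1"

# Fetch only the footer metadata; row groups are then fetched on demand.
footer = requests.get(
    url, headers={"Range": f"bytes={size - 8 - footer_len}-{size - 9}"}
).content
```

Note that the HEAD call above redirects to cas-bridge; in a browser this is exactly the step that currently fails.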
| null |
{
"+1": 1,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 1,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7931/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/7931/timeline
| null | null |
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
|
{
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
}
| false
|
https://api.github.com/repos/huggingface/datasets/issues/7930
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/7930/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/7930/comments
|
https://api.github.com/repos/huggingface/datasets/issues/7930/events
|
https://github.com/huggingface/datasets/pull/7930
| 3,777,628,848
|
PR_kwDODunzps67WYwc
| 7,930
|
Proposal: Protein 3D Structure Visualization for Dataset Viewer
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/80390531?v=4",
"events_url": "https://api.github.com/users/behroozazarkhalili/events{/privacy}",
"followers_url": "https://api.github.com/users/behroozazarkhalili/followers",
"following_url": "https://api.github.com/users/behroozazarkhalili/following{/other_user}",
"gists_url": "https://api.github.com/users/behroozazarkhalili/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/behroozazarkhalili",
"id": 80390531,
"login": "behroozazarkhalili",
"node_id": "MDQ6VXNlcjgwMzkwNTMx",
"organizations_url": "https://api.github.com/users/behroozazarkhalili/orgs",
"received_events_url": "https://api.github.com/users/behroozazarkhalili/received_events",
"repos_url": "https://api.github.com/users/behroozazarkhalili/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/behroozazarkhalili/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/behroozazarkhalili/subscriptions",
"type": "User",
"url": "https://api.github.com/users/behroozazarkhalili",
"user_view_type": "public"
}
|
[] |
open
| false
| null |
[] | null |
[
"cc @georgia-hf - Following up on your question about protein visualization for the Dataset Viewer. This proposal recommends 3Dmol.js (~150KB gzipped) as a lightweight alternative to Mol* (~1.3MB gzipped).\n\nLooking forward to your feedback!",
"Exciting ! cc @cfahlgren1 @severo for the Viewer part\r\n\r\nFor the `datasets` part I'll leave my feedbacks in the PRs :)",
"I don't know the JS libraries, but indeed, the lighter the better, as we don't require advanced features.",
"From a quick look at the PDB and mmCIF PRs I noticed that the dataset has one row = one atom. However I humbly believe that such datasets would be more practical to use if one row = one structure. This way each row is independent, which is practical in ML to perform train/test splits or dataset shuffling.\r\n\r\nThis would also make it easier to add labels and metadata for each structure, similar to what we already for images. E.g. you could group them per folder named after a label, or you can have a metadata.parquet file to add custom metadata per structure.\r\n\r\nAnd this way in the Viewer it could show one 3D render per row.\r\n\r\nWhat do you think ?",
"@lhoestq @severo @georgia-hf I will be waiting for all your comments; then, I will start implementing the final plan. "
] | 2026-01-03T03:30:01
| 2026-01-09T18:33:10
| null |
NONE
| null | null | 0
|
{
"diff_url": "https://github.com/huggingface/datasets/pull/7930.diff",
"html_url": "https://github.com/huggingface/datasets/pull/7930",
"merged_at": null,
"patch_url": "https://github.com/huggingface/datasets/pull/7930.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/7930"
}
|
# Proposal: Protein 3D Structure Visualization for HuggingFace Dataset Viewer
## Executive Summary
This proposal outlines adding 3D protein structure visualization to the HuggingFace Dataset Viewer, enabling users to interactively view PDB and mmCIF molecular structures directly within the dataset preview interface.
---
## Data Type Support (Updated Architecture)
**Supported formats** (from recent PRs):
- **PDB** (PR #7926): `.pdb`, `.ent` extensions via `PdbFolder` builder
- **mmCIF** (PR #7925): `.cif`, `.mmcif` extensions via `MmcifFolder` builder
**New Implementation Pattern (One Row = One Structure)**:
Both PRs have been refactored to follow the **ImageFolder pattern**, where each row in the dataset contains one complete protein structure file. This is the recommended ML-friendly approach:
```python
>>> from datasets import load_dataset
>>> dataset = load_dataset("mmcif", data_dir="./structures")
>>> dataset[0]
{'structure': 'data_1ABC\n_entry.id 1ABC\n_atom_site...'} # Complete mmCIF content
>>> from datasets import load_dataset
>>> dataset = load_dataset("pdb", data_dir="./pdbs")
>>> dataset[0]
{'structure': 'HEADER PROTEIN 01-JAN-20 1ABC\nATOM...'} # Complete PDB content
```
**Key Components**:
- **ProteinStructure feature type**: New feature type supporting both PDB and mmCIF formats with lazy loading
- **PdbFolder builder** (PR #7926): Folder-based loader for PDB files with label and metadata support
- **MmcifFolder builder** (PR #7925): Folder-based loader for mmCIF files with label and metadata support
**What gets visualized**:
- 3D atomic coordinates (x, y, z)
- Chain structures
- Residue information
- Atom types and elements
- Secondary structure (helices, sheets)
**Not applicable** (1D sequence only):
- FASTA (PR #7923) - text sequences, no 3D coordinates
- FASTQ (PR #7924) - sequences with quality scores, no 3D coordinates
---
## Visualization Library Comparison
| Library | Bundle Size (minified) | Bundle Size (gzipped) | License | Pros | Cons |
|---------|------------------------|----------------------|---------|------|------|
| **3Dmol.js** | 512 KB | **~150 KB** | BSD-3 | Lightweight, easy integration, good docs | Fewer advanced features |
| **NGL Viewer** | 1.3 MB | ~350 KB | MIT | Excellent MMTF support, beautiful rendering | Moderate complexity |
| **Mol*** | 4.6 MB | ~1.3 MB | MIT | Industry standard, used by RCSB PDB, feature-rich | Heavy, complex |
| **PDBe Molstar** | 5.8 MB | ~1.6 MB | Apache 2.0 | EMBL-EBI maintained, simpler Mol* wrapper | Still very heavy |
*Bundle sizes verified by downloading actual distribution files from npm/CDN (January 2026)*
---
## Recommendation: 3Dmol.js
**Primary choice**: 3Dmol.js
**Rationale**:
1. **Bundle size**: ~150 KB gzipped - the lightest option by far, ideal for lazy loading
2. **Simple API**: Easy to integrate with React/Next.js
3. **BSD-3 License**: Compatible with HuggingFace licensing
4. **Active maintenance**: Regular updates, good community support
5. **Format support**: Native PDB and mmCIF parsing built-in
6. **Sufficient features**: Rotation, zoom, style switching (cartoon, stick, sphere)
**Why not Mol*?** As Georgia noted, Mol* is heavy (~1.3 MB gzipped). While it's the industry standard for RCSB PDB, it's overkill for a dataset preview where users just need to verify structure data looks correct.
**Alternative for power users**: If users need advanced features like density maps, ligand interactions, or sequence alignment overlay, consider PDBe Molstar as an optional "full viewer" mode.
---
## Summary
**Recommended approach**:
- Use **3Dmol.js** (~150 KB gzipped) with **lazy loading**
- Only loads when user views PDB/mmCIF datasets
- Simple integration, BSD-3 license, active community support
**Backend implementation** (Updated):
- PR #7925 (mmCIF): Uses **MmcifFolder** builder with **ProteinStructure** feature type
- PR #7926 (PDB): Uses **PdbFolder** builder with **ProteinStructure** feature type
- Both follow the **one-row-per-structure** pattern (like ImageFolder)
- Each row's `structure` column contains the complete file content ready for 3D rendering
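
For illustration, a minimal sketch of how a preview component could pull one row's structure text (assuming the `pdb` builder from PR #7926 lands as described):

```python
# Sketch: fetch one structure row; the string can be fed straight to a viewer.
from datasets import load_dataset

ds = load_dataset("pdb", data_dir="structures/", split="train")
pdb_text = ds[0]["structure"]  # complete PDB file content as one string

# On the JS side, 3Dmol.js accepts this text directly, e.g.
#   viewer.addModel(pdbText, "pdb"); viewer.setStyle({}, {cartoon: {}}); viewer.render();
```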
---
## Next Steps
1. Get feedback on this proposal
2. Create proof-of-concept in a standalone demo if needed
3. Integrate into dataset-viewer once approach is approved
| null |
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7930/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/7930/timeline
| null | null | null | null | true
|
https://api.github.com/repos/huggingface/datasets/issues/7929
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/7929/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/7929/comments
|
https://api.github.com/repos/huggingface/datasets/issues/7929/events
|
https://github.com/huggingface/datasets/pull/7929
| 3,776,098,655
|
PR_kwDODunzps67Rayd
| 7,929
|
Raise early for invalid `revision` in `load_dataset`
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/52365471?v=4",
"events_url": "https://api.github.com/users/Scott-Simmons/events{/privacy}",
"followers_url": "https://api.github.com/users/Scott-Simmons/followers",
"following_url": "https://api.github.com/users/Scott-Simmons/following{/other_user}",
"gists_url": "https://api.github.com/users/Scott-Simmons/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/Scott-Simmons",
"id": 52365471,
"login": "Scott-Simmons",
"node_id": "MDQ6VXNlcjUyMzY1NDcx",
"organizations_url": "https://api.github.com/users/Scott-Simmons/orgs",
"received_events_url": "https://api.github.com/users/Scott-Simmons/received_events",
"repos_url": "https://api.github.com/users/Scott-Simmons/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/Scott-Simmons/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Scott-Simmons/subscriptions",
"type": "User",
"url": "https://api.github.com/users/Scott-Simmons",
"user_view_type": "public"
}
|
[] |
closed
| false
| null |
[] | null |
[
"Passes\r\n```sh\r\npytest -k \"LoadTest and test_load_dataset_invalid_revision_with_cache\"\r\n```\r\n\r\nFails\r\n```sh\r\ngit checkout cc2399019a3a547ebc31ec68a1ff99abd4ec93ce\r\npytest -k \"LoadTest and test_load_dataset_invalid_revision_with_cache\"\r\n```\r\n\r\nRan `make test`, but failures look unrelated to the PR diff (same tests fail on `main` too)\r\n\r\n```sh\r\nFAILED tests/test_distributed.py::test_torch_distributed_run[False] - TypeError: Passing coroutines is forbidden...\r\nFAILED tests/test_distributed.py::test_torch_distributed_run[True] - TypeError: Passing coroutines is forbidden...\r\nFAILED tests/test_distributed.py::test_torch_distributed_run_streaming_with_num_workers[2-2] - TypeError: Passing coroutines is forbidden...\r\nFAILED tests/test_distributed.py::test_torch_distributed_run_streaming_with_num_workers[3-2] - TypeError: Passing coroutines is forbidden...\r\n= 4 failed, 3077 passed, 18 skipped, 491 warnings in 556.45s (0:09:16) =\r\nmake: *** [Makefile:20: test] Error 1\r\n```",
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_7929). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update."
] | 2026-01-02T10:40:49
| 2026-01-09T11:08:44
| 2026-01-09T11:08:43
|
CONTRIBUTOR
| null | null | 0
|
{
"diff_url": "https://github.com/huggingface/datasets/pull/7929.diff",
"html_url": "https://github.com/huggingface/datasets/pull/7929",
"merged_at": "2026-01-09T11:08:43Z",
"patch_url": "https://github.com/huggingface/datasets/pull/7929.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/7929"
}
|
Solves https://github.com/huggingface/datasets/issues/7928
Raise early for invalid revisions
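In practice (a sketch based on the repro in the linked issue):

```python
import datasets

# Before this change, an invalid revision silently fell back to any cached
# copy; now it raises early instead:
datasets.load_dataset(
    "sentientfutures/ahb", "dimensions", split="train", revision="invalid_revision"
)
# datasets.exceptions.DatasetNotFoundError: Revision 'invalid_revision' doesn't
# exist for dataset 'sentientfutures/ahb' on the Hub.
```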
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq",
"user_view_type": "public"
}
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7929/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/7929/timeline
| null | null | null | null | true
|
https://api.github.com/repos/huggingface/datasets/issues/7928
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/7928/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/7928/comments
|
https://api.github.com/repos/huggingface/datasets/issues/7928/events
|
https://github.com/huggingface/datasets/issues/7928
| 3,775,842,185
|
I_kwDODunzps7hDseJ
| 7,928
|
`load_dataset` `revision` param not respected when fetching from cache
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/52365471?v=4",
"events_url": "https://api.github.com/users/Scott-Simmons/events{/privacy}",
"followers_url": "https://api.github.com/users/Scott-Simmons/followers",
"following_url": "https://api.github.com/users/Scott-Simmons/following{/other_user}",
"gists_url": "https://api.github.com/users/Scott-Simmons/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/Scott-Simmons",
"id": 52365471,
"login": "Scott-Simmons",
"node_id": "MDQ6VXNlcjUyMzY1NDcx",
"organizations_url": "https://api.github.com/users/Scott-Simmons/orgs",
"received_events_url": "https://api.github.com/users/Scott-Simmons/received_events",
"repos_url": "https://api.github.com/users/Scott-Simmons/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/Scott-Simmons/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Scott-Simmons/subscriptions",
"type": "User",
"url": "https://api.github.com/users/Scott-Simmons",
"user_view_type": "public"
}
|
[] |
open
| false
| null |
[] | null |
[
"This might be better placed as a feature request not a bug, since the logging `Using the latest cached version of the dataset since sentientfutures/ahb couldn't be found on the Hugging Face Hub` is clear.",
"https://github.com/huggingface/datasets/pull/7929 This only solves the case of invalid revisions. Fetching a specific revision from the cache would be more work but I think this is a good start and solves issues like https://github.com/UKGovernmentBEIS/inspect_evals/pull/834#issuecomment-3704689637"
] | 2026-01-02T08:20:47
| 2026-01-07T07:50:40
| null |
CONTRIBUTOR
| null | null | null | null |
### Describe the bug
`datasets.load_dataset` `revision` semantics are a bit inconsistent when the dataset is not found on the huggingface hub. When fetching the latest cached version of the dataset, the `revision` argument is ignored, so long as any cached versions of the dataset already exist in the HF cache.
### Steps to reproduce the bug
```python
import datasets
datasets.load_dataset(
"sentientfutures/ahb",
"dimensions",
split="train",
revision="main"
)
# would expect some error to be raised here
datasets.load_dataset(
"sentientfutures/ahb",
"dimensions",
split="train",
revision="invalid_revision"
)
```
### Expected behavior
On the second call to `datasets.load_dataset` in the 'steps to reproduce the bug' example, expect something like:
```
datasets.exceptions.DatasetNotFoundError: Revision 'invalid_revision' doesn't exist for dataset 'sentientfutures/ahb' on the Hub.
```
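Until then, a possible caller-side guard (a sketch; assumes `huggingface_hub` is installed and that `RevisionNotFoundError` is importable as below):

```python
# Sketch: validate the revision against the Hub before any cache fallback.
from huggingface_hub import HfApi
from huggingface_hub.utils import RevisionNotFoundError

def check_revision(repo_id: str, revision: str) -> None:
    try:
        HfApi().dataset_info(repo_id, revision=revision)
    except RevisionNotFoundError:
        raise ValueError(f"Revision {revision!r} doesn't exist for dataset {repo_id!r}")

check_revision("sentientfutures/ahb", "invalid_revision")  # raises
```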
### Environment info
- `datasets` version: 4.4.1
- Platform: Linux-6.2.0-39-generic-x86_64-with-glibc2.37
- Python version: 3.12.12
- `huggingface_hub` version: 0.36.0
- PyArrow version: 22.0.0
- Pandas version: 2.2.3
- `fsspec` version: 2025.9.0
| null |
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7928/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/7928/timeline
| null | null |
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
|
{
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
}
| false
|
https://api.github.com/repos/huggingface/datasets/issues/7927
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/7927/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/7927/comments
|
https://api.github.com/repos/huggingface/datasets/issues/7927/events
|
https://github.com/huggingface/datasets/issues/7927
| 3,775,302,438
|
I_kwDODunzps7hBosm
| 7,927
|
Using Stateful Dataloader with Split Dataset By Node and DCP for DDP
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/25208228?v=4",
"events_url": "https://api.github.com/users/conceptofmind/events{/privacy}",
"followers_url": "https://api.github.com/users/conceptofmind/followers",
"following_url": "https://api.github.com/users/conceptofmind/following{/other_user}",
"gists_url": "https://api.github.com/users/conceptofmind/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/conceptofmind",
"id": 25208228,
"login": "conceptofmind",
"node_id": "MDQ6VXNlcjI1MjA4MjI4",
"organizations_url": "https://api.github.com/users/conceptofmind/orgs",
"received_events_url": "https://api.github.com/users/conceptofmind/received_events",
"repos_url": "https://api.github.com/users/conceptofmind/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/conceptofmind/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/conceptofmind/subscriptions",
"type": "User",
"url": "https://api.github.com/users/conceptofmind",
"user_view_type": "public"
}
|
[] |
open
| false
| null |
[] | null |
[
"Does it need to be pickled?\n\n```python\n def load_state_dict(self, state_dict):\n hf_state = pickle.loads(state_dict[\"data\"])\n self.train_dataset.load_state_dict(hf_state)\n\n def state_dict(self):\n return {\"data\": pickle.dumps(self.train_dataset.state_dict())}\n```",
"Pickling seems to have resolved the issue but it is not clear at all to me why this is necessary"
] | 2026-01-01T22:27:07
| 2026-01-02T02:48:21
| null |
NONE
| null | null | null | null |
### Describe the bug
I am trying to determine how to save and load the StatefulDataLoader state with DCP and `split_dataset_by_node` for DDP.
Currently I am running into a slow resume, with this warning:
```
Neither dataset nor iter(dataset) defines state_dict/load_state_dict so we are naively fast-forwarding your dataset by 5000 steps. For more efficient resumes, please implement `state_dict` and `load_state_dict` in your IterableDataset and/or iterator.
```
### Steps to reproduce the bug
Say we have a streaming dataset:
```python
class StreamingDataset(IterableDataset):
def __init__(
self,
path: str,
tokenizer: AutoTokenizer,
name: Optional[str] = None,
split: str = "train",
max_length: int = 2048,
ddp_rank: int = 0,
ddp_world_size: int = 1,
):
dataset = load_dataset(path, name, split=split, streaming=True)
self.train_dataset = split_dataset_by_node(
dataset=dataset, rank=ddp_rank, world_size=ddp_world_size
)
self.tokenizer = tokenizer
self.max_length = max_length
def __iter__(self):
for sample in iter(self.train_dataset):
tokenized = self.tokenizer(
sample["text"],
padding="max_length",
truncation=True,
max_length=self.max_length,
return_special_tokens_mask=True,
)
yield tokenized
```
We load that dataset into the Stateful Dataloader:
```python
trainloader = StatefulDataLoader(
dataset=train_dataset,
batch_size=args.batch_size,
collate_fn=data_collator,
)
```
We then have code for checkpointing and resuming the state using DCP:
```python
import os
from typing import Optional
import torch
import torch.distributed as dist
import torch.distributed.checkpoint as dcp
from torch.distributed.checkpoint.format_utils import dcp_to_torch_save
from torch.distributed.checkpoint.state_dict import get_state_dict, set_state_dict
from blitzbert.utils import print_rank_0
class Checkpoint:
def __init__(
self,
model: torch.nn.Module,
optimizer: torch.optim.Optimizer,
trainloader,
step: Optional[int] = None,
epoch: Optional[int] = None,
):
self.model = model
self.optimizer = optimizer
self.trainloader = trainloader
self.step = step
self.epoch = epoch
def get_state_dict(self) -> dict:
model_state_dict, optimizer_state_dict = get_state_dict(
self.model, self.optimizer
)
return {
"model": model_state_dict,
"optim": optimizer_state_dict,
"trainloader": self.trainloader.state_dict(),
"step": self.step,
"epoch": self.epoch,
}
def save_checkpoint(
args,
model,
optimizer,
trainloader,
step: Optional[int] = None,
epoch: Optional[int] = None,
final_checkpoint: bool = False,
):
checkpointer = Checkpoint(
model=model,
optimizer=optimizer,
trainloader=trainloader,
step=step,
epoch=epoch,
)
state_dict = checkpointer.get_state_dict()
if final_checkpoint:
print_rank_0("Saving final model")
save_path = os.path.join(args.checkpoint_dir, "final_model")
dcp.save(state_dict, checkpoint_id=save_path)
dist.barrier()
single_file_path = os.path.join(args.checkpoint_dir, "final_checkpoint.pth")
dcp_to_torch_save(save_path, single_file_path)
else:
if step % args.checkpointing_steps == 0 and step != 0:
print_rank_0(f"Saving model at step: {step}")
save_path = os.path.join(args.checkpoint_dir, f"epoch_{epoch}_step_{step}")
dcp.save(state_dict, checkpoint_id=save_path)
dist.barrier()
def load_checkpoint(args, model, optimizer, trainloader):
if not args.resume_from_checkpoint:
return 0, 0
checkpoint_path = args.resume_from_checkpoint
print_rank_0(f"Resumed from checkpoint: {checkpoint_path}")
checkpointer = Checkpoint(
model=model,
optimizer=optimizer,
trainloader=trainloader,
)
state_dict = checkpointer.get_state_dict()
dcp.load(
state_dict=state_dict,
checkpoint_id=checkpoint_path,
)
set_state_dict(
model,
optimizer,
model_state_dict=state_dict["model"],
optim_state_dict=state_dict["optim"],
)
trainloader.load_state_dict(state_dict["trainloader"])
step = state_dict["step"]
epoch = state_dict["epoch"]
return step, epoch
```
and then loading the checkpoint:
```python
completed_steps, current_epoch = load_checkpoint(
args=args, model=model, optimizer=optimizer, trainloader=trainloader
)
```
### Expected behavior
If I implement what the warning says:
```python
def state_dict(self):
return self.train_dataset.state_dict()
def load_state_dict(self, state):
self.train_dataset.load_state_dict(state)
```
I then get:
```
[rank0]: raise RuntimeError(f"Missing key in checkpoint state_dict: {fqn}.")
[rank0]: RuntimeError: Missing key in checkpoint state_dict: trainloader.dataset_state.examples_iterable.examples_iterable.previous_state.
```
How exactly should one save and resume the StatefulDataLoader with Hugging Face datasets?
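One workaround suggested in the thread is to wrap the nested Hugging Face state in `pickle`, so that DCP stores it as a single flat entry whose keys cannot drift between save and resume (sketch, extending the `StreamingDataset` class above):

```python
import pickle

class StreamingDataset(IterableDataset):
    ...  # __init__ and __iter__ as defined above

    # DCP requires state_dict keys to match between save and load; the nested
    # HF dataset state can change shape across resumes, so store it as one blob.
    def state_dict(self):
        return {"data": pickle.dumps(self.train_dataset.state_dict())}

    def load_state_dict(self, state_dict):
        self.train_dataset.load_state_dict(pickle.loads(state_dict["data"]))
```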
### Environment info
"datasets>=4.4.1",
| null |
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7927/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/7927/timeline
| null | null |
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
|
{
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
}
| false
|
https://api.github.com/repos/huggingface/datasets/issues/7926
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/7926/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/7926/comments
|
https://api.github.com/repos/huggingface/datasets/issues/7926/events
|
https://github.com/huggingface/datasets/pull/7926
| 3,773,696,472
|
PR_kwDODunzps67Jxxz
| 7,926
|
Add lightweight PDB (Protein Data Bank) file support
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/80390531?v=4",
"events_url": "https://api.github.com/users/behroozazarkhalili/events{/privacy}",
"followers_url": "https://api.github.com/users/behroozazarkhalili/followers",
"following_url": "https://api.github.com/users/behroozazarkhalili/following{/other_user}",
"gists_url": "https://api.github.com/users/behroozazarkhalili/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/behroozazarkhalili",
"id": 80390531,
"login": "behroozazarkhalili",
"node_id": "MDQ6VXNlcjgwMzkwNTMx",
"organizations_url": "https://api.github.com/users/behroozazarkhalili/orgs",
"received_events_url": "https://api.github.com/users/behroozazarkhalili/received_events",
"repos_url": "https://api.github.com/users/behroozazarkhalili/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/behroozazarkhalili/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/behroozazarkhalili/subscriptions",
"type": "User",
"url": "https://api.github.com/users/behroozazarkhalili",
"user_view_type": "public"
}
|
[] |
open
| false
| null |
[] | null |
[] | 2025-12-31T21:01:04
| 2026-01-09T19:22:17
| null |
NONE
| null | null | 0
|
{
"diff_url": "https://github.com/huggingface/datasets/pull/7926.diff",
"html_url": "https://github.com/huggingface/datasets/pull/7926",
"merged_at": null,
"patch_url": "https://github.com/huggingface/datasets/pull/7926.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/7926"
}
|
## Summary
This PR adds support for loading PDB (Protein Data Bank) files with `load_dataset()`, following the **ImageFolder pattern** where **one row = one structure**.
Based on feedback from @lhoestq in #7930, this approach makes datasets more practical for ML workflows:
- Each row is independent, enabling train/test splits and shuffling
- Easy to add labels (folder-based) and metadata (metadata.jsonl)
- Compatible with Dataset Viewer (one 3D render per row)
### Architecture
Uses `FolderBasedBuilder` pattern (like `ImageFolder`, `AudioFolder`):
```python
class PdbFolder(FolderBasedBuilder):
BASE_FEATURE = ProteinStructure
BASE_COLUMN_NAME = "structure"
EXTENSIONS = [".pdb", ".ent"]
```
### New `ProteinStructure` Feature Type
```python
# Arrow schema for lazy loading
pa.struct({"bytes": pa.binary(), "path": pa.string()})
# Decoded: returns structure file content as string
dataset = load_dataset("pdb", data_dir="structures/")
print(dataset[0]["structure"]) # Full PDB file content
```
### Supported Extensions
`.pdb`, `.ent`
### Usage
```python
from datasets import load_dataset
# Load from directory
dataset = load_dataset("pdb", data_dir="protein_structures/")
# Load with folder-based labels
# structures/
# enzymes/
# 1abc.pdb
# receptors/
# 2def.pdb
dataset = load_dataset("pdb", data_dir="structures/")
print(dataset[0]) # {"structure": "HEADER...", "label": "enzymes"}
# Load with metadata
# structures/
# 1abc.pdb
# metadata.jsonl # {"file_name": "1abc.pdb", "resolution": 2.5}
dataset = load_dataset("pdb", data_dir="structures/")
print(dataset[0]) # {"structure": "HEADER...", "resolution": 2.5}
# Drop labels or metadata
dataset = load_dataset("pdb", data_dir="structures/", drop_labels=True)
dataset = load_dataset("pdb", data_dir="structures/", drop_metadata=True)
```
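Because each row is one independent structure, the usual `datasets` operations compose naturally; for example (a sketch, assuming the builder above):

```python
# Sketch: shuffle and split structures like any other dataset.
ds = load_dataset("pdb", data_dir="structures/", split="train")
splits = ds.shuffle(seed=42).train_test_split(test_size=0.2)
train, test = splits["train"], splits["test"]
```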
### Test Results
All 28 PDB tests + 15 ProteinStructure feature tests pass.
### Related PRs
- #7925 - mmCIF support (same pattern)
- #7930 - Protein 3D visualization proposal
cc @lhoestq @georgia-hf
| null |
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7926/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/7926/timeline
| null | null | null | null | true
|
https://api.github.com/repos/huggingface/datasets/issues/7925
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/7925/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/7925/comments
|
https://api.github.com/repos/huggingface/datasets/issues/7925/events
|
https://github.com/huggingface/datasets/pull/7925
| 3,773,577,850
|
PR_kwDODunzps67JW3g
| 7,925
|
feat: Add mmCIF file support for macromolecular structures
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/80390531?v=4",
"events_url": "https://api.github.com/users/behroozazarkhalili/events{/privacy}",
"followers_url": "https://api.github.com/users/behroozazarkhalili/followers",
"following_url": "https://api.github.com/users/behroozazarkhalili/following{/other_user}",
"gists_url": "https://api.github.com/users/behroozazarkhalili/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/behroozazarkhalili",
"id": 80390531,
"login": "behroozazarkhalili",
"node_id": "MDQ6VXNlcjgwMzkwNTMx",
"organizations_url": "https://api.github.com/users/behroozazarkhalili/orgs",
"received_events_url": "https://api.github.com/users/behroozazarkhalili/received_events",
"repos_url": "https://api.github.com/users/behroozazarkhalili/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/behroozazarkhalili/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/behroozazarkhalili/subscriptions",
"type": "User",
"url": "https://api.github.com/users/behroozazarkhalili",
"user_view_type": "public"
}
|
[] |
open
| false
| null |
[] | null |
[] | 2025-12-31T20:11:32
| 2026-01-09T19:22:55
| null |
NONE
| null | null | 0
|
{
"diff_url": "https://github.com/huggingface/datasets/pull/7925.diff",
"html_url": "https://github.com/huggingface/datasets/pull/7925",
"merged_at": null,
"patch_url": "https://github.com/huggingface/datasets/pull/7925.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/7925"
}
|
## Summary
This PR adds support for loading mmCIF (macromolecular Crystallographic Information File) files with `load_dataset()`, following the **ImageFolder pattern** where **one row = one structure**.
Based on feedback from @lhoestq in #7930, this approach makes datasets more practical for ML workflows:
- Each row is independent, enabling train/test splits and shuffling
- Easy to add labels (folder-based) and metadata (metadata.jsonl)
- Compatible with Dataset Viewer (one 3D render per row)
### Architecture
Uses `FolderBasedBuilder` pattern (like `ImageFolder`, `AudioFolder`):
```python
class MmcifFolder(FolderBasedBuilder):
BASE_FEATURE = ProteinStructure
BASE_COLUMN_NAME = "structure"
EXTENSIONS = [".cif", ".mmcif"]
```
### New `ProteinStructure` Feature Type
```python
# Arrow schema for lazy loading
pa.struct({"bytes": pa.binary(), "path": pa.string()})
# Decoded: returns structure file content as string
dataset = load_dataset("mmcif", data_dir="structures/")
print(dataset[0]["structure"]) # Full mmCIF file content
```
### Supported Extensions
`.cif`, `.mmcif`
### Usage
```python
from datasets import load_dataset
# Load from directory
dataset = load_dataset("mmcif", data_dir="protein_structures/")
# Load with folder-based labels
# structures/
# enzymes/
# 1abc.cif
# receptors/
# 2def.cif
dataset = load_dataset("mmcif", data_dir="structures/")
print(dataset[0]) # {"structure": "data_...", "label": "enzymes"}
# Load with metadata
# structures/
# 1abc.cif
# metadata.jsonl # {"file_name": "1abc.cif", "resolution": 2.5}
dataset = load_dataset("mmcif", data_dir="structures/")
print(dataset[0]) # {"structure": "data_...", "resolution": 2.5}
# Drop labels or metadata
dataset = load_dataset("mmcif", data_dir="structures/", drop_labels=True)
dataset = load_dataset("mmcif", data_dir="structures/", drop_metadata=True)
```
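With per-structure metadata in place, filtering works like any other column; for example, keeping only high-resolution structures (a sketch reusing the `resolution` field from the metadata example above):

```python
# Sketch: filter structures by a metadata column.
ds = load_dataset("mmcif", data_dir="structures/", split="train")
high_res = ds.filter(lambda ex: ex["resolution"] is not None and ex["resolution"] <= 2.5)
```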
### Test Results
All 24 mmCIF tests + 15 ProteinStructure feature tests pass.
### Related PRs
- #7926 - PDB support (same pattern)
- #7930 - Protein 3D visualization proposal
### References
- mmCIF specification: https://mmcif.wwpdb.org/
- PDB archive: https://www.rcsb.org/
cc @lhoestq @georgia-hf
| null |
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7925/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/7925/timeline
| null | null | null | null | true
|
https://api.github.com/repos/huggingface/datasets/issues/7924
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/7924/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/7924/comments
|
https://api.github.com/repos/huggingface/datasets/issues/7924/events
|
https://github.com/huggingface/datasets/pull/7924
| 3,773,509,771
|
PR_kwDODunzps67JHNF
| 7,924
|
Add lightweight FASTQ file format support
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/80390531?v=4",
"events_url": "https://api.github.com/users/behroozazarkhalili/events{/privacy}",
"followers_url": "https://api.github.com/users/behroozazarkhalili/followers",
"following_url": "https://api.github.com/users/behroozazarkhalili/following{/other_user}",
"gists_url": "https://api.github.com/users/behroozazarkhalili/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/behroozazarkhalili",
"id": 80390531,
"login": "behroozazarkhalili",
"node_id": "MDQ6VXNlcjgwMzkwNTMx",
"organizations_url": "https://api.github.com/users/behroozazarkhalili/orgs",
"received_events_url": "https://api.github.com/users/behroozazarkhalili/received_events",
"repos_url": "https://api.github.com/users/behroozazarkhalili/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/behroozazarkhalili/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/behroozazarkhalili/subscriptions",
"type": "User",
"url": "https://api.github.com/users/behroozazarkhalili",
"user_view_type": "public"
}
|
[] |
open
| false
| null |
[] | null |
[] | 2025-12-31T19:46:42
| 2026-01-10T01:24:56
| null |
NONE
| null | null | 0
|
{
"diff_url": "https://github.com/huggingface/datasets/pull/7924.diff",
"html_url": "https://github.com/huggingface/datasets/pull/7924",
"merged_at": null,
"patch_url": "https://github.com/huggingface/datasets/pull/7924.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/7924"
}
|
## Summary
This PR adds support for loading FASTQ files directly with `load_dataset()`.
FASTQ is an extension of FASTA that includes quality scores for each base, widely used for storing output from high-throughput sequencing instruments.
### Key Features
- **Zero external dependencies** - Pure Python parser based on [readfq.py](https://github.com/lh3/readfq) by Heng Li
- **Quality score support** - Preserves per-base quality scores as ASCII-encoded strings
- **Streaming support** - Generator-based parsing for memory efficiency with large NGS files
- **Compression support** - Automatic detection of gzip, bzip2, and xz compressed files
- **Large sequence support** - Uses `large_string` for both sequence and quality columns
- **Parquet-safe batching** - Dual-threshold batching (batch_size + max_batch_bytes) prevents page size errors
### Columns
| Column | Type | Description |
|--------|------|-------------|
| `id` | string | Sequence identifier (first word after `@`) |
| `description` | string | Full description line (everything after id) |
| `sequence` | large_string | The nucleotide sequence |
| `quality` | large_string | ASCII-encoded quality scores (Phred+33 by default) |
### Supported Extensions
`.fq`, `.fastq` (and compressed variants: `.fq.gz`, `.fastq.gz`, `.fq.bz2`, `.fq.xz`)
### Usage
```python
from datasets import load_dataset
# Load FASTQ file
dataset = load_dataset("fastq", data_files="reads.fastq")
# Load gzipped file
dataset = load_dataset("fastq", data_files="reads.fq.gz")
# Filter columns
dataset = load_dataset("fastq", data_files="reads.fq", columns=["sequence", "quality"])
```
### Quality Score Format
Quality scores use Sanger/Illumina 1.8+ encoding (Phred+33):
- ASCII character `!` (33) = quality 0
- ASCII character `I` (73) = quality 40
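
Decoding the ASCII quality string back to numeric Phred scores is straightforward (a sketch):

```python
# Sketch: Phred+33 decoding of one record's quality string.
def phred33(quality: str) -> list[int]:
    return [ord(c) - 33 for c in quality]

assert phred33("!I") == [0, 40]
```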
### Testing
- 22 comprehensive tests covering basic loading, multi-line sequences, compression, batching, schema types, and edge cases
- All tests passing
- Linting clean
### References
- Follows pattern established in #7923 (FASTA support)
- Parser based on: https://github.com/lh3/readfq
- Addresses feedback from #7851
cc: @georgia-hf
| null |
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7924/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/7924/timeline
| null | null | null | null | true
|
https://api.github.com/repos/huggingface/datasets/issues/7923
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/7923/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/7923/comments
|
https://api.github.com/repos/huggingface/datasets/issues/7923/events
|
https://github.com/huggingface/datasets/pull/7923
| 3,773,472,998
|
PR_kwDODunzps67I-y3
| 7,923
|
feat(fasta): add lightweight FASTA file format support
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/80390531?v=4",
"events_url": "https://api.github.com/users/behroozazarkhalili/events{/privacy}",
"followers_url": "https://api.github.com/users/behroozazarkhalili/followers",
"following_url": "https://api.github.com/users/behroozazarkhalili/following{/other_user}",
"gists_url": "https://api.github.com/users/behroozazarkhalili/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/behroozazarkhalili",
"id": 80390531,
"login": "behroozazarkhalili",
"node_id": "MDQ6VXNlcjgwMzkwNTMx",
"organizations_url": "https://api.github.com/users/behroozazarkhalili/orgs",
"received_events_url": "https://api.github.com/users/behroozazarkhalili/received_events",
"repos_url": "https://api.github.com/users/behroozazarkhalili/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/behroozazarkhalili/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/behroozazarkhalili/subscriptions",
"type": "User",
"url": "https://api.github.com/users/behroozazarkhalili",
"user_view_type": "public"
}
|
[] |
open
| false
| null |
[] | null |
[] | 2025-12-31T19:33:00
| 2026-01-10T01:13:05
| null |
NONE
| null | null | 0
|
{
"diff_url": "https://github.com/huggingface/datasets/pull/7923.diff",
"html_url": "https://github.com/huggingface/datasets/pull/7923",
"merged_at": null,
"patch_url": "https://github.com/huggingface/datasets/pull/7923.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/7923"
}
|
## Summary
This PR adds support for loading FASTA files directly with `load_dataset()`, addressing feedback from #7851.
FASTA is a text-based format for representing nucleotide sequences (DNA/RNA) or peptide sequences (proteins), widely used in bioinformatics.
## Key Features
- **Zero external dependencies** - Uses a lightweight pure Python parser based on [readfq.py](https://github.com/lh3/readfq) by Heng Li
- **Streaming support** - Generator-based parsing for memory efficiency with large genomic files
- **Compression support** - Automatic detection and handling of gzip, bzip2, and xz compressed files via magic bytes
- **Large sequence support** - Uses `large_string` Arrow type to handle viral genomes and long sequences (fixes UTF-8 overflow)
- **Adaptive batching** - `max_batch_bytes` parameter (default 256MB) prevents Parquet page size errors with very large sequences
## Technical Decisions (Addressing #7851 Feedback)
| Concern | Solution |
|---------|----------|
| Long sequences → UTF-8 overflow (@apcamargo, @UriNeri) | Uses `pa.large_string()` for sequence column |
| BioPython is overkill (@apcamargo) | Pure Python parser based on Heng Li's readfq.py |
| Parquet page size limit i32::MAX (@UriNeri) | Adaptive dual-threshold batching with `max_batch_bytes` |
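
The dual-threshold idea in simplified form (a conceptual sketch, not the PR's exact implementation):

```python
# Sketch: flush a batch when either the row count or the byte budget is hit,
# so no single batch can exceed Parquet's page size limits.
def batched(records, batch_size=10_000, max_batch_bytes=256 * 1024 * 1024):
    batch, nbytes = [], 0
    for rec in records:
        batch.append(rec)
        nbytes += len(rec["sequence"])
        if len(batch) >= batch_size or nbytes >= max_batch_bytes:
            yield batch
            batch, nbytes = [], 0
    if batch:
        yield batch
```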
## Columns
| Column | Type | Description |
|--------|------|-------------|
| `id` | string | Sequence identifier (first word after `>`) |
| `description` | string | Full description line (everything after id) |
| `sequence` | large_string | The biological sequence (DNA/RNA/protein) |
## Supported Extensions
`.fa`, `.fasta`, `.fna`, `.ffn`, `.faa`, `.frn` (and compressed variants)
## Usage
```python
from datasets import load_dataset
# Load FASTA file
dataset = load_dataset("fasta", data_files="sequences.fasta")
# Load with column filtering
dataset = load_dataset("fasta", data_files="sequences.fa", columns=["id", "sequence"])
# Load gzipped file
dataset = load_dataset("fasta", data_files="sequences.fa.gz")
# Configure batching for very large genomes
dataset = load_dataset("fasta", data_files="genome.fasta", max_batch_bytes=128*1024*1024)
```
## Test Plan
- [x] Basic FASTA loading (3 sequences, multi-line)
- [x] Multiple extension support (.fa, .fasta, .fna, .ffn, .faa, .frn)
- [x] Compression formats (gzip, bz2, xz)
- [x] Long sequences with `large_string` type
- [x] Column filtering
- [x] Batch size configuration
- [x] Byte-based batching (`max_batch_bytes`)
- [x] Large genome handling (simulated 50KB sequences)
- [x] Empty description handling
- [x] Multiple files loading
- [x] Custom feature casting
All 22 tests passing.
cc: @georgia-hf
| null |
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7923/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/7923/timeline
| null | null | null | null | true
|
https://api.github.com/repos/huggingface/datasets/issues/7922
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/7922/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/7922/comments
|
https://api.github.com/repos/huggingface/datasets/issues/7922/events
|
https://github.com/huggingface/datasets/issues/7922
| 3,772,247,021
|
I_kwDODunzps7g1-vt
| 7,922
|
Support Apache TsFile Datasets
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/7240743?v=4",
"events_url": "https://api.github.com/users/qiaojialin/events{/privacy}",
"followers_url": "https://api.github.com/users/qiaojialin/followers",
"following_url": "https://api.github.com/users/qiaojialin/following{/other_user}",
"gists_url": "https://api.github.com/users/qiaojialin/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/qiaojialin",
"id": 7240743,
"login": "qiaojialin",
"node_id": "MDQ6VXNlcjcyNDA3NDM=",
"organizations_url": "https://api.github.com/users/qiaojialin/orgs",
"received_events_url": "https://api.github.com/users/qiaojialin/received_events",
"repos_url": "https://api.github.com/users/qiaojialin/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/qiaojialin/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/qiaojialin/subscriptions",
"type": "User",
"url": "https://api.github.com/users/qiaojialin",
"user_view_type": "public"
}
|
[
{
"color": "a2eeef",
"default": true,
"description": "New feature or request",
"id": 1935892871,
"name": "enhancement",
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement"
}
] |
open
| false
| null |
[] | null |
[
"A large quantity of industrial timeseries data has been stored as TsFile, and I have been constantly hearing about AI fellows complaining about the lack of data or the insufficiency of data quality.\n\nI like the ambition that uses TsFile as the bridge between AI research and industrial analysis requirements. This may help both sides improve their works with high-quality data and realtime data access.",
"It will be so convenient to have such a method to directly load tsfile into memory for further analysis.",
"Looking forward to see the tsfile become the part of the AI eco-systems.",
"Looking forward to the support for TsFile format!",
"Hey folks! I’ve added TsFile support by following the existing HDF5/Parquet patterns.\n\nThis includes:\n\nA TsFile builder with schema inference from file metadata\n\nTime-range filtering and column selection\n\nMemory-efficient reading using the tsfile library’s iterator API\n\n11 tests, all passing ✅\n\nI’ll be opening a PR shortly, would love any suggestions or feedback you might have!"
] | 2025-12-31T08:07:51
| 2026-01-05T08:23:21
| null |
NONE
| null | null | null | null |
### Feature request
I would love to use the Hugging Face datasets library to directly load datasets composed of .tsfile files, for example:
`ds = load_dataset("username/dataset-with-tsfile-files")`
This feature would allow researchers working on time-series tasks to seamlessly integrate datasets stored in the Apache TsFile format into the Hugging Face ecosystem.
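Until native support lands, a conversion path might look like the sketch below; `read_tsfile_to_pandas` is a hypothetical placeholder for whatever the `tsfile` Python API actually provides:

```python
# Sketch: bridge TsFile -> datasets by going through pandas.
import pandas as pd
from datasets import Dataset

def read_tsfile_to_pandas(path: str) -> pd.DataFrame:
    # HYPOTHETICAL helper: substitute the real Apache TsFile Python reader here.
    raise NotImplementedError

df = read_tsfile_to_pandas("sensors.tsfile")
ds = Dataset.from_pandas(df)
```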
### Motivation
[Apache TsFile](https://tsfile.apache.org/) is a mature Apache project and a dedicated file format designed for efficient time-series data storage and retrieval. The repository is [here](https://github.com/apache/tsfile).
It has been widely adopted in the IoT community and serves as the underlying storage format for projects like [Apache IoTDB](https://iotdb.apache.org/).
Apache TsFile has the following advantages in the time-series area:
- Time-series native schema. Time-series data is organized by device and sensor IDs.
- A complete multi-language API (Python, Java, C++, C) for reading and writing TsFile files.
- Superior write throughput and query efficiency.
- High compression ratio through per-series encoding and compression schemes.
- Efficient dataset transformation. ETL-free file compaction and efficient random access to time-series chunks, enabling faster data loading and lower query latency.
These properties make TsFile highly suitable for time-series model training, especially where time-series random access and efficient I/O are critical.
More details can be referred from this paper “[Apache TsFile: An IoT-native Time Series File Format (VLDB 2024)](https://www.vldb.org/pvldb/vol17/p4064-song.pdf)”.
Integrating TsFile support into datasets will benefit the broader machine learning community working on tasks such as forecasting and anomaly detection.
### Your contribution
As a member of the TsFile community, I recently initiated a [proposal](https://lists.apache.org/thread/119vc9nh03dz4583cx9fwt83fp8v68vy) to integrate TsFile with Hugging Face, which has received enthusiastic responses from the community.
We are willing to do the following contributions:
- Implement and contribute the PR that adds TsFile dataset support to Hugging Face datasets.
- Provide long-term maintenance for this integration.
- Any other needs for TsFile to support large-scale time-series datasets.
We are excited to contribute and to keep participating in the future evolution of TsFile and `datasets` to better support time-series workloads.
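Until native support lands, here is a hedged sketch of the kind of bridge this proposal envisions. `read_tsfile_as_dataframe` is a hypothetical helper standing in for the Apache TsFile Python bindings, whose exact API this sketch deliberately does not pin down; only the `datasets`/pandas side is real.
```python
import pandas as pd
from datasets import Dataset

def read_tsfile_as_dataframe(path: str) -> pd.DataFrame:
    """Hypothetical helper: decode a .tsfile into tabular rows using the
    Apache TsFile Python bindings (API not pinned down here)."""
    # Stub data standing in for real TsFile contents:
    return pd.DataFrame(
        {"device": ["d1", "d1"], "sensor": ["s1", "s1"],
         "timestamp": [0, 1], "value": [0.5, 0.7]}
    )

# Hand the decoded time series to the HF ecosystem.
ds = Dataset.from_pandas(read_tsfile_as_dataframe("data/device1.tsfile"))
print(ds)
```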
| null |
{
"+1": 6,
"-1": 0,
"confused": 0,
"eyes": 4,
"heart": 4,
"hooray": 4,
"laugh": 0,
"rocket": 6,
"total_count": 24,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7922/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/7922/timeline
| null | null |
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
|
{
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
}
| false
|
https://api.github.com/repos/huggingface/datasets/issues/7921
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/7921/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/7921/comments
|
https://api.github.com/repos/huggingface/datasets/issues/7921/events
|
https://github.com/huggingface/datasets/pull/7921
| 3,766,879,197
|
PR_kwDODunzps66zE_q
| 7,921
|
Add beginner-friendly quick installation verification tip in README
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/237550974?v=4",
"events_url": "https://api.github.com/users/ashupaul2005-byte/events{/privacy}",
"followers_url": "https://api.github.com/users/ashupaul2005-byte/followers",
"following_url": "https://api.github.com/users/ashupaul2005-byte/following{/other_user}",
"gists_url": "https://api.github.com/users/ashupaul2005-byte/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/ashupaul2005-byte",
"id": 237550974,
"login": "ashupaul2005-byte",
"node_id": "U_kgDODii9fg",
"organizations_url": "https://api.github.com/users/ashupaul2005-byte/orgs",
"received_events_url": "https://api.github.com/users/ashupaul2005-byte/received_events",
"repos_url": "https://api.github.com/users/ashupaul2005-byte/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/ashupaul2005-byte/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ashupaul2005-byte/subscriptions",
"type": "User",
"url": "https://api.github.com/users/ashupaul2005-byte",
"user_view_type": "public"
}
|
[] |
open
| false
| null |
[] | null |
[] | 2025-12-29T09:22:27
| 2025-12-29T09:22:27
| null |
NONE
| null | null | 0
|
{
"diff_url": "https://github.com/huggingface/datasets/pull/7921.diff",
"html_url": "https://github.com/huggingface/datasets/pull/7921",
"merged_at": null,
"patch_url": "https://github.com/huggingface/datasets/pull/7921.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/7921"
}
|
This PR adds a small beginner-friendly tip to help users quickly verify whether 🤗 Datasets is installed correctly by loading a simple dataset.
This improves the onboarding experience for first-time users and reduces early confusion.
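The diff itself is not shown here, but a verification snippet along these lines would do the job (the exact dataset used in the PR may differ; `rotten_tomatoes` is just a small public one):
```python
from datasets import load_dataset

# Load a tiny slice of a small public dataset to confirm the install works.
ds = load_dataset("rotten_tomatoes", split="train[:5]")
print(ds)                     # Dataset({features: ['text', 'label'], num_rows: 5})
print(ds[0]["text"][:60])     # first few characters of the first example
```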
| null |
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7921/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/7921/timeline
| null | null | null | null | true
|
https://api.github.com/repos/huggingface/datasets/issues/7920
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/7920/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/7920/comments
|
https://api.github.com/repos/huggingface/datasets/issues/7920/events
|
https://github.com/huggingface/datasets/pull/7920
| 3,766,070,566
|
PR_kwDODunzps66wgLx
| 7,920
|
Add progress_format support for machine-readable progress output
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/563412?v=4",
"events_url": "https://api.github.com/users/podarok/events{/privacy}",
"followers_url": "https://api.github.com/users/podarok/followers",
"following_url": "https://api.github.com/users/podarok/following{/other_user}",
"gists_url": "https://api.github.com/users/podarok/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/podarok",
"id": 563412,
"login": "podarok",
"node_id": "MDQ6VXNlcjU2MzQxMg==",
"organizations_url": "https://api.github.com/users/podarok/orgs",
"received_events_url": "https://api.github.com/users/podarok/received_events",
"repos_url": "https://api.github.com/users/podarok/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/podarok/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/podarok/subscriptions",
"type": "User",
"url": "https://api.github.com/users/podarok",
"user_view_type": "public"
}
|
[] |
open
| false
| null |
[] | null |
[] | 2025-12-28T22:35:24
| 2025-12-28T22:35:24
| null |
NONE
| null | null | 0
|
{
"diff_url": "https://github.com/huggingface/datasets/pull/7920.diff",
"html_url": "https://github.com/huggingface/datasets/pull/7920",
"merged_at": null,
"patch_url": "https://github.com/huggingface/datasets/pull/7920.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/7920"
}
|
## Summary
Adds `progress_format` support to `datasets`, enabling machine-readable JSON progress output similar to [huggingface/tokenizers#1921](https://github.com/huggingface/tokenizers/pull/1921).
## Motivation
When using `datasets` in automated pipelines or UI applications, it's useful to emit machine-readable progress instead of ANSI progress bars. This PR adds the same `progress_format` option that was implemented in tokenizers.
## Changes
### New Functions
- `set_progress_format(format: str)`: Set global progress format
- `get_progress_format() -> str`: Get current progress format
### Supported Formats
1. **"tqdm"** (default): Interactive progress bars
2. **"json"**: Machine-readable JSON lines to stderr
3. **"silent"**: No output
### JSON Format
When `progress_format="json"`, a JSON line is emitted on every 5% progress change and on completion:
```json
{"stage":"Processing","current":50,"total":100,"percent":50.0}
```
## Usage Example
```python
from datasets import load_dataset
from datasets.utils import set_progress_format
# Enable JSON output
set_progress_format("json")
# Progress will now be emitted as JSON lines
dataset = load_dataset("Goader/kobza", split="train", streaming=True)
for sample in dataset:
process(sample)
```
## Implementation Details
- Suppresses visual output using `io.StringIO()` when format is "json"
- Keeps progress tracking active (unlike `disable=True`)
- Emits JSON to stderr every 5% progress change
- Exports new functions from `datasets.utils`
## Cross-Reference
This implementation mirrors the approach from:
- [huggingface/tokenizers#1921](https://github.com/huggingface/tokenizers/pull/1921)
## Testing
Tested with:
```python
from datasets.utils import set_progress_format, tqdm
set_progress_format('json')
for i in tqdm(range(100), desc='Test'):
process(i)
# Outputs: {"stage":"Test","current":10,"total":100,"percent":10.0}
```
## Checklist
- [x] New functions added to `datasets.utils.tqdm`
- [x] Functions exported from `datasets.utils.__init__`
- [x] JSON format emits to stderr
- [x] Visual output suppressed when format="json"
- [x] Progress tracking remains active
- [x] Cross-referenced with tokenizers#1921
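For pipeline authors, a hedged sketch of how a parent process might consume these lines; it assumes only the `{"stage": ..., "percent": ...}` format shown above and standard-library calls (`train.py` is a hypothetical script that calls `set_progress_format("json")`):
```python
import json
import subprocess

# Progress lines arrive on stderr, one JSON object per line.
proc = subprocess.Popen(
    ["python", "train.py"],  # hypothetical script using the feature
    stderr=subprocess.PIPE,
    text=True,
)
for line in proc.stderr:
    try:
        event = json.loads(line)
    except json.JSONDecodeError:
        continue  # ignore non-progress stderr output
    print(f"{event['stage']}: {event['percent']:.0f}%")
proc.wait()
```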
| null |
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7920/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/7920/timeline
| null | null | null | null | true
|
https://api.github.com/repos/huggingface/datasets/issues/7919
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/7919/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/7919/comments
|
https://api.github.com/repos/huggingface/datasets/issues/7919/events
|
https://github.com/huggingface/datasets/pull/7919
| 3,765,768,457
|
PR_kwDODunzps66vmQC
| 7,919
|
Fix load_from_disk progress bar with redirected stdout
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/118056245?v=4",
"events_url": "https://api.github.com/users/omarfarhoud/events{/privacy}",
"followers_url": "https://api.github.com/users/omarfarhoud/followers",
"following_url": "https://api.github.com/users/omarfarhoud/following{/other_user}",
"gists_url": "https://api.github.com/users/omarfarhoud/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/omarfarhoud",
"id": 118056245,
"login": "omarfarhoud",
"node_id": "U_kgDOBwllNQ",
"organizations_url": "https://api.github.com/users/omarfarhoud/orgs",
"received_events_url": "https://api.github.com/users/omarfarhoud/received_events",
"repos_url": "https://api.github.com/users/omarfarhoud/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/omarfarhoud/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/omarfarhoud/subscriptions",
"type": "User",
"url": "https://api.github.com/users/omarfarhoud",
"user_view_type": "public"
}
|
[] |
open
| false
| null |
[] | null |
[
"this seems to contradict the comment that says \r\n\r\n> set `disable=None` rather than `disable=False` by default to disable progress bar when no TTY attached\r\n\r\nI believe the right approach is to do the same as in https://github.com/huggingface/huggingface_hub/pull/2698",
"> this seems to contradict the comment that says\r\n> \r\n> > set `disable=None` rather than `disable=False` by default to disable progress bar when no TTY attached\r\n> \r\n> I believe the right approach is to do the same as in [huggingface/huggingface_hub#2698](https://github.com/huggingface/huggingface_hub/pull/2698)\r\n\r\nUpdated to check TQDM_POSITION=-1 to force-enable progress bars in cloud environments, \r\nfollowing the same pattern as huggingface_hub#2698.",
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_7919). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.",
"Moved the TQDM_POSITION check to the tqdm class in utils/tqdm.py so all progress bars \r\nin the codebase have consistent behavior. Thanks for the suggestion!",
"@lhoestq thanks again for the suggestion. I’ve applied it and everything should now be consistent across all tqdm usage. Happy to adjust anything else if needed."
] | 2025-12-28T15:39:31
| 2026-01-14T22:49:06
| null |
NONE
| null | null | 0
|
{
"diff_url": "https://github.com/huggingface/datasets/pull/7919.diff",
"html_url": "https://github.com/huggingface/datasets/pull/7919",
"merged_at": null,
"patch_url": "https://github.com/huggingface/datasets/pull/7919.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/7919"
}
|
Fixes #7918
## Problem
When using `load_from_disk()` with `contextlib.redirect_stdout()`, the progress bar was not showing even for datasets with >16 files.
## Root Cause
The `disable` parameter was set to `None`, which triggers TTY auto-detection. This fails when stdout is redirected, causing the progress bar to be hidden.
## Solution
Changed `disable=len(state["_data_files"]) <= 16 or None` to `disable=len(state["_data_files"]) <= 16` to force the progress bar to show for datasets with >16 files, regardless of stdout redirection.
## Testing
Verified that progress bars now appear correctly both with and without stdout redirection for datasets with >16 shards.
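To make the root cause reproducible outside `datasets`, here is a minimal standalone sketch of tqdm's TTY auto-detection, using plain `tqdm` and the standard library rather than the `datasets` internals:
```python
import contextlib
import io

from tqdm import tqdm

buf = io.StringIO()
with contextlib.redirect_stderr(buf):  # StringIO.isatty() returns False
    for _ in tqdm(range(3), disable=None):  # None = auto: hide bar on non-TTY
        pass

print(repr(buf.getvalue()))  # '' -> the bar was silently disabled
```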
| null |
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7919/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/7919/timeline
| null | null | null | null | true
|
https://api.github.com/repos/huggingface/datasets/issues/7918
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/7918/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/7918/comments
|
https://api.github.com/repos/huggingface/datasets/issues/7918/events
|
https://github.com/huggingface/datasets/issues/7918
| 3,765,489,462
|
I_kwDODunzps7gcM82
| 7,918
|
datasets.load_from_disk doesn't show progress bar
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/60286968?v=4",
"events_url": "https://api.github.com/users/Tommigun1980/events{/privacy}",
"followers_url": "https://api.github.com/users/Tommigun1980/followers",
"following_url": "https://api.github.com/users/Tommigun1980/following{/other_user}",
"gists_url": "https://api.github.com/users/Tommigun1980/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/Tommigun1980",
"id": 60286968,
"login": "Tommigun1980",
"node_id": "MDQ6VXNlcjYwMjg2OTY4",
"organizations_url": "https://api.github.com/users/Tommigun1980/orgs",
"received_events_url": "https://api.github.com/users/Tommigun1980/received_events",
"repos_url": "https://api.github.com/users/Tommigun1980/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/Tommigun1980/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Tommigun1980/subscriptions",
"type": "User",
"url": "https://api.github.com/users/Tommigun1980",
"user_view_type": "public"
}
|
[] |
open
| false
| null |
[] | null |
[
"#self-assign"
] | 2025-12-28T09:14:41
| 2025-12-28T15:07:01
| null |
NONE
| null | null | null | null |
### Describe the bug
This is the inverse of the bug at [https://github.com/huggingface/datasets/issues/7030](https://github.com/huggingface/datasets/issues/7030), i.e. that `datasets.load_from_disk(path)` displays no progress bar. My dataset has > 16 files in it.
I am redirecting stdout as I capture the log; could this have something to do with it? All other progress bars work fine, though; only the HF dataset progress bars are affected.
### Steps to reproduce the bug
```py
with contextlib.redirect_stdout(log_file), contextlib.redirect_stderr(log_file):
datasets.load_from_disk(path)
```
### Expected behavior
The progress bar should show when loading a dataset.
### Environment info
Python 3.13.9
Datasets 4.4.1
macOS Tahoe 26.2
| null |
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7918/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/7918/timeline
| null | null |
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
|
{
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
}
| false
|
https://api.github.com/repos/huggingface/datasets/issues/7917
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/7917/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/7917/comments
|
https://api.github.com/repos/huggingface/datasets/issues/7917/events
|
https://github.com/huggingface/datasets/issues/7917
| 3,764,913,807
|
I_kwDODunzps7gaAaP
| 7,917
|
IterableDataset supports automatic sharding
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/61858900?v=4",
"events_url": "https://api.github.com/users/howitry/events{/privacy}",
"followers_url": "https://api.github.com/users/howitry/followers",
"following_url": "https://api.github.com/users/howitry/following{/other_user}",
"gists_url": "https://api.github.com/users/howitry/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/howitry",
"id": 61858900,
"login": "howitry",
"node_id": "MDQ6VXNlcjYxODU4OTAw",
"organizations_url": "https://api.github.com/users/howitry/orgs",
"received_events_url": "https://api.github.com/users/howitry/received_events",
"repos_url": "https://api.github.com/users/howitry/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/howitry/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/howitry/subscriptions",
"type": "User",
"url": "https://api.github.com/users/howitry",
"user_view_type": "public"
}
|
[] |
open
| false
| null |
[] | null |
[
"You can already use `.shard()` instead like this:\n\n```python\ndataset = dataset.shard(index=rank, num_shards=world_size)\n```\n\nnote that it requires that `dataset.num_shards >= world_size`, and that it may result in nodes having the same number of shards +- 1",
"> You can already use `.shard()` instead like this:\n> \n> dataset = dataset.shard(index=rank, num_shards=world_size)\n> note that it requires that `dataset.num_shards >= world_size`, and that it may result in nodes having the same number of shards +- 1\n\nThis means I have to ensure that the initial num_shards is greater than the number of GPUs I use each time, which seems inflexible. Is there a way to dynamically divide the data into multiple shards based on the number of GPUs used each time? For example:\n```\ndataset = load_dataset(*, stream=True) # dataset.num_shards()=1\nnum_shards=world_size*dataloader_num_workers\ndataset = dataset.dynamically_shard(num_shards=num_shards, num_samples=num_samples) #We may need to know the total number of samples (num_samples) in advance.\n```\n\n",
"> Is there a way to dynamically divide the data into multiple shards based on the number of GPUs used each time?\n\nNo it's not possible without either\n\n1. doing data skipping, which degrades the data loading performance significantly (every node has to download the same data and skip most samples)\n2. or divide the original files further, which requires additional logic for every file format\n\nI would be interested in exploring 2 though, maybe if we start with Parquet support. Right now it fails because `ArrowExamplesIterable` doesn't know how to shard more than num_shards. We could have instead a `ReshardableArrowExamplesIterable` that would pass the right arguments to `_generate_tables()` in parquet.py to only read the data requested for a specific node",
"> ReshardableArrowExamplesIterable\n\nOkay, my datasets are all on my local disk, so I haven't considered the overhead of data download. Are there any tutorials on creating custom iterable datasets? For example, a custom `iterabledataset.__iter__` function can be used to skip data, and it can inherit operations like `iterabledataset.map`."
] | 2025-12-27T16:48:29
| 2025-12-29T16:06:52
| null |
NONE
| null | null | null | null |
### Feature request
Add sharding support to the streaming `IterableDataset`, allowing users to adjust the number of shards to match their training resources. For example:
```py
dataset = load_dataset(..., streaming=True)
dataset = dataset.shard(num_shards=num_shards, num_samples=num_samples)  # We may need to know the total number of samples (num_samples) in advance.
```
### Motivation
When performing large-scale pre-training in a distributed environment, large datasets may only be loaded in a streaming manner. To improve training efficiency, my current approach is as follows:
```py
from datasets import load_dataset
from datasets.distributed import split_dataset_by_node

file_type = "parquet"
dataset_path = "./*.parquet"
dataset = load_dataset(file_type, data_files=dataset_path, streaming=True)
dataset = split_dataset_by_node(dataset, rank=rank, world_size=world_size)
```
I split a large file into N = world_size * dataloader_num_workers files and placed them under dataset_path. This ensures that each GPU processes different shards. However, this approach has some issues. If the number of GPUs used to train the model changes next time, I need to split the large file again to ensure that IterableDataset.num_shards = world_size * dataloader_num_workers.
I'd like to know if there's a better approach, such as directly loading the large dataset in a streaming manner and then sharding the `IterableDataset` based on the number of GPUs and `num_workers`, similar to Example 1 of https://docs.pytorch.org/docs/stable/data.html#torch.utils.data.IterableDataset. @lhoestq
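For reference, a hedged sketch tying the `.shard()` approach discussed in the comments to a launcher environment; the `RANK`/`WORLD_SIZE` env vars (as set by e.g. torchrun) and the file layout are assumptions:
```python
import os

from datasets import load_dataset

rank = int(os.environ["RANK"])
world_size = int(os.environ["WORLD_SIZE"])

ds = load_dataset("parquet", data_files="./*.parquet", split="train", streaming=True)
# .shard() assigns whole file-level shards per rank, so it needs at least
# one shard per rank; hence the pre-splitting described above.
assert ds.num_shards >= world_size, "need at least one file per rank"
ds = ds.shard(num_shards=world_size, index=rank)
```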
| null |
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7917/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/7917/timeline
| null | null |
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
|
{
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
}
| false
|
https://api.github.com/repos/huggingface/datasets/issues/7916
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/7916/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/7916/comments
|
https://api.github.com/repos/huggingface/datasets/issues/7916/events
|
https://github.com/huggingface/datasets/issues/7916
| 3,764,901,707
|
I_kwDODunzps7gZ9dL
| 7,916
|
No description provided.
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/61858900?v=4",
"events_url": "https://api.github.com/users/howitry/events{/privacy}",
"followers_url": "https://api.github.com/users/howitry/followers",
"following_url": "https://api.github.com/users/howitry/following{/other_user}",
"gists_url": "https://api.github.com/users/howitry/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/howitry",
"id": 61858900,
"login": "howitry",
"node_id": "MDQ6VXNlcjYxODU4OTAw",
"organizations_url": "https://api.github.com/users/howitry/orgs",
"received_events_url": "https://api.github.com/users/howitry/received_events",
"repos_url": "https://api.github.com/users/howitry/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/howitry/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/howitry/subscriptions",
"type": "User",
"url": "https://api.github.com/users/howitry",
"user_view_type": "public"
}
|
[
{
"color": "a2eeef",
"default": true,
"description": "New feature or request",
"id": 1935892871,
"name": "enhancement",
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement"
}
] |
closed
| false
| null |
[] | null |
[] | 2025-12-27T16:33:11
| 2025-12-27T16:45:22
| 2025-12-27T16:45:22
|
NONE
| null | null | null | null | null |
{
"avatar_url": "https://avatars.githubusercontent.com/u/61858900?v=4",
"events_url": "https://api.github.com/users/howitry/events{/privacy}",
"followers_url": "https://api.github.com/users/howitry/followers",
"following_url": "https://api.github.com/users/howitry/following{/other_user}",
"gists_url": "https://api.github.com/users/howitry/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/howitry",
"id": 61858900,
"login": "howitry",
"node_id": "MDQ6VXNlcjYxODU4OTAw",
"organizations_url": "https://api.github.com/users/howitry/orgs",
"received_events_url": "https://api.github.com/users/howitry/received_events",
"repos_url": "https://api.github.com/users/howitry/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/howitry/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/howitry/subscriptions",
"type": "User",
"url": "https://api.github.com/users/howitry",
"user_view_type": "public"
}
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7916/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/7916/timeline
| null |
completed
|
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
|
{
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
}
| false
|
https://api.github.com/repos/huggingface/datasets/issues/7915
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/7915/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/7915/comments
|
https://api.github.com/repos/huggingface/datasets/issues/7915/events
|
https://github.com/huggingface/datasets/issues/7915
| 3,762,042,396
|
I_kwDODunzps7gPDYc
| 7,915
|
GDPval dataset Word docs corrupted
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/12248575?v=4",
"events_url": "https://api.github.com/users/alexheat/events{/privacy}",
"followers_url": "https://api.github.com/users/alexheat/followers",
"following_url": "https://api.github.com/users/alexheat/following{/other_user}",
"gists_url": "https://api.github.com/users/alexheat/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/alexheat",
"id": 12248575,
"login": "alexheat",
"node_id": "MDQ6VXNlcjEyMjQ4NTc1",
"organizations_url": "https://api.github.com/users/alexheat/orgs",
"received_events_url": "https://api.github.com/users/alexheat/received_events",
"repos_url": "https://api.github.com/users/alexheat/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/alexheat/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/alexheat/subscriptions",
"type": "User",
"url": "https://api.github.com/users/alexheat",
"user_view_type": "public"
}
|
[] |
open
| false
| null |
[] | null |
[
"tentatively tagging @simonpfish ^\n\n(if it's an option you could enable PRs/Discussions on your dataset on HF)"
] | 2025-12-25T13:56:55
| 2025-12-26T09:06:13
| null |
NONE
| null | null | null | null |
The [openai/gdpval](https://huggingface.co/datasets/openai/gdpval) dataset on Hugging Face contains Word .docx files with two types of corruption that cause Microsoft Word to display an "unreadable content" error.
### Root Causes
1. **Corrupted settings.xml**: The `word/settings.xml` file uses incorrect namespace prefixes (`ns0:`, `ns1:`, etc.) instead of the proper prefixes (`w:`, `mc:`, `m:`, etc.)
2. **Malformed TargetMode attributes**: Some files have `TargetMode="External"` attributes missing their closing `/>` tag in hyperlink relationships
Both issues cause Word to reject the files even though the XML structure is technically valid.
I have a fix for the issue here https://github.com/alexheat/gdpval-docx-fix
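The linked repo contains the actual fix; a minimal sketch of the first repair (rewriting the namespace prefixes in `word/settings.xml`) might look like the following, assuming a simple textual prefix swap suffices. The prefix map is hypothetical: the real mapping must be derived from the `xmlns` declarations actually present in each corrupted file.
```python
import zipfile

# Hypothetical prefix map; derive the real one from the file's xmlns declarations.
PREFIX_MAP = {
    "xmlns:ns0=": "xmlns:w=", "ns0:": "w:",
    "xmlns:ns1=": "xmlns:mc=", "ns1:": "mc:",
}

def fix_settings_xml(src: str, dst: str) -> None:
    """Copy a .docx, rewriting namespace prefixes in word/settings.xml."""
    with zipfile.ZipFile(src) as zin, zipfile.ZipFile(dst, "w", zipfile.ZIP_DEFLATED) as zout:
        for item in zin.infolist():
            data = zin.read(item.filename)
            if item.filename == "word/settings.xml":
                xml = data.decode("utf-8")
                for bad, good in PREFIX_MAP.items():
                    xml = xml.replace(bad, good)
                data = xml.encode("utf-8")
            zout.writestr(item, data)  # all other entries pass through untouched

fix_settings_xml("task.docx", "task_fixed.docx")
```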
| null |
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7915/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/7915/timeline
| null | null |
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
|
{
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
}
| false
|
https://api.github.com/repos/huggingface/datasets/issues/7914
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/7914/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/7914/comments
|
https://api.github.com/repos/huggingface/datasets/issues/7914/events
|
https://github.com/huggingface/datasets/issues/7914
| 3,760,894,100
|
I_kwDODunzps7gKrCU
| 7,914
|
[ROCm] please install 'torchcodec'
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/42451412?v=4",
"events_url": "https://api.github.com/users/AndreasKaratzas/events{/privacy}",
"followers_url": "https://api.github.com/users/AndreasKaratzas/followers",
"following_url": "https://api.github.com/users/AndreasKaratzas/following{/other_user}",
"gists_url": "https://api.github.com/users/AndreasKaratzas/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/AndreasKaratzas",
"id": 42451412,
"login": "AndreasKaratzas",
"node_id": "MDQ6VXNlcjQyNDUxNDEy",
"organizations_url": "https://api.github.com/users/AndreasKaratzas/orgs",
"received_events_url": "https://api.github.com/users/AndreasKaratzas/received_events",
"repos_url": "https://api.github.com/users/AndreasKaratzas/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/AndreasKaratzas/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/AndreasKaratzas/subscriptions",
"type": "User",
"url": "https://api.github.com/users/AndreasKaratzas",
"user_view_type": "public"
}
|
[] |
open
| false
| null |
[] | null |
[
"I was able to install torchcodec by building it from source and have put together a PR: https://github.com/vllm-project/vllm/pull/31323\n\nStill I think it would make this framework more robust to add at least one fallback lib (that is more widely used) in place should torchcodec installation fail or library is not found."
] | 2025-12-24T19:39:17
| 2025-12-28T07:25:42
| null |
NONE
| null | null | null | null |
### Describe the bug
The `datasets` library is widely used by many Python packages and is therefore a requirement on many platforms, including vLLM on ROCm. During audio dataset tests, the following exception is triggered:
```python
def decode_example(
self, value: dict, token_per_repo_id: Optional[dict[str, Union[str, bool, None]]] = None
) -> "AudioDecoder":
"""Decode example audio file into audio data.
Args:
value (`dict`):
A dictionary with keys:
- `path`: String with relative audio file path.
- `bytes`: Bytes of the audio file.
token_per_repo_id (`dict`, *optional*):
To access and decode
audio files from private repositories on the Hub, you can pass
a dictionary repo_id (`str`) -> token (`bool` or `str`)
Returns:
`torchcodec.decoders.AudioDecoder`
"""
if config.TORCHCODEC_AVAILABLE:
from ._torchcodec import AudioDecoder
else:
> raise ImportError("To support decoding audio data, please install 'torchcodec'.")
E ImportError: To support decoding audio data, please install 'torchcodec'.
```
At the same time, `torchcodec` cannot be installed on ROCm, because its GPU acceleration uses NVIDIA's NVDEC hardware decoder, which is NVIDIA-specific. Therefore, code paths that hit this block trigger errors on ROCm. Can you add an alternative package as a fallback instead of raising an ImportError?
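In the meantime, not a library fix but a hedged user-side workaround sketch: disable decoding so `torchcodec` is never imported, and decode with another library. It assumes `soundfile` is installed, the column is named `audio`, the split name is `validation`, the audio bytes are embedded in the dataset, and the format is one soundfile can read:
```python
import io

import soundfile as sf
from datasets import Audio, load_dataset

ds = load_dataset("D4nt3/esb-datasets-earnings22-validation-tiny-filtered", split="validation")
ds = ds.cast_column("audio", Audio(decode=False))  # keep raw bytes; no torchcodec

sample = ds[0]["audio"]  # {"path": ..., "bytes": ...}
waveform, sample_rate = sf.read(io.BytesIO(sample["bytes"]))
print(waveform.shape, sample_rate)
```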
### Steps to reproduce the bug
On a machine with MI300/MI325/MI355:
```bash
pytest -s -v tests/entrypoints/openai/correctness/test_transcription_api_correctness.py::test_wer_correctness[12.74498-D4nt3/esb-datasets-earnings22-validation-tiny-filtered-openai/whisper-large-v3]
```
### Expected behavior
```log
_________________________________________________ test_wer_correctness[12.74498-D4nt3/esb-datasets-earnings22-validation-tiny-filtered-openai/whisper-large-v3] ________________________________________[383/535$
model_name = 'openai/whisper-large-v3', dataset_repo = 'D4nt3/esb-datasets-earnings22-validation-tiny-filtered', expected_wer = 12.74498, n_examples = -1, max_concurrent_request = None
@pytest.mark.parametrize("model_name", ["openai/whisper-large-v3"])
# Original dataset is 20GB+ in size, hence we use a pre-filtered slice.
@pytest.mark.parametrize(
"dataset_repo", ["D4nt3/esb-datasets-earnings22-validation-tiny-filtered"]
)
# NOTE: Expected WER measured with equivalent hf.transformers args:
# whisper-large-v3 + esb-datasets-earnings22-validation-tiny-filtered.
@pytest.mark.parametrize("expected_wer", [12.744980])
def test_wer_correctness(
model_name, dataset_repo, expected_wer, n_examples=-1, max_concurrent_request=None
):
# TODO refactor to use `ASRDataset`
with RemoteOpenAIServer(model_name, ["--enforce-eager"]) as remote_server:
> dataset = load_hf_dataset(dataset_repo)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
tests/entrypoints/openai/correctness/test_transcription_api_correctness.py:160:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
tests/entrypoints/openai/correctness/test_transcription_api_correctness.py:111: in load_hf_dataset
if "duration_ms" not in dataset[0]:
^^^^^^^^^^
/usr/local/lib/python3.12/dist-packages/datasets/arrow_dataset.py:2876: in __getitem__
return self._getitem(key)
^^^^^^^^^^^^^^^^^^
/usr/local/lib/python3.12/dist-packages/datasets/arrow_dataset.py:2858: in _getitem
formatted_output = format_table(
/usr/local/lib/python3.12/dist-packages/datasets/formatting/formatting.py:658: in format_table
return formatter(pa_table, query_type=query_type)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
/usr/local/lib/python3.12/dist-packages/datasets/formatting/formatting.py:411: in __call__
return self.format_row(pa_table)
^^^^^^^^^^^^^^^^^^^^^^^^^
/usr/local/lib/python3.12/dist-packages/datasets/formatting/formatting.py:460: in format_row
row = self.python_features_decoder.decode_row(row)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
/usr/local/lib/python3.12/dist-packages/datasets/formatting/formatting.py:224: in decode_row
return self.features.decode_example(row, token_per_repo_id=self.token_per_repo_id) if self.features else row
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
/usr/local/lib/python3.12/dist-packages/datasets/features/features.py:2111: in decode_example
column_name: decode_nested_example(feature, value, token_per_repo_id=token_per_repo_id)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
/usr/local/lib/python3.12/dist-packages/datasets/features/features.py:1419: in decode_nested_example
return schema.decode_example(obj, token_per_repo_id=token_per_repo_id) if obj is not None else None
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
```
### Environment info
- `datasets` version: 4.4.2
- Platform: Linux-5.15.0-161-generic-x86_64-with-glibc2.35
- Python version: 3.12.12
- `huggingface_hub` version: 0.36.0
- PyArrow version: 22.0.0
- Pandas version: 2.3.3
- `fsspec` version: 2025.10.0
| null |
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7914/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/7914/timeline
| null | null |
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
|
{
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
}
| false
|
https://api.github.com/repos/huggingface/datasets/issues/7913
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/7913/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/7913/comments
|
https://api.github.com/repos/huggingface/datasets/issues/7913/events
|
https://github.com/huggingface/datasets/pull/7913
| 3,758,884,376
|
PR_kwDODunzps66aEsF
| 7,913
|
Add lance format support
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/17097?v=4",
"events_url": "https://api.github.com/users/eddyxu/events{/privacy}",
"followers_url": "https://api.github.com/users/eddyxu/followers",
"following_url": "https://api.github.com/users/eddyxu/following{/other_user}",
"gists_url": "https://api.github.com/users/eddyxu/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/eddyxu",
"id": 17097,
"login": "eddyxu",
"node_id": "MDQ6VXNlcjE3MDk3",
"organizations_url": "https://api.github.com/users/eddyxu/orgs",
"received_events_url": "https://api.github.com/users/eddyxu/received_events",
"repos_url": "https://api.github.com/users/eddyxu/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/eddyxu/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/eddyxu/subscriptions",
"type": "User",
"url": "https://api.github.com/users/eddyxu",
"user_view_type": "public"
}
|
[] |
closed
| false
| null |
[] | null |
[
"Mentioned https://github.com/huggingface/datasets/issues/7863 as well",
"@pdames for vis",
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_7913). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.",
"Cool ! I notice the current implementation doesn't support streaming because of the symlink hack.\r\n\r\nI believe you can do something like this instead:\r\n\r\n```python\r\ndef _generate_tables(self, paths: list[str]):\r\n for path in paths:\r\n ds = lance.dataset(path)\r\n for frag_idx, fragment in enumerate(ds.get_fragments()):\r\n for batch_idx, batch in enumerate(\r\n fragment.to_batches(columns=self.config.columns, batch_size=self.config.batch_size)\r\n ):\r\n table = pa.Table.from_batches([batch])\r\n table = self._cast_table(table)\r\n yield Key(frag_idx, batch_idx), table\r\n```\r\n\r\nnote that path can be a local one, but also a `hf://` URI",
"@lhoestq Take another look? ",
"I took the liberty to make a few changes :)\r\n\r\nNow I believe we should be good:\r\n- both local and streaming work fine\r\n- both dataset and single files work fine\r\n- all files are properly downloaded now than all files and metadata files are included in config.data_files\r\n- sharding is supported:\r\n - dataset: one shard = one fragment\r\n - single files: one shard = one file \r\n- streaming dataset resuming works fine thanks to Key()\r\n- the two hacks are visible and with TODOs to remove them when possible\r\n 1. remove the revision in HF uris since only \"main\" is supported\r\n 2. write proper _version/* files since lance doesn't work if they are symlinks\r\n\r\nI think this PR is ready, just let me know what you think before we merge 🚀 \r\n\r\nThe next steps are:\r\n- open a PR in this repository to document Lance support in `datasets`\r\n- open a PR in https://github.com/huggingface/hub-docs to add `pylance` to the list of integrated library on HF, and have some documentation on how to use it with datasets on HF (here is an example [PR](https://github.com/huggingface/hub-docs/pull/1892))\r\n- open a PR in https://github.com/huggingface/huggingface.js to add Lance as a supported dataset library on the HF website (here is an example [PR](https://github.com/huggingface/huggingface.js/pull/1870))\r\n\r\nFeel free to start some drafts (I noticed there are great examples in your HF account now !), I'll be happy to review :)\r\n\r\nAnd once Lance is available in huggingface.js and docs are ready we'll be ready to enable the Dataset Viewer and Lance code snippets on HF !"
] | 2025-12-24T00:52:20
| 2026-01-09T10:48:29
| 2026-01-09T10:48:29
|
CONTRIBUTOR
| null | null | 0
|
{
"diff_url": "https://github.com/huggingface/datasets/pull/7913.diff",
"html_url": "https://github.com/huggingface/datasets/pull/7913",
"merged_at": "2026-01-09T10:48:29Z",
"patch_url": "https://github.com/huggingface/datasets/pull/7913.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/7913"
}
|
Add lance format as one of the `packaged_modules`.
```py
import datasets
ds = datasets.load_dataset("org/lance_repo", split="train")
# Or
ds = datasets.load_dataset("./local/data.lance")
```
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq",
"user_view_type": "public"
}
|
{
"+1": 2,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 3,
"total_count": 5,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7913/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/7913/timeline
| null | null | null | null | true
|
https://api.github.com/repos/huggingface/datasets/issues/7912
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/7912/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/7912/comments
|
https://api.github.com/repos/huggingface/datasets/issues/7912/events
|
https://github.com/huggingface/datasets/pull/7912
| 3,755,023,829
|
PR_kwDODunzps66NQzG
| 7,912
|
fix low but large example indexerror
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/31857876?v=4",
"events_url": "https://api.github.com/users/CloseChoice/events{/privacy}",
"followers_url": "https://api.github.com/users/CloseChoice/followers",
"following_url": "https://api.github.com/users/CloseChoice/following{/other_user}",
"gists_url": "https://api.github.com/users/CloseChoice/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/CloseChoice",
"id": 31857876,
"login": "CloseChoice",
"node_id": "MDQ6VXNlcjMxODU3ODc2",
"organizations_url": "https://api.github.com/users/CloseChoice/orgs",
"received_events_url": "https://api.github.com/users/CloseChoice/received_events",
"repos_url": "https://api.github.com/users/CloseChoice/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/CloseChoice/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/CloseChoice/subscriptions",
"type": "User",
"url": "https://api.github.com/users/CloseChoice",
"user_view_type": "public"
}
|
[] |
closed
| false
| null |
[] | null |
[
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_7912). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update."
] | 2025-12-22T19:53:59
| 2026-01-09T13:23:52
| 2026-01-09T13:23:51
|
CONTRIBUTOR
| null | null | 0
|
{
"diff_url": "https://github.com/huggingface/datasets/pull/7912.diff",
"html_url": "https://github.com/huggingface/datasets/pull/7912",
"merged_at": "2026-01-09T13:23:51Z",
"patch_url": "https://github.com/huggingface/datasets/pull/7912.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/7912"
}
|
Fixes #7911.
This PR implements the approach outlined in the corresponding issue: when examples are large, the number of shards should never exceed the number of examples. This is an absolute edge case, but it can happen with image data.
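A minimal sketch of the clamp described above; the names are illustrative and not the exact variables in `datasets/utils/py_utils.py`:
```python
def pick_num_shards(dataset_nbytes: int, num_examples: int, max_shard_size: int) -> int:
    # Size-based estimate, as before ...
    estimated = int(dataset_nbytes / max_shard_size) + 1
    # ... but never more shards than examples, so every shard is non-empty.
    return max(1, min(num_examples, estimated))

# The edge case from #7911: one 900MB example with a 500MB shard limit.
assert pick_num_shards(900 * 2**20, 1, 500 * 2**20) == 1
```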
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq",
"user_view_type": "public"
}
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7912/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/7912/timeline
| null | null | null | null | true
|
https://api.github.com/repos/huggingface/datasets/issues/7911
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/7911/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/7911/comments
|
https://api.github.com/repos/huggingface/datasets/issues/7911/events
|
https://github.com/huggingface/datasets/issues/7911
| 3,753,447,559
|
I_kwDODunzps7fuRCH
| 7,911
|
IndexError when saving few large examples to disk
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/31857876?v=4",
"events_url": "https://api.github.com/users/CloseChoice/events{/privacy}",
"followers_url": "https://api.github.com/users/CloseChoice/followers",
"following_url": "https://api.github.com/users/CloseChoice/following{/other_user}",
"gists_url": "https://api.github.com/users/CloseChoice/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/CloseChoice",
"id": 31857876,
"login": "CloseChoice",
"node_id": "MDQ6VXNlcjMxODU3ODc2",
"organizations_url": "https://api.github.com/users/CloseChoice/orgs",
"received_events_url": "https://api.github.com/users/CloseChoice/received_events",
"repos_url": "https://api.github.com/users/CloseChoice/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/CloseChoice/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/CloseChoice/subscriptions",
"type": "User",
"url": "https://api.github.com/users/CloseChoice",
"user_view_type": "public"
}
|
[] |
closed
| false
| null |
[] | null |
[] | 2025-12-22T11:33:19
| 2026-01-09T13:23:53
| 2026-01-09T13:23:52
|
CONTRIBUTOR
| null | null | null | null |
### Describe the bug
I ran into this issue when processing a file (900MB) with just one example, but simplified it to the quicker reproducer below. The problem is that, if `num_shards` is not explicitly set, we calculate it from the dataset size using https://github.com/huggingface/datasets/blob/main/src/datasets/utils/py_utils.py#L96 with the default `config.MAX_SHARD_SIZE` of 500MB. If a single example is larger than this, we end up with more shards than examples and hit an IndexError when selecting the shards.
An easy workaround is:
`dataset.save_to_disk(output_path, max_shard_size="1GB")` or `dataset.save_to_disk(output_path, num_shards=1)`.
I believe this should be fixed; it can happen in edge cases for image data, especially when testing single partitions. The fix is straightforward: use `num_shards = min(num_examples, <previously_calculated_num_shards>)`.
### Steps to reproduce the bug
```python
from datasets import Dataset
target_size = 2 * 1024 * 1024 # 2 MB in bytes
base_text = (
"This is a sample sentence that will be repeated many times to create a large dataset. "
* 100
)
large_text = ""
while len(large_text.encode("utf-8")) < target_size:
large_text += base_text
actual_size = len(large_text.encode("utf-8"))
size_mb = actual_size / (1024 * 1024)
data = {"text": [large_text], "label": [0], "id": [1]}
dataset = Dataset.from_dict(data)
output_path = "./sample_dataset"
# make sure this is split into 2 shards
dataset.save_to_disk(output_path, max_shard_size="1MB")
```
this results in
```bash
Saving the dataset (1/3 shards): 100%|████████████████████████████████████████████████████████████████████████████████| 1/1 [00:00<00:00, 162.96 examples/s]
Traceback (most recent call last):
File "/home/tpitters/programming/toy-mmu/create_dataset.py", line 27, in <module>
dataset.save_to_disk(output_path, max_shard_size="1MB")
~~~~~~~~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/tpitters/programming/toy-mmu/.venv/lib/python3.13/site-packages/datasets/arrow_dataset.py", line 1640, in save_to_disk
for kwargs in kwargs_per_job:
^^^^^^^^^^^^^^
File "/home/tpitters/programming/toy-mmu/.venv/lib/python3.13/site-packages/datasets/arrow_dataset.py", line 1617, in <genexpr>
"shard": self.shard(num_shards=num_shards, index=shard_idx, contiguous=True),
~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/tpitters/programming/toy-mmu/.venv/lib/python3.13/site-packages/datasets/arrow_dataset.py", line 4987, in shard
return self.select(
~~~~~~~~~~~^
indices=indices,
^^^^^^^^^^^^^^^^
...<2 lines>...
writer_batch_size=writer_batch_size,
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
)
^
File "/home/tpitters/programming/toy-mmu/.venv/lib/python3.13/site-packages/datasets/arrow_dataset.py", line 562, in wrapper
out: Union["Dataset", "DatasetDict"] = func(self, *args, **kwargs)
~~~~^^^^^^^^^^^^^^^^^^^^^^^
File "/home/tpitters/programming/toy-mmu/.venv/lib/python3.13/site-packages/datasets/fingerprint.py", line 442, in wrapper
out = func(dataset, *args, **kwargs)
File "/home/tpitters/programming/toy-mmu/.venv/lib/python3.13/site-packages/datasets/arrow_dataset.py", line 4104, in select
return self._select_contiguous(start, length, new_fingerprint=new_fingerprint)
~~~~~~~~~~~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/tpitters/programming/toy-mmu/.venv/lib/python3.13/site-packages/datasets/arrow_dataset.py", line 562, in wrapper
out: Union["Dataset", "DatasetDict"] = func(self, *args, **kwargs)
~~~~^^^^^^^^^^^^^^^^^^^^^^^
File "/home/tpitters/programming/toy-mmu/.venv/lib/python3.13/site-packages/datasets/fingerprint.py", line 442, in wrapper
out = func(dataset, *args, **kwargs)
File "/home/tpitters/programming/toy-mmu/.venv/lib/python3.13/site-packages/datasets/arrow_dataset.py", line 4164, in _select_contiguous
_check_valid_indices_value(start, len(self))
~~~~~~~~~~~~~~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^
File "/home/tpitters/programming/toy-mmu/.venv/lib/python3.13/site-packages/datasets/arrow_dataset.py", line 624, in _check_valid_indices_value
raise IndexError(f"Index {index} out of range for dataset of size {size}.")
IndexError: Index 1 out of range for dataset of size 1.
```
### Expected behavior
should pass
### Environment info
datasets==4.4.2
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq",
"user_view_type": "public"
}
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7911/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/7911/timeline
| null |
completed
|
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
|
{
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
}
| false
|
https://api.github.com/repos/huggingface/datasets/issues/7910
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/7910/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/7910/comments
|
https://api.github.com/repos/huggingface/datasets/issues/7910/events
|
https://github.com/huggingface/datasets/pull/7910
| 3,749,894,414
|
PR_kwDODunzps658oGv
| 7,910
|
Enhance cast_column() with cast_kwargs parameter
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/49304833?v=4",
"events_url": "https://api.github.com/users/Moenupa/events{/privacy}",
"followers_url": "https://api.github.com/users/Moenupa/followers",
"following_url": "https://api.github.com/users/Moenupa/following{/other_user}",
"gists_url": "https://api.github.com/users/Moenupa/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/Moenupa",
"id": 49304833,
"login": "Moenupa",
"node_id": "MDQ6VXNlcjQ5MzA0ODMz",
"organizations_url": "https://api.github.com/users/Moenupa/orgs",
"received_events_url": "https://api.github.com/users/Moenupa/received_events",
"repos_url": "https://api.github.com/users/Moenupa/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/Moenupa/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Moenupa/subscriptions",
"type": "User",
"url": "https://api.github.com/users/Moenupa",
"user_view_type": "public"
}
|
[] |
open
| false
| null |
[] | null |
[] | 2025-12-20T10:09:11
| 2025-12-20T10:09:11
| null |
NONE
| null | null | 0
|
{
"diff_url": "https://github.com/huggingface/datasets/pull/7910.diff",
"html_url": "https://github.com/huggingface/datasets/pull/7910",
"merged_at": null,
"patch_url": "https://github.com/huggingface/datasets/pull/7910.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/7910"
}
|
Fixes #7909.
| null |
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7910/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/7910/timeline
| null | null | null | null | true
|
https://api.github.com/repos/huggingface/datasets/issues/7909
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/7909/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/7909/comments
|
https://api.github.com/repos/huggingface/datasets/issues/7909/events
|
https://github.com/huggingface/datasets/issues/7909
| 3,749,885,131
|
I_kwDODunzps7fgrTL
| 7,909
|
Support cast_kwargs in cast_columns
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/49304833?v=4",
"events_url": "https://api.github.com/users/Moenupa/events{/privacy}",
"followers_url": "https://api.github.com/users/Moenupa/followers",
"following_url": "https://api.github.com/users/Moenupa/following{/other_user}",
"gists_url": "https://api.github.com/users/Moenupa/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/Moenupa",
"id": 49304833,
"login": "Moenupa",
"node_id": "MDQ6VXNlcjQ5MzA0ODMz",
"organizations_url": "https://api.github.com/users/Moenupa/orgs",
"received_events_url": "https://api.github.com/users/Moenupa/received_events",
"repos_url": "https://api.github.com/users/Moenupa/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/Moenupa/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Moenupa/subscriptions",
"type": "User",
"url": "https://api.github.com/users/Moenupa",
"user_view_type": "public"
}
|
[
{
"color": "a2eeef",
"default": true,
"description": "New feature or request",
"id": 1935892871,
"name": "enhancement",
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement"
}
] |
open
| false
| null |
[] | null |
[] | 2025-12-20T10:02:07
| 2025-12-20T10:28:01
| null |
NONE
| null | null | null | null |
### Feature request
expose the keyword arguments of `cast()` (as `cast_kwargs`) through `cast_column()`
https://github.com/huggingface/datasets/blob/0feb65dd8733191dd2d1e74215b422fc5939a56a/src/datasets/arrow_dataset.py#L2205
### Motivation
`cast_column()` wraps the `cast()` function without exposing any of `cast()`'s arguments. For large multi-modal datasets, e.g.
```py
# a dataset with list[{"bytes"}: b'', ...], much more than one image
load_dataset("MLLM-CL/VTCBench").cast_column("images", List(Image(decode=False)))
```
This would fail due to #6206 and #7167: the default batch size of `1000` in `cast()` is too large and causes `pyarrow.lib.ArrowInvalid: offset overflow while concatenating arrays`.
https://github.com/huggingface/datasets/blob/0feb65dd8733191dd2d1e74215b422fc5939a56a/src/datasets/arrow_dataset.py#L2164-L2205
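Until the linked PR lands, a hedged workaround sketch is to call `cast()` directly, since it already exposes `batch_size`. The column name and feature type are taken from the snippet above; the split name is an assumption, and the `List` import assumes a recent `datasets` version where it is exported (as used above):
```python
from datasets import Image, List, load_dataset

ds = load_dataset("MLLM-CL/VTCBench", split="train")
features = ds.features.copy()
features["images"] = List(Image(decode=False))
# cast() already accepts batch_size; smaller batches avoid the offset overflow.
ds = ds.cast(features, batch_size=100)
```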
### Your contribution
#7910
| null |
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7909/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/7909/timeline
| null | null |
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
|
{
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
}
| false
|