Column schema (name: type, observed range):
- url: string (lengths 58–61)
- repository_url: string (1 class)
- labels_url: string (lengths 72–75)
- comments_url: string (lengths 67–70)
- events_url: string (lengths 65–68)
- html_url: string (lengths 46–51)
- id: int64 (599M–3.67B)
- node_id: string (lengths 18–32)
- number: int64 (1–7.88k)
- title: string (lengths 1–290)
- user: dict
- labels: list (lengths 0–4)
- state: string (2 classes)
- locked: bool (1 class)
- assignee: dict
- assignees: list (lengths 0–4)
- milestone: dict
- comments: list (lengths 0–30)
- created_at: timestamp[ns, tz=UTC] (2020-04-14 10:18:02 – 2025-11-26 16:16:56)
- updated_at: timestamp[ns, tz=UTC] (2020-04-27 16:04:17 – 2025-11-27 11:08:44)
- closed_at: timestamp[ns, tz=UTC] (2020-04-14 12:01:40 – 2025-11-21 12:31:19, nullable)
- author_association: string (4 classes)
- type: float64
- active_lock_reason: float64
- sub_issues_summary: dict
- issue_dependencies_summary: dict
- body: string (lengths 0–228k, nullable)
- closed_by: dict
- reactions: dict
- timeline_url: string (lengths 67–70)
- performed_via_github_app: float64
- state_reason: string (4 classes)
- draft: float64 (0–1, nullable)
- pull_request: dict
- is_pull_request: bool (2 classes)

Rows below follow this column order, with cells separated by `|`.
https://api.github.com/repos/huggingface/datasets/issues/7883
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/7883/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/7883/comments
|
https://api.github.com/repos/huggingface/datasets/issues/7883/events
|
https://github.com/huggingface/datasets/issues/7883
| 3,668,182,561
|
I_kwDODunzps7apAYh
| 7,883
|
Data.to_csv() cannot be recognized by pylance
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/154290630?v=4",
"events_url": "https://api.github.com/users/xi4ngxin/events{/privacy}",
"followers_url": "https://api.github.com/users/xi4ngxin/followers",
"following_url": "https://api.github.com/users/xi4ngxin/following{/other_user}",
"gists_url": "https://api.github.com/users/xi4ngxin/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/xi4ngxin",
"id": 154290630,
"login": "xi4ngxin",
"node_id": "U_kgDOCTJJxg",
"organizations_url": "https://api.github.com/users/xi4ngxin/orgs",
"received_events_url": "https://api.github.com/users/xi4ngxin/received_events",
"repos_url": "https://api.github.com/users/xi4ngxin/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/xi4ngxin/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/xi4ngxin/subscriptions",
"type": "User",
"url": "https://api.github.com/users/xi4ngxin",
"user_view_type": "public"
}
|
[] |
open
| false
| null |
[] | null |
[] | 2025-11-26T16:16:56
| 2025-11-26T16:16:56
| null |
NONE
| null | null |
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
|
{
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
}
|
### Describe the bug
Hi, everyone! I am a beginner with datasets.
I am testing reading multiple CSV files from a zip archive. Reading the dataset succeeds, and it can ultimately be saved to CSV correctly.
Intermediate results:
```
Generating train split: 62973 examples [00:00, 175939.01 examples/s]
DatasetDict({
train: Dataset({
features: ['交易时间\t', '收支方向\t', '业务(产品)种类\t', '交易金额\t', '币种\t', '时点余额\t', '对手方名称\t', '对方机构名称\t', ' 对方钱包ID/账号\t', '交易对手名称\t', '交易对手编号\t', '交易流水号\t', '摘要\t', '附言\t', '备注\t', '用途\t', '客户流水号\t'],
num_rows: 62973
})
})
```
However, Pylance gives me the following error:
```
Cannot access attribute "to_csv" for class "DatasetDict"
Attribute "to_csv" is unknownPylance[reportAttributeAccessIssue](https://github.com/microsoft/pylance-release/blob/main/docs/diagnostics/reportAttributeAccessIssue.md)```
Cannot access attribute "to_csv" for class "IterableDatasetDict"
Attribute "to_csv" is unknownPylance[reportAttributeAccessIssue](https://github.com/microsoft/pylance-release/blob/main/docs/diagnostics/reportAttributeAccessIssue.md)
(method) to_csv: Unknown | ((path_or_buf: datasets.utils.typing.PathLike | BinaryIO, batch_size: int | None = None, num_proc: int | None = None, storage_options: dict[Unknown, Unknown] | None = None, **to_csv_kwargs: Unknown) -> int) | ((path_or_buf: datasets.utils.typing.PathLike | BinaryIO, batch_size: int | None = None, storage_options: dict[Unknown, Unknown] | None = None, **to_csv_kwargs: Unknown) -> int)
```
I ignored the error, continued execution, and got the correct result:
```
Dataset({
features: ['交易时间\t', '收支方向\t', '业务(产品)种类\t', '交易金额\t', '币种\t', '时点余额\t', '对手方名称\t', '对方机构名称\t', '对方 钱包ID/账号\t', '交易对手名称\t', '交易对手编号\t', '交易流水号\t', '摘要\t', '附言\t', '备注\t', '用途\t', '客户流水号\t'],
num_rows: 62973
})
```
Since the data volume is small, I manually merged the CSV files, and the final result is consistent with what the program saved.
It looks like this:
<img width="1264" height="150" alt="Image" src="https://github.com/user-attachments/assets/743540d7-ad8c-4531-ae7e-de71a5243a32" />
### Steps to reproduce the bug
This is my code:
```python
from datasets import load_dataset

def main():
    url = "data/test.zip"
    data_files = {"train": url}
    dataset = load_dataset("csv", data_files=data_files, split="train", encoding="gbk", skiprows=2)
    # print(dataset)
    dataset.to_csv("data/test.csv")

if __name__ == "__main__":
    main()
```
### Expected behavior
I want to know why this happens. Is there something wrong with my code?
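For context (a likely explanation rather than an official answer): `load_dataset` is annotated to return a union of `Dataset`, `DatasetDict`, `IterableDataset`, and `IterableDatasetDict`, and `to_csv` is not defined on the dict-like classes, so Pylance cannot prove the attribute exists on the union even though the call succeeds at runtime. A minimal sketch of a workaround is to narrow the type before calling `to_csv`:
```python
from datasets import Dataset, load_dataset

def main():
    dataset = load_dataset("csv", data_files={"train": "data/test.zip"},
                           split="train", encoding="gbk", skiprows=2)
    # split="train" returns a Dataset at runtime, but the static return type
    # is a union, so narrow it explicitly for the type checker.
    assert isinstance(dataset, Dataset)
    dataset.to_csv("data/test.csv")  # no more reportAttributeAccessIssue

if __name__ == "__main__":
    main()
```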
### Environment info
- OS: Windows 11 (upgraded from Windows_NT x64 10.0.22631)
- Editor: VS Code 1.106.2 (user setup)
- `datasets` version: 4.4.1
| null |
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7883/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/7883/timeline
| null | null | null | null | false
|
https://api.github.com/repos/huggingface/datasets/issues/7882
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/7882/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/7882/comments
|
https://api.github.com/repos/huggingface/datasets/issues/7882/events
|
https://github.com/huggingface/datasets/issues/7882
| 3,667,664,527
|
I_kwDODunzps7anB6P
| 7,882
|
Inconsistent loading of LFS-hosted files in epfml/FineWeb-HQ dataset
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/6270922?v=4",
"events_url": "https://api.github.com/users/Oligou/events{/privacy}",
"followers_url": "https://api.github.com/users/Oligou/followers",
"following_url": "https://api.github.com/users/Oligou/following{/other_user}",
"gists_url": "https://api.github.com/users/Oligou/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/Oligou",
"id": 6270922,
"login": "Oligou",
"node_id": "MDQ6VXNlcjYyNzA5MjI=",
"organizations_url": "https://api.github.com/users/Oligou/orgs",
"received_events_url": "https://api.github.com/users/Oligou/received_events",
"repos_url": "https://api.github.com/users/Oligou/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/Oligou/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Oligou/subscriptions",
"type": "User",
"url": "https://api.github.com/users/Oligou",
"user_view_type": "public"
}
|
[] |
open
| false
| null |
[] | null |
[] | 2025-11-26T14:06:02
| 2025-11-26T14:06:02
| null |
NONE
| null | null |
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
|
{
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
}
|
### Describe the bug
Some files in the `epfml/FineWeb-HQ` dataset fail to load via the Hugging Face `datasets` library.
- xet-hosted files load fine
- LFS-hosted files sometimes fail
Example:
- Fails: https://huggingface.co/datasets/epfml/FineWeb-HQ/blob/main/data/CC-MAIN-2024-26/000_00003.parquet
- Works: https://huggingface.co/datasets/epfml/FineWeb-HQ/blob/main/data/CC-MAIN-2024-42/000_00027.parquet
Discussion: https://huggingface.co/datasets/epfml/FineWeb-HQ/discussions/2
### Steps to reproduce the bug
```python
from datasets import load_dataset
ds = load_dataset(
"epfml/FineWeb-HQ",
data_files="data/CC-MAIN-2024-26/000_00003.parquet",
)
```
Error message:
```
HfHubHTTPError: 403 Forbidden: None.
Cannot access content at: https://cdn-lfs-us-1.hf.co/repos/...
Make sure your token has the correct permissions.
...
<Error><Code>AccessDenied</Code><Message>Access Denied</Message></Error>
```
### Expected behavior
The dataset should load for all files.
### Environment info
- python 3.10
- datasets 4.4.1
| null |
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7882/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/7882/timeline
| null | null | null | null | false
|
https://api.github.com/repos/huggingface/datasets/issues/7881
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/7881/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/7881/comments
|
https://api.github.com/repos/huggingface/datasets/issues/7881/events
|
https://github.com/huggingface/datasets/pull/7881
| 3,667,642,524
|
PR_kwDODunzps61qI8F
| 7,881
|
Fix spurious label column when directories match split names
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/132138786?v=4",
"events_url": "https://api.github.com/users/neha222222/events{/privacy}",
"followers_url": "https://api.github.com/users/neha222222/followers",
"following_url": "https://api.github.com/users/neha222222/following{/other_user}",
"gists_url": "https://api.github.com/users/neha222222/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/neha222222",
"id": 132138786,
"login": "neha222222",
"node_id": "U_kgDOB-BHIg",
"organizations_url": "https://api.github.com/users/neha222222/orgs",
"received_events_url": "https://api.github.com/users/neha222222/received_events",
"repos_url": "https://api.github.com/users/neha222222/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/neha222222/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/neha222222/subscriptions",
"type": "User",
"url": "https://api.github.com/users/neha222222",
"user_view_type": "public"
}
|
[] |
open
| false
| null |
[] | null |
[] | 2025-11-26T13:59:46
| 2025-11-26T13:59:46
| null |
NONE
| null | null | null | null |
Issue - https://github.com/huggingface/datasets/issues/7880
| null |
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7881/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/7881/timeline
| null | null | 0
|
{
"diff_url": "https://github.com/huggingface/datasets/pull/7881.diff",
"html_url": "https://github.com/huggingface/datasets/pull/7881",
"merged_at": null,
"patch_url": "https://github.com/huggingface/datasets/pull/7881.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/7881"
}
| true
|
https://api.github.com/repos/huggingface/datasets/issues/7880
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/7880/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/7880/comments
|
https://api.github.com/repos/huggingface/datasets/issues/7880/events
|
https://github.com/huggingface/datasets/issues/7880
| 3,667,561,864
|
I_kwDODunzps7amo2I
| 7,880
|
Spurious label column created when audiofolder/imagefolder directories match split names
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/132138786?v=4",
"events_url": "https://api.github.com/users/neha222222/events{/privacy}",
"followers_url": "https://api.github.com/users/neha222222/followers",
"following_url": "https://api.github.com/users/neha222222/following{/other_user}",
"gists_url": "https://api.github.com/users/neha222222/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/neha222222",
"id": 132138786,
"login": "neha222222",
"node_id": "U_kgDOB-BHIg",
"organizations_url": "https://api.github.com/users/neha222222/orgs",
"received_events_url": "https://api.github.com/users/neha222222/received_events",
"repos_url": "https://api.github.com/users/neha222222/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/neha222222/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/neha222222/subscriptions",
"type": "User",
"url": "https://api.github.com/users/neha222222",
"user_view_type": "public"
}
|
[] |
open
| false
| null |
[] | null |
[] | 2025-11-26T13:36:24
| 2025-11-26T13:36:24
| null |
NONE
| null | null |
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
|
{
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
}
|
## Describe the bug
When using `audiofolder` or `imagefolder` with directories for **splits** (train/test) rather than class labels, a spurious `label` column is incorrectly created.
**Example:** https://huggingface.co/datasets/datasets-examples/doc-audio-4
```python
from datasets import load_dataset
ds = load_dataset("datasets-examples/doc-audio-4")
print(ds["train"].features)
```
This shows a `label` column with `ClassLabel(names=['test', 'train'])`, which is incorrect.
## Root cause
In `folder_based_builder.py`, the `labels` set is accumulated across ALL splits (line 77). When directories are `train/` and `test/`:
- `labels = {"train", "test"}` → `len(labels) > 1` → `add_labels = True`
- Spurious label column is created with split names as class labels
## Expected behavior
No `label` column should be added when directory names match split names.
## Proposed fix
Skip label inference when inferred labels match split names.
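As an illustration only (the helper name and split list below are hypothetical, not the actual code in `folder_based_builder.py`), the proposed guard could look like this:
```python
# Hypothetical sketch of the proposed fix; names are illustrative.
KNOWN_SPLIT_NAMES = {"train", "test", "validation"}

def should_add_labels(labels: set[str]) -> bool:
    # Directories named after splits (train/, test/) denote splits,
    # not class labels, so skip label inference for them.
    if labels and labels <= KNOWN_SPLIT_NAMES:
        return False
    return len(labels) > 1
```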
cc @lhoestq
| null |
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7880/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/7880/timeline
| null | null | null | null | false
|
https://api.github.com/repos/huggingface/datasets/issues/7879
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/7879/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/7879/comments
|
https://api.github.com/repos/huggingface/datasets/issues/7879/events
|
https://github.com/huggingface/datasets/issues/7879
| 3,657,249,446
|
I_kwDODunzps7Z_TKm
| 7,879
|
python core dump when downloading dataset
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/5960219?v=4",
"events_url": "https://api.github.com/users/hansewetz/events{/privacy}",
"followers_url": "https://api.github.com/users/hansewetz/followers",
"following_url": "https://api.github.com/users/hansewetz/following{/other_user}",
"gists_url": "https://api.github.com/users/hansewetz/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/hansewetz",
"id": 5960219,
"login": "hansewetz",
"node_id": "MDQ6VXNlcjU5NjAyMTk=",
"organizations_url": "https://api.github.com/users/hansewetz/orgs",
"received_events_url": "https://api.github.com/users/hansewetz/received_events",
"repos_url": "https://api.github.com/users/hansewetz/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/hansewetz/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/hansewetz/subscriptions",
"type": "User",
"url": "https://api.github.com/users/hansewetz",
"user_view_type": "public"
}
|
[] |
open
| false
| null |
[] | null |
[
"Hi @hansewetz I'm curious, for me it works just fine. Are you still observing the issue?",
"Yup ... still the same issue.\nHowever, after adding a ```sleep(1)``` call after the ``` for``` loop by accident during debugging, the program terminates properly (not a good solution though ... :-) ).\nAre there some threads created that handles the download that are still running when the program exits?\nHaven't had time yet to go through the code in ```iterable_dataset.py::IterableDataset```\n",
"Interesting, I was able to reproduce it, on a jupyter notebook the code runs just fine, as a Python script indeed it seems to never finish running (which is probably leading to the core dumped error). I'll try and take a look at the source code as well to see if I can figure it out.",
"Hi @hansewetz ,\nIf possible can I be assigned with this issue?\n\n",
"```If possible can I be assigned with this issue?```\nHi, I don't know how assignments work here and who can take decisions about assignments ... ",
"Hi @hansewetz and @Aymuos22, I have made some progress:\n\n1) Confirmed last working version is 3.1.0\n\n2) From 3.1.0 to 3.2.0, there was a change in how parquet files are read (see [here](https://github.com/huggingface/datasets/blob/main/src/datasets/packaged_modules/parquet/parquet.py/#168).\n\nThe issue seems to be the following code:\n\n```\nparquet_fragment.to_batches(\n batch_size=batch_size,\n columns=self.config.columns,\n filter=filter_expr,\n batch_readahead=0,\n fragment_readahead=0,\n )\n```\n\nAdding a `use_threads=False` parameter to the `to_batches` call solves the bug. However, this seems far from an optimal solution, since we'd like to be able to use multiple threads for reading the fragments. \n\nI'll keep investigating to see if there's a better solution.",
"Hi @lhoestq, may I ask if the current behaviour was expected by you folks and you don't think it needs solving, or should I keep on investigating a compromise between using multithreading / avoid unexpected behaviour? Thanks in advance :) ",
"Having the same issue. the code never stops executing. Using datasets 4.4.1\nTried with \"islice\" as well. When the streaming flag is True, the code doesn't end execution. On vs-code.",
"The issue on pyarrow side is here: https://github.com/apache/arrow/issues/45214 and the original issue in `datasets` here: https://github.com/huggingface/datasets/issues/7357\n\nIt would be cool to have a fix on the pyarrow side",
"Thank you very much @lhoestq, I'm reading the issue thread in pyarrow and realizing you've been raising awareness around this for a long time now. When I have some time I'll look at @pitrou's PR to see if I can get a better understanding of what's going on on pyarrow. "
] | 2025-11-24T06:22:53
| 2025-11-25T20:45:55
| null |
NONE
| null | null |
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
|
{
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
}
|
### Describe the bug
When downloading a dataset in streamed mode and exiting the program before the download completes, the Python program core dumps on exit:
```
terminate called without an active exception
Aborted (core dumped)
```
Tested with Python 3.12.3 and Python 3.9.21.
### Steps to reproduce the bug
Create a Python venv:
```bash
python -m venv venv
source venv/bin/activate
pip install datasets==4.4.1
```
Execute the following program:
```python
from datasets import load_dataset
ds = load_dataset("HuggingFaceFW/fineweb-2", 'hrv_Latn', split="test", streaming=True)
for sample in ds:
break
```
### Expected behavior
Clean program exit
### Environment info
described above
**note**: the example works correctly when using ```datasets==3.1.0```
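For reference, a runnable sketch of the single-threaded workaround mentioned in the comment thread (the local file is a stand-in for a streamed shard; `use_threads=False` trades read speed for a clean exit):
```python
import pyarrow as pa
import pyarrow.dataset as pads
import pyarrow.parquet as pq

# Write a small parquet file standing in for a streamed fineweb-2 shard.
pq.write_table(pa.table({"x": list(range(10))}), "/tmp/demo.parquet")
fragment = next(pads.dataset("/tmp/demo.parquet", format="parquet").get_fragments())
for batch in fragment.to_batches(
    batch_size=4,
    batch_readahead=0,
    fragment_readahead=0,
    use_threads=False,  # per the comment thread: no reader threads left behind
):
    break  # breaking early no longer leaves a background thread to abort at exit
```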
| null |
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7879/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/7879/timeline
| null | null | null | null | false
|
https://api.github.com/repos/huggingface/datasets/issues/7878
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/7878/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/7878/comments
|
https://api.github.com/repos/huggingface/datasets/issues/7878/events
|
https://github.com/huggingface/datasets/pull/7878
| 3,653,262,027
|
PR_kwDODunzps606R81
| 7,878
|
Replace papaya with niivue
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/31857876?v=4",
"events_url": "https://api.github.com/users/CloseChoice/events{/privacy}",
"followers_url": "https://api.github.com/users/CloseChoice/followers",
"following_url": "https://api.github.com/users/CloseChoice/following{/other_user}",
"gists_url": "https://api.github.com/users/CloseChoice/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/CloseChoice",
"id": 31857876,
"login": "CloseChoice",
"node_id": "MDQ6VXNlcjMxODU3ODc2",
"organizations_url": "https://api.github.com/users/CloseChoice/orgs",
"received_events_url": "https://api.github.com/users/CloseChoice/received_events",
"repos_url": "https://api.github.com/users/CloseChoice/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/CloseChoice/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/CloseChoice/subscriptions",
"type": "User",
"url": "https://api.github.com/users/CloseChoice",
"user_view_type": "public"
}
|
[] |
open
| false
| null |
[] | null |
[
"@CloseChoice thanks for your work on this. As you mentioned, the prime developers for Papaya have moved on, so it is in maintenance mode, albeit it is mature and may fill all your requirements. \r\n\r\nPapaya does reflect the era of its creation, so it uses WebGL1 (which only supports 2D textures) for display and pako for decompression. In contrast, NiiVue uses WebGL2 (where 3D textures provide a native representation for volumes) and compression streams (x4 decoding speed). A major benefit of 3D textures is simple support for 3D volume rendering using ray casting. Note the Papaya README shows an isosurface rendering based on a triangulated mesh. In contrast, NiiVue can show both volume rendering (good for data with fuzzy boundaries) as well as surface rendering (good when a clean isosurface can be defined). I think the [gallery](https://niivue.com/gallery) provides a nice example of NiiVue capabilities as well as minimal recipes.\r\n\r\nI do agree that Papaya UI is more advanced: by design NiiVue is a graphic widget that can be embedded into a container that provides your preferred user interface (React, Angular, Vue, pure html, or even jupyter notebooks). \r\n\r\nI think DICOM support is a challenge for any tool for several reasons: the diversity of the implementations and compression methods (transfer syntaxes), the fact that in classic DICOM each 2D slice is saved as a separate file (though note modern enhanced DICOM can save an entire 3D volume or even 4D timeseries in a single file), and the rate that this format has evolved over time. Papaya uses [Daikon](https://github.com/rii-mango/Daikon) to handle DICOM images, and I think it is only one file at a time. In contrast, NiiVue provides plugins for complex image formats, so you can choose your desired tool. We do provide illustrate how to use [dcm2niix WASM](https://github.com/niivue/niivue-dcm2niix) as a DICOM loader, and it can extract coherent volumes from a random assortment of files or a manifest of files - see the [live demo](https://github.com/niivue/niivue-dcm2niix). Note that diakon development has halted, while dcm2niix is actively maintained, which impacts support for emerging compression methods (e.g. JPEG2000-HT). Having said that, if your primary focus is DICOM, [cornerstonejs](https://www.cornerstonejs.org/) is probably a better choice than NiiVue or Papaya.\r\n\r\nAnother feature that may or may not be worth noting is that NiiVue has a plugin model that allows you to use a lot of mature image processing tools. So you can do image conversion, image processing (itk-wasm, niimath), image registration (flirt, elastix) and edge-based AI models. [brainchop](https://brainchop.org/) illustrates edge-based AI model inference for brain segmentation, extraction and parcellation, though we provide minimal examples for ONNX, tensorflowjs and tinygrad. This would provide a convenient way for huggingface inference models to be shared. After training, the models could be converted to ONNX and deployed on a web page, allowing the user to drag-and-drop images and process them regardless of operating system or graphics card manufacturer. Since the AI model inference leverages the users own graphics card, the privacy issues and hardware scaling concerns of cloud distribution are mitigated.\r\n\r\n\r\n\r\n\r\n\r\n\r\n\r\n",
"@neurolabusc thanks so much for the nuanced and informative reply.\r\nI am convinced that niivue is the better option here, having 3D support is huge and Papaya's UI features are actually not necessary at all, and AFAIS we can get what we need and more with some additional configuration for niivue as well.\r\nThanks a lot for words about DICOM, though the focus of this PR is not NifTI and not DICOM, I think having one tool being able to load both (and potentially more formats) is best, I'll definitely test the live demo. My primary interest in your thoughts about DICOM is to enable visualization as a follow-up to this PR #https://github.com/huggingface/datasets/pull/7835. Even for the DICOM case NiiVue seems like a great option using the [dcm2niix](https://github.com/niivue/niivue-dcm2niix) webassembly plugin, I think the main challenge is here how we let the user organize files in an intuitive way (e.g. provide DICOM folder class, and a DICOM document class where one folder can contain multiple documents and 3d visualization is on the folder level). \r\n\r\nGiven that NiiVue is a modern neuroimaging viewer, well maintained and widely used and we have @neurolabusc attention in case of questions/problems I think we should go ahead with NiiVue.\r\n\r\n@lhoestq your thoughts are highly appreciated.",
"Following the @neurolabusc 's suggestion I updated to [ipyniivue](https://github.com/niivue/ipyniivue?tab=readme-ov-file) which helps so that we don't need to bother with javascript and speeds up load times since ipyniivue comes with a bundled niivue version and therefore avoids to download. Since DICOM is out of the picture for now, I consider this ready to be reviewed.",
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_7878). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update."
] | 2025-11-21T22:19:56
| 2025-11-27T11:08:44
| null |
CONTRIBUTOR
| null | null | null | null |
I was contacted by Chris Rorden, whose group is developing NiiVue (see https://github.com/niivue/niivue), which leverages WebGL2 (in contrast to Papaya, which is WebGL1-based). He also offered support with the implementation, which might come in handy in case of any questions later on (see the DICOM implementation). I completely overlooked NiiVue when searching for frameworks.
Development speed, or lack thereof, was already mentioned as a potential risk with Papaya. NiiVue is actively maintained; simply compare these two contribution charts:
NiiVue:
<img width="920" height="378" alt="image" src="https://github.com/user-attachments/assets/37a0a256-60aa-4758-bb07-97e421c68ae1" />
Papaya:
<img width="920" height="378" alt="image" src="https://github.com/user-attachments/assets/1e1cf0c9-ec0a-4ffc-ae03-a79ea12bcb3b" />
I gave NiiVue a try and it supports all the features Papaya does, though I find Papaya's UI slightly more appealing, but that is just personal taste. There is also a 3D image of the scanned object included in the NiiVue UI, but that is possible for Papaya as well (at least in some way; check the image in the README.md of their GitHub repo).
```python
from datasets import load_dataset
# new dataset compared to papaya PR, this has more interesting images
ds = load_dataset("TobiasPitters/nifti-papaya-testdata",
split="train")
ds[1]['nifti'] # ds[2]['nifti'] is also interesting
```
Here's a brief video of how this looks with NiiVue: https://github.com/user-attachments/assets/3f2a52d4-2109-45e2-aca8-e4a4b1e46b32
NOTE: I explicitly created this as a draft PR since I suspect DICOM support is a crucial factor in deciding which of these two is better suited for our needs. DICOM is supported by Papaya, and by NiiVue as well via a plugin, but as far as I understand, one DICOM file contains one 2D image, so support for loading a whole folder containing all 2D layers of a complete 3D image is desired. NiiVue supports this according to their docs; I am unsure about Papaya.
| null |
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7878/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/7878/timeline
| null | null | 0
|
{
"diff_url": "https://github.com/huggingface/datasets/pull/7878.diff",
"html_url": "https://github.com/huggingface/datasets/pull/7878",
"merged_at": null,
"patch_url": "https://github.com/huggingface/datasets/pull/7878.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/7878"
}
| true
|
https://api.github.com/repos/huggingface/datasets/issues/7877
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/7877/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/7877/comments
|
https://api.github.com/repos/huggingface/datasets/issues/7877/events
|
https://github.com/huggingface/datasets/issues/7877
| 3,652,906,788
|
I_kwDODunzps7Zuu8k
| 7,877
|
work around `tempfile` silently ignoring `TMPDIR` if the dir doesn't exist
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/10676103?v=4",
"events_url": "https://api.github.com/users/stas00/events{/privacy}",
"followers_url": "https://api.github.com/users/stas00/followers",
"following_url": "https://api.github.com/users/stas00/following{/other_user}",
"gists_url": "https://api.github.com/users/stas00/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/stas00",
"id": 10676103,
"login": "stas00",
"node_id": "MDQ6VXNlcjEwNjc2MTAz",
"organizations_url": "https://api.github.com/users/stas00/orgs",
"received_events_url": "https://api.github.com/users/stas00/received_events",
"repos_url": "https://api.github.com/users/stas00/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/stas00/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/stas00/subscriptions",
"type": "User",
"url": "https://api.github.com/users/stas00",
"user_view_type": "public"
}
|
[] |
open
| false
| null |
[] | null |
[] | 2025-11-21T19:51:48
| 2025-11-21T19:51:48
| null |
CONTRIBUTOR
| null | null |
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
|
{
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
}
|
This should help a lot of users running into `No space left on device` while using `datasets`. Normally the issue is that `/tmp` is too small and the user needs to use another path, which they would normally set as `export TMPDIR=/some/big/storage`.
However, the `tempfile` facility that `datasets` and `pyarrow` use is somewhat broken. If the path doesn't exist, it silently ignores it and falls back to `/tmp`. Watch this:
```bash
$ export TMPDIR='/tmp/username'
$ python -c "\
import os
import tempfile
print(os.environ['TMPDIR'])
print(tempfile.gettempdir())"
/tmp/username
/tmp
```
Now let's ensure the path exists:
```bash
$ export TMPDIR='/tmp/username'
$ mkdir -p $TMPDIR
$ python -c "\
import os
import tempfile
print(os.environ['TMPDIR'])
print(tempfile.gettempdir())"
/tmp/username
/tmp/username
```
So I recommend `datasets` do either of the two:
1. assert if `$TMPDIR` dir doesn't exist, telling the user to create it
2. auto-create it
The reason for (1) is that I don't know why `tempfile` doesn't auto-create the dir; perhaps there is some security implication? I will let you guys make the decision, but the key is not to let things silently fall through, with the user puzzling over why, no matter what they do, they can't get past `No space left on device` while using `datasets` (see the sketch below).
Thank you.
I found this via https://stackoverflow.com/questions/37229398/python-tempfile-gettempdir-does-not-respect-tmpdir while trying to help a colleague to solve this exact issue.
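A minimal sketch of option (2), assuming it runs before anything in the process first calls `tempfile.gettempdir()` (which caches its result):
```python
import os
import tempfile

tmpdir = os.environ.get("TMPDIR")
if tmpdir is not None and not os.path.isdir(tmpdir):
    # Option (2): create the directory so tempfile honors the setting.
    os.makedirs(tmpdir, exist_ok=True)
    # Option (1) would instead raise, telling the user to create it:
    # raise RuntimeError(f"TMPDIR={tmpdir!r} does not exist; create it first")

print(tempfile.gettempdir())  # now reports $TMPDIR instead of /tmp
```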
| null |
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7877/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/7877/timeline
| null | null | null | null | false
|
https://api.github.com/repos/huggingface/datasets/issues/7876
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/7876/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/7876/comments
|
https://api.github.com/repos/huggingface/datasets/issues/7876/events
|
https://github.com/huggingface/datasets/pull/7876
| 3,652,170,832
|
PR_kwDODunzps602lac
| 7,876
|
test: add verification for HuggingFaceM4/InterleavedWebDocuments
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/122142345?v=4",
"events_url": "https://api.github.com/users/venkatsai2004/events{/privacy}",
"followers_url": "https://api.github.com/users/venkatsai2004/followers",
"following_url": "https://api.github.com/users/venkatsai2004/following{/other_user}",
"gists_url": "https://api.github.com/users/venkatsai2004/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/venkatsai2004",
"id": 122142345,
"login": "venkatsai2004",
"node_id": "U_kgDOB0e-iQ",
"organizations_url": "https://api.github.com/users/venkatsai2004/orgs",
"received_events_url": "https://api.github.com/users/venkatsai2004/received_events",
"repos_url": "https://api.github.com/users/venkatsai2004/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/venkatsai2004/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/venkatsai2004/subscriptions",
"type": "User",
"url": "https://api.github.com/users/venkatsai2004",
"user_view_type": "public"
}
|
[] |
open
| false
| null |
[] | null |
[] | 2025-11-21T15:42:09
| 2025-11-21T15:42:09
| null |
NONE
| null | null | null | null |
Adds an integration test for the `HuggingFaceM4/InterleavedWebDocuments` dataset.
- Gracefully skips if the dataset is not yet available on the Hub
- Checks basic loading and structure once it becomes available
Closes #7394
First-time contributor to `datasets` — really excited about this! Happy to make any adjustments needed. 🙂
| null |
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7876/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/7876/timeline
| null | null | 0
|
{
"diff_url": "https://github.com/huggingface/datasets/pull/7876.diff",
"html_url": "https://github.com/huggingface/datasets/pull/7876",
"merged_at": null,
"patch_url": "https://github.com/huggingface/datasets/pull/7876.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/7876"
}
| true
|
https://api.github.com/repos/huggingface/datasets/issues/7875
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/7875/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/7875/comments
|
https://api.github.com/repos/huggingface/datasets/issues/7875/events
|
https://github.com/huggingface/datasets/pull/7875
| 3,649,326,175
|
PR_kwDODunzps60s9my
| 7,875
|
Add quickstart example to datasets README
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/101023542?v=4",
"events_url": "https://api.github.com/users/hajermabrouk/events{/privacy}",
"followers_url": "https://api.github.com/users/hajermabrouk/followers",
"following_url": "https://api.github.com/users/hajermabrouk/following{/other_user}",
"gists_url": "https://api.github.com/users/hajermabrouk/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/hajermabrouk",
"id": 101023542,
"login": "hajermabrouk",
"node_id": "U_kgDOBgV_Ng",
"organizations_url": "https://api.github.com/users/hajermabrouk/orgs",
"received_events_url": "https://api.github.com/users/hajermabrouk/received_events",
"repos_url": "https://api.github.com/users/hajermabrouk/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/hajermabrouk/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/hajermabrouk/subscriptions",
"type": "User",
"url": "https://api.github.com/users/hajermabrouk",
"user_view_type": "public"
}
|
[] |
open
| false
| null |
[] | null |
[] | 2025-11-20T22:13:52
| 2025-11-20T22:13:52
| null |
NONE
| null | null | null | null | null | null |
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7875/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/7875/timeline
| null | null | 0
|
{
"diff_url": "https://github.com/huggingface/datasets/pull/7875.diff",
"html_url": "https://github.com/huggingface/datasets/pull/7875",
"merged_at": null,
"patch_url": "https://github.com/huggingface/datasets/pull/7875.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/7875"
}
| true
|
https://api.github.com/repos/huggingface/datasets/issues/7874
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/7874/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/7874/comments
|
https://api.github.com/repos/huggingface/datasets/issues/7874/events
|
https://github.com/huggingface/datasets/pull/7874
| 3,644,558,046
|
PR_kwDODunzps60c4sg
| 7,874
|
Nifti visualization support
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/31857876?v=4",
"events_url": "https://api.github.com/users/CloseChoice/events{/privacy}",
"followers_url": "https://api.github.com/users/CloseChoice/followers",
"following_url": "https://api.github.com/users/CloseChoice/following{/other_user}",
"gists_url": "https://api.github.com/users/CloseChoice/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/CloseChoice",
"id": 31857876,
"login": "CloseChoice",
"node_id": "MDQ6VXNlcjMxODU3ODc2",
"organizations_url": "https://api.github.com/users/CloseChoice/orgs",
"received_events_url": "https://api.github.com/users/CloseChoice/received_events",
"repos_url": "https://api.github.com/users/CloseChoice/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/CloseChoice/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/CloseChoice/subscriptions",
"type": "User",
"url": "https://api.github.com/users/CloseChoice",
"user_view_type": "public"
}
|
[] |
closed
| false
| null |
[] | null |
[
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_7874). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.",
"I tested in Colab and it works perfectly :) now I want to add `_repr_html_` everywhere xD\r\n\r\nRe: testing, I think it's fine to test manually such features"
] | 2025-11-19T21:56:56
| 2025-11-21T12:41:43
| 2025-11-21T12:31:18
|
CONTRIBUTOR
| null | null | null | null |
closes #7870
This PR leverages Papaya to visualize NIfTI images. For this I created a wrapper class for `nibabel.nifti1.Nifti1Image` that provides the same interface but exposes an additional `_repr_html_` method, which is needed to visualize the image in Jupyter (I didn't test in Colab, but that should work equivalently).
Code to test (execute in a notebook):
```python
from datasets import load_dataset
ds = load_dataset("TobiasPitters/nifti-nitest-extracted",
split="train")
image = ds[1]
image
```
Here's a small video, not the most exciting scan though:
https://github.com/user-attachments/assets/1cca5f01-6fd2-48ef-a4d7-a92c1259c224
I am open to good ways to test this.
EDIT: Papaya also supports DICOM; I didn't test it yet though.
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq",
"user_view_type": "public"
}
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7874/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/7874/timeline
| null | null | 0
|
{
"diff_url": "https://github.com/huggingface/datasets/pull/7874.diff",
"html_url": "https://github.com/huggingface/datasets/pull/7874",
"merged_at": "2025-11-21T12:31:18Z",
"patch_url": "https://github.com/huggingface/datasets/pull/7874.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/7874"
}
| true
|
https://api.github.com/repos/huggingface/datasets/issues/7873
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/7873/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/7873/comments
|
https://api.github.com/repos/huggingface/datasets/issues/7873/events
|
https://github.com/huggingface/datasets/pull/7873
| 3,643,993,705
|
PR_kwDODunzps60a_IZ
| 7,873
|
Fix chunk casting and schema unification in dataset
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/142811259?v=4",
"events_url": "https://api.github.com/users/ArjunJagdale/events{/privacy}",
"followers_url": "https://api.github.com/users/ArjunJagdale/followers",
"following_url": "https://api.github.com/users/ArjunJagdale/following{/other_user}",
"gists_url": "https://api.github.com/users/ArjunJagdale/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/ArjunJagdale",
"id": 142811259,
"login": "ArjunJagdale",
"node_id": "U_kgDOCIMgew",
"organizations_url": "https://api.github.com/users/ArjunJagdale/orgs",
"received_events_url": "https://api.github.com/users/ArjunJagdale/received_events",
"repos_url": "https://api.github.com/users/ArjunJagdale/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/ArjunJagdale/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ArjunJagdale/subscriptions",
"type": "User",
"url": "https://api.github.com/users/ArjunJagdale",
"user_view_type": "public"
}
|
[] |
open
| false
| null |
[] | null |
[
"\r\n@lhoestq would like to hear from you!\r\n"
] | 2025-11-19T18:43:47
| 2025-11-22T19:51:30
| null |
CONTRIBUTOR
| null | null | null | null |
Updated chunk handling to cast to the expected schema when features are provided, or to unify schemas when they are not. This ensures proper schema alignment for the yielded batches.
fixes #7872
This PR fixes a bug where an `IterableDataset` created from a generator with an explicit `features` parameter would fail during arrow operations (like `.to_pandas()`) when the data contains missing or null values.
## Problem
When an `IterableDataset` is created with explicit features but the generator yields data with missing values (e.g., empty lists), PyArrow would infer different schemas for different batches based on the actual data rather than using the provided schema. This caused `ArrowInvalid` errors when trying to concatenate batches with mismatched schemas.
### Example error:
```python
pyarrow.lib.ArrowInvalid: Schema at index 1 was different:
a: int64
b: list<item: null>
vs
a: int64
b: list<item: struct<c: int64>>
```
## Solution
Modified `RebatchedArrowExamplesIterable._iter_arrow()` to:
1. Cast chunks to the expected schema when explicit features are provided
2. Unify schemas across chunks when no explicit features are set
3. Gracefully handle cast failures by falling back to the original chunk
This ensures that the user-provided schema is respected throughout the iteration process.
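For illustration (the function name is hypothetical and this is not the exact code in `iterable_dataset.py`), the casting step boils down to:
```python
import pyarrow as pa

def cast_chunk(chunk: pa.RecordBatch, expected_schema: pa.Schema) -> pa.RecordBatch:
    """Align a batch with the schema derived from user-provided features."""
    try:
        # Steps 1/2: cast to the expected (or unified) schema;
        # RecordBatch.cast is available in recent pyarrow.
        return chunk.cast(expected_schema)
    except (pa.ArrowInvalid, pa.ArrowNotImplementedError):
        # Step 3: gracefully fall back to the original chunk if the cast fails.
        return chunk
```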
## Testing
Verified the fix with the following test case:
```python
import datasets
from datasets import features
def test_to_pandas_works_with_explicit_schema():
    common_features = features.Features(
        {
            "a": features.Value("int64"),
            "b": features.List({"c": features.Value("int64")}),
        }
    )

    def row_generator():
        data = [{"a": 1, "b": []}, {"a": 1, "b": [{"c": 1}]}]
        for row in data:
            yield row

    d = datasets.IterableDataset.from_generator(row_generator, features=common_features)
    print("Iterating…")
    for _ in d.to_pandas():
        pass
test_to_pandas_works_with_explicit_schema()
```
Before the patch:
```
@ArjunJagdale ➜ /workspaces/datasets (main) $ python test_arjun.py
Iterating…
Traceback (most recent call last):
File "/workspaces/datasets/test_arjun.py", line 24, in <module>
test_to_pandas_works_with_explicit_schema()
File "/workspaces/datasets/test_arjun.py", line 21, in test_to_pandas_works_with_explicit_schema
for _ in d.to_pandas():
File "/workspaces/datasets/src/datasets/iterable_dataset.py", line 3736, in to_pandas
table = pa.concat_tables(list(self.with_format("arrow").iter(batch_size=1000)))
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/workspaces/datasets/src/datasets/iterable_dataset.py", line 2596, in iter
for key, pa_table in iterator:
File "/workspaces/datasets/src/datasets/iterable_dataset.py", line 2111, in _iter_arrow
for key, pa_table in self.ex_iterable._iter_arrow():
File "/workspaces/datasets/src/datasets/iterable_dataset.py", line 632, in _iter_arrow
yield new_key, pa.Table.from_batches(chunks_buffer)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "pyarrow/table.pxi", line 5039, in pyarrow.lib.Table.from_batches
File "pyarrow/error.pxi", line 155, in pyarrow.lib.pyarrow_internal_check_status
File "pyarrow/error.pxi", line 92, in pyarrow.lib.check_status
pyarrow.lib.ArrowInvalid: Schema at index 1 was different:
a: int64
b: list<item: null>
vs
a: int64
b: list<item: struct<c: int64>>
```
After the patch:
```
@ArjunJagdale ➜ /workspaces/datasets (main) $ python test_arjun.py
Iterating…
@ArjunJagdale ➜ /workspaces/datasets (main) $
```
| null |
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7873/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/7873/timeline
| null | null | 0
|
{
"diff_url": "https://github.com/huggingface/datasets/pull/7873.diff",
"html_url": "https://github.com/huggingface/datasets/pull/7873",
"merged_at": null,
"patch_url": "https://github.com/huggingface/datasets/pull/7873.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/7873"
}
| true
|
https://api.github.com/repos/huggingface/datasets/issues/7872
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/7872/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/7872/comments
|
https://api.github.com/repos/huggingface/datasets/issues/7872/events
|
https://github.com/huggingface/datasets/issues/7872
| 3,643,681,893
|
I_kwDODunzps7ZLixl
| 7,872
|
IterableDataset does not use features information in to_pandas
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/790640?v=4",
"events_url": "https://api.github.com/users/bonext/events{/privacy}",
"followers_url": "https://api.github.com/users/bonext/followers",
"following_url": "https://api.github.com/users/bonext/following{/other_user}",
"gists_url": "https://api.github.com/users/bonext/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/bonext",
"id": 790640,
"login": "bonext",
"node_id": "MDQ6VXNlcjc5MDY0MA==",
"organizations_url": "https://api.github.com/users/bonext/orgs",
"received_events_url": "https://api.github.com/users/bonext/received_events",
"repos_url": "https://api.github.com/users/bonext/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/bonext/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/bonext/subscriptions",
"type": "User",
"url": "https://api.github.com/users/bonext",
"user_view_type": "public"
}
|
[] |
open
| false
| null |
[] | null |
[
"Created A PR!",
"Another test script that can be used to test the behavior - \n\n```\nimport datasets\nfrom datasets import features\n\ndef test_crash():\n common_features = features.Features({\n \"a\": features.Value(\"int64\"),\n \"b\": features.List({\"c\": features.Value(\"int64\")}),\n })\n\n def row_generator():\n yield {\"a\": 1, \"b\": []}\n yield {\"a\": 1, \"b\": [{\"c\": 1}]}\n\n d = datasets.IterableDataset.from_generator(row_generator, features=common_features)\n\n list(d.to_pandas()) # <-- this triggers the crash\n\n```"
] | 2025-11-19T17:12:59
| 2025-11-19T18:52:14
| null |
NONE
| null | null |
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
|
{
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
}
|
### Describe the bug
An `IterableDataset` created from a generator with an explicit `features=` parameter seems to ignore the provided features description for certain operations, e.g. `.to_pandas(...)`, when data coming from the generator has missing values.
### Steps to reproduce the bug
```python
import datasets
from datasets import features
def test_to_pandas_works_with_explicit_schema():
    common_features = features.Features(
        {
            "a": features.Value("int64"),
            "b": features.List({"c": features.Value("int64")}),
        }
    )

    def row_generator():
        data = [{"a": 1, "b": []}, {"a": 1, "b": [{"c": 1}]}]
        for row in data:
            yield row

    d = datasets.IterableDataset.from_generator(row_generator, features=common_features)
    for _ in d.to_pandas():
        pass
# _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
# .venv/lib/python3.13/site-packages/datasets/iterable_dataset.py:3703: in to_pandas
# table = pa.concat_tables(list(self.with_format("arrow").iter(batch_size=1000)))
# ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
# .venv/lib/python3.13/site-packages/datasets/iterable_dataset.py:2563: in iter
# for key, pa_table in iterator:
# ^^^^^^^^
# .venv/lib/python3.13/site-packages/datasets/iterable_dataset.py:2078: in _iter_arrow
# for key, pa_table in self.ex_iterable._iter_arrow():
# ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
# .venv/lib/python3.13/site-packages/datasets/iterable_dataset.py:599: in _iter_arrow
# yield new_key, pa.Table.from_batches(chunks_buffer)
# ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
# pyarrow/table.pxi:5039: in pyarrow.lib.Table.from_batches
# ???
# pyarrow/error.pxi:155: in pyarrow.lib.pyarrow_internal_check_status
# ???
# _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
# > ???
# E pyarrow.lib.ArrowInvalid: Schema at index 1 was different:
# E a: int64
# E b: list<item: null>
# E vs
# E a: int64
# E b: list<item: struct<c: int64>>
# pyarrow/error.pxi:92: ArrowInvalid
```
### Expected behavior
Arrow operations should use the schema provided through `features=`, not the one inferred from the data.
### Environment info
- datasets version: 4.4.1
- Platform: macOS-15.7.1-arm64-arm-64bit-Mach-O
- Python version: 3.13.1
- huggingface_hub version: 1.1.4
- PyArrow version: 22.0.0
- Pandas version: 2.3.3
- fsspec version: 2025.10.0
| null |
{
"+1": 1,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 1,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7872/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/7872/timeline
| null | null | null | null | false
|
https://api.github.com/repos/huggingface/datasets/issues/7871
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/7871/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/7871/comments
|
https://api.github.com/repos/huggingface/datasets/issues/7871/events
|
https://github.com/huggingface/datasets/issues/7871
| 3,643,607,371
|
I_kwDODunzps7ZLQlL
| 7,871
|
Reqwest Error: HTTP status client error (429 Too Many Requests)
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/26405281?v=4",
"events_url": "https://api.github.com/users/yanan1116/events{/privacy}",
"followers_url": "https://api.github.com/users/yanan1116/followers",
"following_url": "https://api.github.com/users/yanan1116/following{/other_user}",
"gists_url": "https://api.github.com/users/yanan1116/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/yanan1116",
"id": 26405281,
"login": "yanan1116",
"node_id": "MDQ6VXNlcjI2NDA1Mjgx",
"organizations_url": "https://api.github.com/users/yanan1116/orgs",
"received_events_url": "https://api.github.com/users/yanan1116/received_events",
"repos_url": "https://api.github.com/users/yanan1116/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/yanan1116/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/yanan1116/subscriptions",
"type": "User",
"url": "https://api.github.com/users/yanan1116",
"user_view_type": "public"
}
|
[] |
open
| false
| null |
[] | null |
[
"the dataset repo: `https://huggingface.co/datasets/nvidia/PhysicalAI-Robotics-GR00T-X-Embodiment-Sim`"
] | 2025-11-19T16:52:24
| 2025-11-19T16:53:07
| null |
NONE
| null | null |
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
|
{
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
}
|
### Describe the bug
Full error message:
```
Traceback (most recent call last):
File "/home/yanan/miniconda3/bin/hf", line 7, in <module>
sys.exit(main())
~~~~^^
File "/home/yanan/miniconda3/lib/python3.13/site-packages/huggingface_hub/cli/hf.py", line 56, in main
app()
~~~^^
File "/home/yanan/miniconda3/lib/python3.13/site-packages/typer/main.py", line 327, in __call__
raise e
File "/home/yanan/miniconda3/lib/python3.13/site-packages/typer/main.py", line 310, in __call__
return get_command(self)(*args, **kwargs)
~~~~~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^
File "/home/yanan/miniconda3/lib/python3.13/site-packages/click/core.py", line 1161, in __call__
return self.main(*args, **kwargs)
~~~~~~~~~^^^^^^^^^^^^^^^^^
File "/home/yanan/miniconda3/lib/python3.13/site-packages/typer/core.py", line 803, in main
return _main(
self,
...<6 lines>...
**extra,
)
File "/home/yanan/miniconda3/lib/python3.13/site-packages/typer/core.py", line 192, in _main
rv = self.invoke(ctx)
File "/home/yanan/miniconda3/lib/python3.13/site-packages/click/core.py", line 1697, in invoke
return _process_result(sub_ctx.command.invoke(sub_ctx))
~~~~~~~~~~~~~~~~~~~~~~^^^^^^^^^
File "/home/yanan/miniconda3/lib/python3.13/site-packages/click/core.py", line 1443, in invoke
return ctx.invoke(self.callback, **ctx.params)
~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/yanan/miniconda3/lib/python3.13/site-packages/click/core.py", line 788, in invoke
return __callback(*args, **kwargs)
File "/home/yanan/miniconda3/lib/python3.13/site-packages/typer/main.py", line 691, in wrapper
return callback(**use_params)
File "/home/yanan/miniconda3/lib/python3.13/site-packages/huggingface_hub/cli/download.py", line 188, in download
_print_result(run_download())
~~~~~~~~~~~~^^
File "/home/yanan/miniconda3/lib/python3.13/site-packages/huggingface_hub/cli/download.py", line 149, in run_download
return snapshot_download(
repo_id=repo_id,
...<10 lines>...
dry_run=dry_run,
)
File "/home/yanan/miniconda3/lib/python3.13/site-packages/huggingface_hub/utils/_validators.py", line 89, in _inner_fn
return fn(*args, **kwargs)
File "/home/yanan/miniconda3/lib/python3.13/site-packages/huggingface_hub/_snapshot_download.py", line 451, in snapshot_download
thread_map(
~~~~~~~~~~^
_inner_hf_hub_download,
^^^^^^^^^^^^^^^^^^^^^^^
...<3 lines>...
tqdm_class=tqdm_class,
^^^^^^^^^^^^^^^^^^^^^^
)
^
File "/home/yanan/miniconda3/lib/python3.13/site-packages/tqdm/contrib/concurrent.py", line 69, in thread_map
return _executor_map(ThreadPoolExecutor, fn, *iterables, **tqdm_kwargs)
File "/home/yanan/miniconda3/lib/python3.13/site-packages/tqdm/contrib/concurrent.py", line 51, in _executor_map
return list(tqdm_class(ex.map(fn, *iterables, chunksize=chunksize), **kwargs))
File "/home/yanan/miniconda3/lib/python3.13/site-packages/tqdm/std.py", line 1181, in __iter__
for obj in iterable:
^^^^^^^^
File "/home/yanan/miniconda3/lib/python3.13/concurrent/futures/_base.py", line 619, in result_iterator
yield _result_or_cancel(fs.pop())
~~~~~~~~~~~~~~~~~^^^^^^^^^^
File "/home/yanan/miniconda3/lib/python3.13/concurrent/futures/_base.py", line 317, in _result_or_cancel
return fut.result(timeout)
~~~~~~~~~~^^^^^^^^^
File "/home/yanan/miniconda3/lib/python3.13/concurrent/futures/_base.py", line 449, in result
return self.__get_result()
~~~~~~~~~~~~~~~~~^^
File "/home/yanan/miniconda3/lib/python3.13/concurrent/futures/_base.py", line 401, in __get_result
raise self._exception
File "/home/yanan/miniconda3/lib/python3.13/concurrent/futures/thread.py", line 59, in run
result = self.fn(*self.args, **self.kwargs)
File "/home/yanan/miniconda3/lib/python3.13/site-packages/huggingface_hub/_snapshot_download.py", line 431, in _inner_hf_hub_download
hf_hub_download( # type: ignore
~~~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^
repo_id,
^^^^^^^^
...<14 lines>...
dry_run=dry_run,
^^^^^^^^^^^^^^^^
)
^
File "/home/yanan/miniconda3/lib/python3.13/site-packages/huggingface_hub/utils/_validators.py", line 89, in _inner_fn
return fn(*args, **kwargs)
File "/home/yanan/miniconda3/lib/python3.13/site-packages/huggingface_hub/file_download.py", line 986, in hf_hub_download
return _hf_hub_download_to_local_dir(
# Destination
...<16 lines>...
dry_run=dry_run,
)
File "/home/yanan/miniconda3/lib/python3.13/site-packages/huggingface_hub/file_download.py", line 1390, in _hf_hub_download_to_local_dir
_download_to_tmp_and_move(
~~~~~~~~~~~~~~~~~~~~~~~~~^
incomplete_path=paths.incomplete_path(etag),
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
...<8 lines>...
tqdm_class=tqdm_class,
^^^^^^^^^^^^^^^^^^^^^^
)
^
File "/home/yanan/miniconda3/lib/python3.13/site-packages/huggingface_hub/file_download.py", line 1791, in _download_to_tmp_and_move
xet_get(
~~~~~~~^
incomplete_path=incomplete_path,
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
...<4 lines>...
tqdm_class=tqdm_class,
^^^^^^^^^^^^^^^^^^^^^^
)
^
File "/home/yanan/miniconda3/lib/python3.13/site-packages/huggingface_hub/file_download.py", line 571, in xet_get
download_files(
~~~~~~~~~~~~~~^
xet_download_info,
^^^^^^^^^^^^^^^^^^
...<3 lines>...
progress_updater=[progress_updater],
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
)
^
RuntimeError: Data processing error: CAS service error : Reqwest Error: HTTP status client error (429 Too Many Requests), domain: https://cas-server.xethub.hf.co/reconstructions/04b8a4667b84b3b874a6a2f070cec88920f6289e71185d69fa87e3cf29834710
```
### Steps to reproduce the bug
My command:
```bash
hf download nvidia/PhysicalAI-Robotics-GR00T-X-Embodiment-Sim --repo-type dataset --include "single_panda_gripper.CoffeePressButton/**" --local-dir /home/yanan/robotics/Isaac-GR00T/gr00t_dataset_official/
```
### Expected behavior
I expect the data to download without any issues.
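Not part of the original report: a minimal Python sketch of one possible mitigation, assuming the 429 comes from too many concurrent requests. `snapshot_download` exposes a `max_workers` argument that can be lowered:
```python
from huggingface_hub import snapshot_download

# Hypothetical mitigation: fewer concurrent workers means fewer simultaneous
# requests to the CAS server, which may avoid the 429 rate limit.
snapshot_download(
    repo_id="nvidia/PhysicalAI-Robotics-GR00T-X-Embodiment-Sim",
    repo_type="dataset",
    allow_patterns=["single_panda_gripper.CoffeePressButton/**"],
    local_dir="/home/yanan/robotics/Isaac-GR00T/gr00t_dataset_official/",
    max_workers=2,  # default is 8; lower to reduce request concurrency
)
```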
### Environment info
huggingface_hub 1.1.4
| null |
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7871/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/7871/timeline
| null | null | null | null | false
|
https://api.github.com/repos/huggingface/datasets/issues/7870
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/7870/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/7870/comments
|
https://api.github.com/repos/huggingface/datasets/issues/7870/events
|
https://github.com/huggingface/datasets/issues/7870
| 3,642,209,953
|
I_kwDODunzps7ZF7ah
| 7,870
|
Visualization for Medical Imaging Datasets
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/31857876?v=4",
"events_url": "https://api.github.com/users/CloseChoice/events{/privacy}",
"followers_url": "https://api.github.com/users/CloseChoice/followers",
"following_url": "https://api.github.com/users/CloseChoice/following{/other_user}",
"gists_url": "https://api.github.com/users/CloseChoice/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/CloseChoice",
"id": 31857876,
"login": "CloseChoice",
"node_id": "MDQ6VXNlcjMxODU3ODc2",
"organizations_url": "https://api.github.com/users/CloseChoice/orgs",
"received_events_url": "https://api.github.com/users/CloseChoice/received_events",
"repos_url": "https://api.github.com/users/CloseChoice/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/CloseChoice/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/CloseChoice/subscriptions",
"type": "User",
"url": "https://api.github.com/users/CloseChoice",
"user_view_type": "public"
}
|
[] |
closed
| false
| null |
[] | null |
[
"It would be amazing to be able to show the Papaya UI in google colab / jupyter notebook. IIRC both allow serving javascript via nbextensions that we can surely use in HTML() objects.\n\nAlternatively we could also start with a simple approach and dump the medical image data as a video file that goes through the slices, so we don't need javascript."
] | 2025-11-19T11:05:39
| 2025-11-21T12:31:19
| 2025-11-21T12:31:19
|
CONTRIBUTOR
| null | null |
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
|
{
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
}
|
This is a followup to: https://github.com/huggingface/datasets/pull/7815.
I checked the possibilities to visualize the nifti (and potentially dicom), and here's what I found:
- https://github.com/aces/brainbrowser, AGPL3 license, last commit 3 months ago, latest (github) release from 2017. It's available on jsdelivr: https://www.jsdelivr.com/package/npm/brainbrowser (but that is from 2015!)
- https://github.com/rii-mango/Papaya, custom but BSD-style license that would require datasets to list the conditions somewhere in its readme, last commit June 2024. I looked into this library and it looks mature and good enough for our use case. Working on it only briefly, I wasn't able to get it running, but I'm sure we could, though it would probably require some JS on datasets' end. Available on jsdelivr as well: https://www.jsdelivr.com/package/npm/papaya-viewer. It seems to be frequently loaded.
- https://github.com/hanayik/niivue, BSD3 license, last commit May 26, 2021. Archived. Doesn't look like an option.
I think the only real option for us is Papaya, but there is also the risk that we'll end up with an unmaintained package after a while, since development seems to be slow or even halted.
I think conceptually we would need to figure out how we can build a good solution for visualizing Medical Image data. In shap, we have a separate javascript folder in which we render visualizations; this could be a blueprint but would require a bundler, etc. Alternatively, one could go with a naive approach and just write some HTML code in a Python string that loads the package via jsdelivr (a sketch of this follows below).
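A naive sketch of that second approach for a notebook; the jsdelivr paths and the `data-params` usage are assumptions based on Papaya's docs, not verified:
```python
# Hypothetical "HTML string + jsdelivr" approach: load Papaya from the CDN and
# point it at a NIfTI file, rendered inline in a Jupyter cell.
from IPython.display import HTML

papaya_html = """
<link rel="stylesheet" href="https://cdn.jsdelivr.net/npm/papaya-viewer/release/current/standard/papaya.css">
<script src="https://cdn.jsdelivr.net/npm/papaya-viewer/release/current/standard/papaya.js"></script>
<div class="papaya" data-params='{"images": ["https://example.com/scan.nii.gz"]}'></div>
"""
HTML(papaya_html)  # renders the viewer in the notebook output
```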
@lhoestq thoughts?
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq",
"user_view_type": "public"
}
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7870/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/7870/timeline
| null |
completed
| null | null | false
|
https://api.github.com/repos/huggingface/datasets/issues/7869
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/7869/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/7869/comments
|
https://api.github.com/repos/huggingface/datasets/issues/7869/events
|
https://github.com/huggingface/datasets/issues/7869
| 3,636,808,734
|
I_kwDODunzps7YxUwe
| 7,869
|
Why does dataset merge fail when tools have different parameters?
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/116297296?v=4",
"events_url": "https://api.github.com/users/hitszxs/events{/privacy}",
"followers_url": "https://api.github.com/users/hitszxs/followers",
"following_url": "https://api.github.com/users/hitszxs/following{/other_user}",
"gists_url": "https://api.github.com/users/hitszxs/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/hitszxs",
"id": 116297296,
"login": "hitszxs",
"node_id": "U_kgDOBu6OUA",
"organizations_url": "https://api.github.com/users/hitszxs/orgs",
"received_events_url": "https://api.github.com/users/hitszxs/received_events",
"repos_url": "https://api.github.com/users/hitszxs/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/hitszxs/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/hitszxs/subscriptions",
"type": "User",
"url": "https://api.github.com/users/hitszxs",
"user_view_type": "public"
}
|
[] |
open
| false
| null |
[] | null |
[] | 2025-11-18T08:33:04
| 2025-11-18T08:33:04
| null |
NONE
| null | null |
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
|
{
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
}
|
Hi, I have a question about SFT (Supervised Fine-tuning) for an agent model.
Suppose I want to fine-tune an agent model that may receive two different tools: tool1 and tool2. These tools have different parameters and types in their schema definitions.
When I try to merge datasets containing different tool definitions, I get the following error:
TypeError: Couldn't cast array of type
struct<refundFee: struct<description: string, type: string>, ... , servicerId: struct<description: string, type: string>>
to
{
'refundFee': {'description': Value(dtype='string'), 'type': Value(dtype='string')},
...
'templateId': {'description': Value(dtype='string'), 'type': Value(dtype='string')}
}
From my understanding, the merge fails because the tools column's nested structure is different across datasets — e.g., one struct contains an extra field servicerId while the other does not. This causes HuggingFace Datasets (and its underlying Apache Arrow schema) to reject the merge.
My question is: why is it designed this way?
Is this strict schema matching a hard requirement of the library?
Is there a recommended way to merge datasets with different tool schemas (different parameters and types)?
For an agent model supporting multiple tools, what's the best practice for preparing/merging training data without losing flexibility?
Any guidance or design rationale would be greatly appreciated. Thanks!
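Not from the question itself: one common workaround is to serialize the nested tool schema to a JSON string, so every dataset shares the same Arrow type for the column. A minimal sketch, assuming datasets `ds1`/`ds2` with a nested `tools` column:
```python
import json
from datasets import concatenate_datasets

def serialize_tools(example):
    # A plain string column always has the same Arrow type, so datasets with
    # different nested tool schemas can be concatenated safely.
    example["tools"] = json.dumps(example["tools"])
    return example

merged = concatenate_datasets(
    [ds1.map(serialize_tools), ds2.map(serialize_tools)]
)
# json.loads(example["tools"]) restores the schema at training time.
```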
| null |
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7869/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/7869/timeline
| null | null | null | null | false
|
https://api.github.com/repos/huggingface/datasets/issues/7868
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/7868/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/7868/comments
|
https://api.github.com/repos/huggingface/datasets/issues/7868/events
|
https://github.com/huggingface/datasets/issues/7868
| 3,632,429,308
|
I_kwDODunzps7Ygnj8
| 7,868
|
Data duplication with `split_dataset_by_node` and `interleaved_dataset`
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/42485228?v=4",
"events_url": "https://api.github.com/users/ValMystletainn/events{/privacy}",
"followers_url": "https://api.github.com/users/ValMystletainn/followers",
"following_url": "https://api.github.com/users/ValMystletainn/following{/other_user}",
"gists_url": "https://api.github.com/users/ValMystletainn/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/ValMystletainn",
"id": 42485228,
"login": "ValMystletainn",
"node_id": "MDQ6VXNlcjQyNDg1MjI4",
"organizations_url": "https://api.github.com/users/ValMystletainn/orgs",
"received_events_url": "https://api.github.com/users/ValMystletainn/received_events",
"repos_url": "https://api.github.com/users/ValMystletainn/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/ValMystletainn/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ValMystletainn/subscriptions",
"type": "User",
"url": "https://api.github.com/users/ValMystletainn",
"user_view_type": "public"
}
|
[] |
open
| false
| null |
[] | null |
[
"Hi @ValMystletainn ,\nCan I be assigned this issue?"
] | 2025-11-17T09:15:24
| 2025-11-25T04:27:05
| null |
NONE
| null | null |
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
|
{
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
}
|
### Describe the bug
Data is duplicated across ranks when an iterable dataset is processed first with `split_dataset_by_node` and then with `interleave_datasets`.
### Steps to reproduce the bug
I have provided a minimal script:
```python
import os
from datasets import interleave_datasets, load_dataset
from datasets.distributed import split_dataset_by_node
path = "/mnt/wwx/datasets/fineweb/data/CC-MAIN-2013-20/"
files = [os.path.join(path, fn) for fn in os.listdir(path)]
dataset = load_dataset("parquet", split="train", data_files=files, streaming=True)
print(f"{dataset.n_shards=}")
dataset_rank0 = split_dataset_by_node(dataset, 0, 4)
dataset_rank1 = split_dataset_by_node(dataset, 1, 4)
dataset_rank0_interleaved = interleave_datasets([dataset_rank0], seed=42, probabilities=[1.0])
dataset_rank1_interleaved = interleave_datasets([dataset_rank1], seed=42, probabilities=[1.0])
print("print the first sample id from all datasets")
print("dataset", next(iter(dataset))['id'])
print("dataset_rank0", next(iter(dataset_rank0))['id'])
print("dataset_rank1", next(iter(dataset_rank1))['id'])
print("dataset_rank0_interleaved", next(iter(dataset_rank0_interleaved))['id'])
print("dataset_rank1_interleaved", next(iter(dataset_rank1_interleaved))['id'])
dataset_rank0_shard = dataset.shard(4, 0)
dataset_rank1_shard = dataset.shard(4, 1)
dataset_rank0_shard_interleaved = interleave_datasets([dataset_rank0_shard], seed=42, probabilities=[1.0])
dataset_rank1_shard_interleaved = interleave_datasets([dataset_rank1_shard], seed=42, probabilities=[1.0])
print("dataset_rank0_shard", next(iter(dataset_rank0_shard))['id'])
print("dataset_rank1_shard", next(iter(dataset_rank1_shard))['id'])
print("dataset_rank0_shard_interleaved", next(iter(dataset_rank0_shard_interleaved))['id'])
print("dataset_rank1_shard_interleaved", next(iter(dataset_rank1_shard_interleaved))['id'])
```
I just used a subfolder of C4 with 14 parquet files for a quick run and got:
```
dataset.n_shards=14
print the first sample id from all datasets
dataset <urn:uuid:c84a7f00-f3e8-4b67-baa4-df5adaf23bae>
dataset_rank0 <urn:uuid:c84a7f00-f3e8-4b67-baa4-df5adaf23bae>
dataset_rank1 <urn:uuid:6b7da64f-c26e-4086-aef5-4b6f01106223>
dataset_rank0_interleaved <urn:uuid:c84a7f00-f3e8-4b67-baa4-df5adaf23bae>
dataset_rank1_interleaved <urn:uuid:c84a7f00-f3e8-4b67-baa4-df5adaf23bae>
dataset_rank0_shard <urn:uuid:c84a7f00-f3e8-4b67-baa4-df5adaf23bae>
dataset_rank1_shard <urn:uuid:67cf7216-dd05-4f55-a28a-1a1c96989c51>
dataset_rank0_shard_interleaved <urn:uuid:c84a7f00-f3e8-4b67-baa4-df5adaf23bae>
dataset_rank1_shard_interleaved <urn:uuid:67cf7216-dd05-4f55-a28a-1a1c96989c51>
```
### Expected behavior
the first sample of `dataset_rank0_interleaved` and `dataset_rank1_interleaved` should be different, as with the other `rank0`/`rank1` pairs.
I dove into the functions to find out how the `split -> interleave` process works.
`split_dataset_by_node` on an iterable dataset does not change the dataset's `._ex_iterable` attribute; it just sets the distributed config on the dataset, and that config is only used in the actual `__iter__` call to handle shard splitting or sample skipping.
However, `interleave_datasets` on iterable datasets copies out the `._ex_iterable` of each provided dataset and builds a new `_ex_iterable` from them, so the distributed config is not carried over, which causes the data duplication across DP ranks.
So I would first ask: is this an unsupported ordering of those functions? That is, should one:
- always apply `split_dataset_by_node` last rather than midway, or
- use `dataset.shard(dp_size, dp_rank)` rather than `split_dataset_by_node` in cases like mine?
If this ordering is permitted, I think it is a bug, and I can open a PR to fix it (a sketch of the first workaround follows below).
(I hit this bug in real training; the related issue is https://github.com/ByteDance-Seed/VeOmni/issues/200 if it helps.)
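A minimal sketch of the first workaround, reusing `dataset` from the reproduction script above; this is an assumption about the intended usage order, not a confirmed fix:
```python
# Apply split_dataset_by_node last, so the distributed config is set on the
# final dataset and is not dropped by interleave_datasets.
dataset_interleaved = interleave_datasets([dataset], seed=42, probabilities=[1.0])
rank0 = split_dataset_by_node(dataset_interleaved, rank=0, world_size=4)
rank1 = split_dataset_by_node(dataset_interleaved, rank=1, world_size=4)

# With this ordering the ranks should read different shards:
print(next(iter(rank0))["id"], next(iter(rank1))["id"])
```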
### Environment info
datasets 4.4.1
ubuntu 20.04
python 3.11.4
| null |
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7868/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/7868/timeline
| null | null | null | null | false
|
https://api.github.com/repos/huggingface/datasets/issues/7867
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/7867/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/7867/comments
|
https://api.github.com/repos/huggingface/datasets/issues/7867/events
|
https://github.com/huggingface/datasets/issues/7867
| 3,620,931,722
|
I_kwDODunzps7X0wiK
| 7,867
|
NonMatchingSplitsSizesError when loading partial dataset files
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/13678719?v=4",
"events_url": "https://api.github.com/users/QingGo/events{/privacy}",
"followers_url": "https://api.github.com/users/QingGo/followers",
"following_url": "https://api.github.com/users/QingGo/following{/other_user}",
"gists_url": "https://api.github.com/users/QingGo/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/QingGo",
"id": 13678719,
"login": "QingGo",
"node_id": "MDQ6VXNlcjEzNjc4NzE5",
"organizations_url": "https://api.github.com/users/QingGo/orgs",
"received_events_url": "https://api.github.com/users/QingGo/received_events",
"repos_url": "https://api.github.com/users/QingGo/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/QingGo/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/QingGo/subscriptions",
"type": "User",
"url": "https://api.github.com/users/QingGo",
"user_view_type": "public"
}
|
[] |
open
| false
| null |
[] | null |
[
"While using verification_mode='no_checks' parameter in load_dataset() can bypass this validation, this solution is not intuitive or convenient for most users, especially those who are not familiar with all the parameters of the load_dataset() function.\n\n```python\nbook_corpus_ds = load_dataset(\n \"SaylorTwift/the_pile_books3_minus_gutenberg\",\n name=\"default\",\n data_files=\"data/train-00000-of-00213-312fd8d7a3c58a63.parquet\",\n split=\"train\",\n cache_dir=\"./data\",\n verification_mode='no_checks'\n)\n```",
"Thanks for the report and reproduction steps @QingGo \n@lhoestq which one of the following looks like a nicer way to handle this?\n\n1] Skip split-size validation entirely for partial loads\nIf the user passes data_files manually and it represents only a subset, then verify_splits() should simply not run, or skip validation only for that split.\n\n2] Replace the error with a warning\n\n3] Automatically detect partial-load cases(i mean we can try this out!)\n\nAssume this, \nIf data_files is provided AND\nthe number of provided files ≠ number of expected files in metadata,\nthen treat it as a partial load and disable strict verification.\n"
] | 2025-11-13T12:03:23
| 2025-11-16T15:39:23
| null |
NONE
| null | null |
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
|
{
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
}
|
### Describe the bug
When loading only a subset of dataset files while the dataset's README.md contains split metadata, the system throws a NonMatchingSplitsSizesError . This prevents users from loading partial datasets for quick validation in cases of poor network conditions or very large datasets.
### Steps to reproduce the bug
1. Use the Hugging Face `datasets` library to load a dataset with only specific files specified
2. Ensure the dataset repository has split metadata defined in README.md
3. Observe the error when attempting to load a subset of files
```python
# Example code that triggers the error
from datasets import load_dataset
book_corpus_ds = load_dataset(
"SaylorTwift/the_pile_books3_minus_gutenberg",
name="default",
data_files="data/train-00000-of-00213-312fd8d7a3c58a63.parquet",
split="train",
cache_dir="./data"
)
```
### Error Message
```
Traceback (most recent call last):
File "/Users/QingGo/code/llm_learn/src/data/clean_cc_bc.py", line 13, in <module>
book_corpus_ds = load_dataset(
"SaylorTwift/the_pile_books3_minus_gutenberg",
...
File "/Users/QingGo/code/llm_learn/.venv/lib/python3.13/site-packages/datasets/utils/info_utils.py", line 77, in verify_splits
raise NonMatchingSplitsSizesError(str(bad_splits))
datasets.exceptions.NonMatchingSplitsSizesError: [{'expected': SplitInfo(name='train', num_bytes=106199627990.47722, num_examples=192661, shard_lengths=None, dataset_name=None), 'recorded': SplitInfo(name='train', num_bytes=454897326, num_examples=905, shard_lengths=None, dataset_name='the_pile_books3_minus_gutenberg')}]
```
### Expected behavior
When loading partial dataset files, the system should:
1. Skip the `NonMatchingSplitsSizesError` validation, OR
2. Only log a warning message instead of raising an error
### Environment info
- `datasets` version: 4.3.0
- Platform: macOS-15.7.1-arm64-arm-64bit-Mach-O
- Python version: 3.13.2
- `huggingface_hub` version: 0.36.0
- PyArrow version: 22.0.0
- Pandas version: 2.3.3
- `fsspec` version: 2025.9.0
| null |
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7867/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/7867/timeline
| null | null | null | null | false
|
https://api.github.com/repos/huggingface/datasets/issues/7866
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/7866/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/7866/comments
|
https://api.github.com/repos/huggingface/datasets/issues/7866/events
|
https://github.com/huggingface/datasets/pull/7866
| 3,620,436,248
|
PR_kwDODunzps6zL7Sz
| 7,866
|
docs: add Python version requirement note to installation section
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/222381706?v=4",
"events_url": "https://api.github.com/users/ananthasai-2006/events{/privacy}",
"followers_url": "https://api.github.com/users/ananthasai-2006/followers",
"following_url": "https://api.github.com/users/ananthasai-2006/following{/other_user}",
"gists_url": "https://api.github.com/users/ananthasai-2006/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/ananthasai-2006",
"id": 222381706,
"login": "ananthasai-2006",
"node_id": "U_kgDODUFGig",
"organizations_url": "https://api.github.com/users/ananthasai-2006/orgs",
"received_events_url": "https://api.github.com/users/ananthasai-2006/received_events",
"repos_url": "https://api.github.com/users/ananthasai-2006/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/ananthasai-2006/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ananthasai-2006/subscriptions",
"type": "User",
"url": "https://api.github.com/users/ananthasai-2006",
"user_view_type": "public"
}
|
[] |
open
| false
| null |
[] | null |
[] | 2025-11-13T09:54:35
| 2025-11-13T09:54:35
| null |
NONE
| null | null | null | null |
Added note about Python version requirement for conda installation.
| null |
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7866/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/7866/timeline
| null | null | 0
|
{
"diff_url": "https://github.com/huggingface/datasets/pull/7866.diff",
"html_url": "https://github.com/huggingface/datasets/pull/7866",
"merged_at": null,
"patch_url": "https://github.com/huggingface/datasets/pull/7866.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/7866"
}
| true
|
https://api.github.com/repos/huggingface/datasets/issues/7865
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/7865/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/7865/comments
|
https://api.github.com/repos/huggingface/datasets/issues/7865/events
|
https://github.com/huggingface/datasets/pull/7865
| 3,620,116,195
|
PR_kwDODunzps6zK2H_
| 7,865
|
[FEAT] MIDI feature support
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/2518789?v=4",
"events_url": "https://api.github.com/users/frascuchon/events{/privacy}",
"followers_url": "https://api.github.com/users/frascuchon/followers",
"following_url": "https://api.github.com/users/frascuchon/following{/other_user}",
"gists_url": "https://api.github.com/users/frascuchon/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/frascuchon",
"id": 2518789,
"login": "frascuchon",
"node_id": "MDQ6VXNlcjI1MTg3ODk=",
"organizations_url": "https://api.github.com/users/frascuchon/orgs",
"received_events_url": "https://api.github.com/users/frascuchon/received_events",
"repos_url": "https://api.github.com/users/frascuchon/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/frascuchon/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/frascuchon/subscriptions",
"type": "User",
"url": "https://api.github.com/users/frascuchon",
"user_view_type": "public"
}
|
[] |
open
| false
| null |
[] | null |
[
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_7865). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update."
] | 2025-11-13T08:31:51
| 2025-11-14T13:58:52
| null |
NONE
| null | null | null | null |
This PR adds a new `Midi` feature for reading and importing MIDI files into datasets.
| null |
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7865/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/7865/timeline
| null | null | 0
|
{
"diff_url": "https://github.com/huggingface/datasets/pull/7865.diff",
"html_url": "https://github.com/huggingface/datasets/pull/7865",
"merged_at": null,
"patch_url": "https://github.com/huggingface/datasets/pull/7865.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/7865"
}
| true
|
https://api.github.com/repos/huggingface/datasets/issues/7864
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/7864/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/7864/comments
|
https://api.github.com/repos/huggingface/datasets/issues/7864/events
|
https://github.com/huggingface/datasets/issues/7864
| 3,619,137,823
|
I_kwDODunzps7Xt6kf
| 7,864
|
add_column and add_item erroneously(?) require new_fingerprint parameter
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/17151810?v=4",
"events_url": "https://api.github.com/users/echthesia/events{/privacy}",
"followers_url": "https://api.github.com/users/echthesia/followers",
"following_url": "https://api.github.com/users/echthesia/following{/other_user}",
"gists_url": "https://api.github.com/users/echthesia/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/echthesia",
"id": 17151810,
"login": "echthesia",
"node_id": "MDQ6VXNlcjE3MTUxODEw",
"organizations_url": "https://api.github.com/users/echthesia/orgs",
"received_events_url": "https://api.github.com/users/echthesia/received_events",
"repos_url": "https://api.github.com/users/echthesia/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/echthesia/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/echthesia/subscriptions",
"type": "User",
"url": "https://api.github.com/users/echthesia",
"user_view_type": "public"
}
|
[] |
open
| false
| null |
[] | null |
[
"Take this with a grain of salt, this is just my personal understanding:\nWhile you technically can overwrite the new_fingerprint with a string, e.g.\n```python\nt = d.add_column(\"new_column\", col_value, new_fingerprint=\"dummy_fp\")\nassert t._fingerprint == \"dummy_fp\" # this is true and will pass\n```\nthis is not desired since the fingerprint should be calculated based on the operations (and their arguments) to be unique. This is handled by the [fingerprint_transform](https://github.com/huggingface/datasets/blob/17f40a318a1f8c7d33c2a4dd17934f81d14a7f57/src/datasets/arrow_dataset.py#L6077) function which needs a \"new_fingerprint\" keyword argument and creates a unique hash if its value is not set, see [here](https://github.com/huggingface/datasets/blob/main/src/datasets/fingerprint.py#L432). So it is probably safer to not document this keyword, since one doesn't want the user to actually use it and it's only a feature in very limited cases for people really knowing what they are doing. The thing that might be bugging people who read the code is that `new_fingerprint` seems to be required for `add_item` and `add_column` but it is actually set by the decorator (in which's definition it is optional), so maybe changing the signature of `add_item` and `add_column` to `new_fingerprint: Optional[str] = None` would make sense, since this is also how it's handled in the other cases (created by claude):\n\n - [flatten](https://github.com/huggingface/datasets/blob/17f40a318a1f8c7d33c2a4dd17934f81d14a7f57/src/datasets/arrow_dataset.py#L2034)\n - [cast_column](https://github.com/huggingface/datasets/blob/17f40a318a1f8c7d33c2a4dd17934f81d14a7f57/src/datasets/arrow_dataset.py#L2165)\n - [remove_columns](https://github.com/huggingface/datasets/blob/17f40a318a1f8c7d33c2a4dd17934f81d14a7f57/src/datasets/arrow_dataset.py#L2209)\n - [rename_column](https://github.com/huggingface/datasets/blob/17f40a318a1f8c7d33c2a4dd17934f81d14a7f57/src/datasets/arrow_dataset.py#L2263)\n - [rename_columns](https://github.com/huggingface/datasets/blob/17f40a318a1f8c7d33c2a4dd17934f81d14a7f57/src/datasets/arrow_dataset.py#L2329)\n - [select_columns](https://github.com/huggingface/datasets/blob/17f40a318a1f8c7d33c2a4dd17934f81d14a7f57/src/datasets/arrow_dataset.py#L2397)\n - [batch](https://github.com/huggingface/datasets/blob/17f40a318a1f8c7d33c2a4dd17934f81d14a7f57/src/datasets/arrow_dataset.py#L3760)\n - [filter](https://github.com/huggingface/datasets/blob/17f40a318a1f8c7d33c2a4dd17934f81d14a7f57/src/datasets/arrow_dataset.py#L3813)\n - [flatten_indices](https://github.com/huggingface/datasets/blob/17f40a318a1f8c7d33c2a4dd17934f81d14a7f57/src/datasets/arrow_dataset.py#L3959)\n - [select](https://github.com/huggingface/datasets/blob/17f40a318a1f8c7d33c2a4dd17934f81d14a7f57/src/datasets/arrow_dataset.py#L4038)\n - [_select_contiguous](https://github.com/huggingface/datasets/blob/17f40a318a1f8c7d33c2a4dd17934f81d14a7f57/src/datasets/arrow_dataset.py#L4128)\n - [sort](https://github.com/huggingface/datasets/blob/17f40a318a1f8c7d33c2a4dd17934f81d14a7f57/src/datasets/arrow_dataset.py#L4376)\n - [shuffle](https://github.com/huggingface/datasets/blob/17f40a318a1f8c7d33c2a4dd17934f81d14a7f57/src/datasets/arrow_dataset.py#L4506)\n - [train_test_split](https://github.com/huggingface/datasets/blob/17f40a318a1f8c7d33c2a4dd17934f81d14a7f57/src/datasets/arrow_dataset.py#L4641)\nSo as you mentioned, I believe the methods erronously require the `new_fingerprint` parameter and making them optional is a little consistency win."
] | 2025-11-13T02:56:49
| 2025-11-24T20:33:59
| null |
NONE
| null | null |
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
|
{
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
}
|
### Describe the bug
Contradicting their documentation (which doesn't mention the parameter at all), both Dataset.add_column and Dataset.add_item require a new_fingerprint string. This parameter is passed directly to the dataset constructor, which has the fingerprint parameter listed as optional; is there any reason it shouldn't be optional in these methods as well?
### Steps to reproduce the bug
Reproduction steps:
1. Look at the function signature for add_column: https://github.com/huggingface/datasets/blob/17f40a318a1f8c7d33c2a4dd17934f81d14a7f57/src/datasets/arrow_dataset.py#L6078
2. Repeat for add_item: https://github.com/huggingface/datasets/blob/17f40a318a1f8c7d33c2a4dd17934f81d14a7f57/src/datasets/arrow_dataset.py#L6336
### Expected behavior
`add_column` and `add_item` should either make the `new_fingerprint` parameter optional or document it in their docstrings (a sketch of the former follows below).
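A minimal sketch of the first option, under the assumption that the `fingerprint_transform` decorator fills in the value when it is `None` (the real signatures have more parameters):
```python
from typing import Optional

# Hypothetical signature change: the decorator generates a unique fingerprint
# when new_fingerprint is None, so callers never need to pass it explicitly.
def add_column(self, name: str, column, new_fingerprint: Optional[str] = None):
    ...
```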
### Environment info
Not environment-dependent
| null |
{
"+1": 1,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 1,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7864/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/7864/timeline
| null | null | null | null | false
|
https://api.github.com/repos/huggingface/datasets/issues/7863
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/7863/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/7863/comments
|
https://api.github.com/repos/huggingface/datasets/issues/7863/events
|
https://github.com/huggingface/datasets/issues/7863
| 3,618,836,821
|
I_kwDODunzps7XsxFV
| 7,863
|
Support hosting lance / vortex / iceberg / zarr datasets on huggingface hub
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/3664715?v=4",
"events_url": "https://api.github.com/users/pavanramkumar/events{/privacy}",
"followers_url": "https://api.github.com/users/pavanramkumar/followers",
"following_url": "https://api.github.com/users/pavanramkumar/following{/other_user}",
"gists_url": "https://api.github.com/users/pavanramkumar/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/pavanramkumar",
"id": 3664715,
"login": "pavanramkumar",
"node_id": "MDQ6VXNlcjM2NjQ3MTU=",
"organizations_url": "https://api.github.com/users/pavanramkumar/orgs",
"received_events_url": "https://api.github.com/users/pavanramkumar/received_events",
"repos_url": "https://api.github.com/users/pavanramkumar/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/pavanramkumar/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/pavanramkumar/subscriptions",
"type": "User",
"url": "https://api.github.com/users/pavanramkumar",
"user_view_type": "public"
}
|
[
{
"color": "a2eeef",
"default": true,
"description": "New feature or request",
"id": 1935892871,
"name": "enhancement",
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement"
}
] |
open
| false
| null |
[] | null |
[
"Kudos!",
"So cool ! Would love to see support for lance :)",
"@lhoestq thanks for your support! Any suggestions across `datasets` or `huggingface_hub` projects to make this happen?\n\nI just noticed this blog post: https://huggingface.co/blog/streaming-datasets\n\nDo you know if `hfFileSystem` from `huggingface_hub` is flexible enough to accommodate lance? I don't want to `open` and scan a file, I want to create generators with the `lance.dataset.to_batches()` from each fragment (partition) that I can iterate over in a distributed dataloader.\n\nIdeally, something like this should just work:\n\n```\nimport lance\nlance_ds_path = f\"hf://datasets/{dataset_id}/{path_in_repo}.lance\"\nds = lance.dataset(lance_ds_path)\nfragments = ds.get_fragments()\nfragment_generators = []\nfor fragment in fragments:\n fragment_generators = fragment.to_batches()\n```\n\nLooking at the huggingface blog post, I think we might need a PR into `pyarrow` to create a `LanceFragmentScanOptions` class that subclasses [pyarrow.dataset.FragmentScanOptions](https://arrow.apache.org/docs/python/generated/pyarrow.dataset.FragmentScanOptions.html#pyarrow.dataset.FragmentScanOptions) cc @prrao87, @changhiskhan",
"> Do you know if HfFileSystem from huggingface_hub is flexible enough to accommodate lance?\n\nit provides file-like objects for files on HF, and works using range requests. PyArrow uses HfFileSystem for HF files already\n\nThough in the Parquet / PyArrow case the data is read generally row group per row group (using range requests with a minimum size `range_size_limit ` to optimize I/O in case of small row groups)\n\nPS: there is an equivalent to HfFileSystem in rust in OpenDAL, but it only supports read from HF, not write (yet ?)\n\n> I don't want to open and scan a file, I want to create generators with the lance.dataset.to_batches() from each fragment (partition) that I can iterate over in a distributed dataloader.\n\nWe do something very similar for Parquet here: \n\nhttps://github.com/huggingface/datasets/blob/17f40a318a1f8c7d33c2a4dd17934f81d14a7f57/src/datasets/packaged_modules/parquet/parquet.py#L168-L169",
"Hi, I work on the Lance project. We'd be happy to see the format supported on huggingface hub.\n\nIt's not clear to me from this thread what is required for that. Could we clarify that? Are there examples we can point to?\n\n> I think we might need a PR into `pyarrow` to create a `LanceFragmentScanOptions` class that subclasses [pyarrow.dataset.FragmentScanOptions](https://arrow.apache.org/docs/python/generated/pyarrow.dataset.FragmentScanOptions.html#pyarrow.dataset.FragmentScanOptions)\n\nCould you elaborate why a `FragmentScanOptions` subclass is required? Also, if it is, we could just define that as a subclass within the `pylance` module, unless I'm missing something.\n\nLance supports OpenDAL storage, so I think we could add support for huggingface's filesystem through that and make sure it's exposed in pylance. Could also help implement some write operations. Perhaps that's the main blocker? ",
"> PS: there is an equivalent to HfFileSystem in rust in OpenDAL, but it only supports read from HF, not write (yet ?)\n\nHi, I’m willing to add full-fledged support for the HF file system. This shouldn’t be considered a blocker. 🤟 ",
"Exposing the existing HF filesystem from OpenDAL in pylance would be great ! and a good first step\n\nExcited for write operations too",
"Thanks @lhoestq @wjones127 @Xuanwo ! I think we have all the necessary people on this thread now to make it happen :)\n\n> Could you elaborate why a FragmentScanOptions subclass is required? Also, if it is, we could just define that as a subclass within the pylance module, unless I'm missing something.\n\n@wjones127 I'm not actually sure this is needed but I'm guessing based on [this blog post](https://huggingface.co/blog/streaming-datasets) from a couple of weeks ago. Specifically, this section which allows creation of a dataset object with configurable prefetching:\n\n```\nimport pyarrow\nimport pyarrow.dataset\n\nfragment_scan_options = pyarrow.dataset.ParquetFragmentScanOptions(\n cache_options=pyarrow.CacheOptions(\n prefetch_limit=1,\n range_size_limit=128 << 20\n ),\n)\nds = load_dataset(parquet_dataset_id, streaming=True, fragment_scan_options=fragment_scan_options)\n```\n\nI might be completely wrong that we do need an equivalent `LanceFragmentScanOptions` PR into `pyarrow` and the `OpenDAL` path might be sufficient.\n\nI really just want something like this to work out of the box:\n\n```\nimport lance\nlance_ds_path = f\"hf://datasets/{dataset_id}/{path_in_repo}.lance\"\nds = lance.dataset(lance_ds_path)\nfragments = ds.get_fragments()\nfragment_generators = []\nfor fragment in fragments:\n fragment_generators = fragment.to_batches()\n```\n\nIn the ideal case, I'd like to be able to control prefetch configuration via arguments to `to_batches()` like the ones that already exist for a lance dataset on any S3-compatible object store.\n\nWould a useful approach be to create a toy lance dataset on huggingface and see if this \"just works\"; then work backwards from there?\n\nAs for writing, I'm looking to migrate datasets from my own private S3-compatible object store bucket (Tigris Data) to huggingface datasets but ~~I'm 100% sure~~ I'm _not_ 100% sure whether we even need `hfFileSystem` compatible write capability\n\n\n",
"Here's a public dataset which could be a working example to work backwards from:\n\nhttps://huggingface.co/datasets/pavan-ramkumar/test-slaf\n\npylance currently looks for default object store backends and returns this `ValueError`\n\n```\n>>> import lance\n>>> hf_path = \"hf://datasets/pavan-ramkumar/test-slaf/tree/main/synthetic_50k_processed_v21.slaf/expression.lance\"\n>>> ds = lance.dataset(hf_path)\nTraceback (most recent call last):\n File \"<stdin>\", line 1, in <module>\n File \"/Users/pavan/slaf-project/slaf/.venv/lib/python3.12/site-packages/lance/__init__.py\", line 145, in dataset\n ds = LanceDataset(\n ^^^^^^^^^^^^^\n File \"/Users/pavan/slaf-project/slaf/.venv/lib/python3.12/site-packages/lance/dataset.py\", line 425, in __init__\n self._ds = _Dataset(\n ^^^^^^^^^\nValueError: Invalid user input: No object store provider found for scheme: 'hf'\nValid schemes: gs, memory, s3, az, file-object-store, file, oss, s3+ddb, /Users/runner/work/lance/lance/rust/lance-io/src/object_store/providers.rs:161:54\n```",
"@Xuanwo @wjones127 just checking in to see if you had a chance to add a huggingface provider via opendal to pylance. I'm assuming we need a new `huggingface.rs` provider [here](https://github.com/lance-format/lance/tree/4d9c1a4d459ea486556de0ee90828a442d0425b0/rust/lance-io/src/object_store/providers).\n\nDo let me know if I can do anything to help, really excited to help stream lance datasets from huggingface hub",
"> @Xuanwo @wjones127 just checking in to see if you had a chance to add a huggingface provider via opendal to pylance. I'm assuming we need a new `huggingface.rs` provider [here](https://github.com/lance-format/lance/tree/4d9c1a4d459ea486556de0ee90828a442d0425b0/rust/lance-io/src/object_store/providers).\n> \n> Do let me know if I can do anything to help, really excited to help stream lance datasets from huggingface hub\n\nI'm willing to work on this! Would you like to create an issue on lance side and ping me there?",
" > I'm willing to work on this! Would you like to create an issue on lance side and ping me there?\n\nDone! [Link](https://github.com/lance-format/lance/issues/5346)\n",
"@pavanramkumar pls check this out once it's merged! https://github.com/lance-format/lance/pull/5353"
] | 2025-11-13T00:51:07
| 2025-11-26T14:10:29
| null |
NONE
| null | null |
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
|
{
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
}
|
### Feature request
Huggingface datasets has great support for large tabular datasets in parquet with large partitions. I would love to see two things in the future:
- equivalent support for `lance`, `vortex`, `iceberg`, `zarr` (in that order) in a way that I can stream them using the datasets library
- more fine-grained control of streaming, so that I can stream at the partition / shard level
### Motivation
I work with very large `lance` datasets on S3 and often require random access for AI/ML applications like multi-node training. I was able to achieve high throughput dataloading on a lance dataset with ~150B rows by building distributed dataloaders that can be scaled both vertically (until i/o and CPU are saturated), and then horizontally (to work around network bottlenecks).
Using this strategy I was able to achieve 10-20x the throughput of the streaming data loader from the `huggingface/datasets` library.
I realized that these would be great features for huggingface to support natively.
### Your contribution
I'm not ready yet to make a PR but open to it with the right pointers!
| null |
{
"+1": 4,
"-1": 0,
"confused": 0,
"eyes": 2,
"heart": 5,
"hooray": 2,
"laugh": 2,
"rocket": 8,
"total_count": 23,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7863/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/7863/timeline
| null | null | null | null | false
|
https://api.github.com/repos/huggingface/datasets/issues/7862
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/7862/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/7862/comments
|
https://api.github.com/repos/huggingface/datasets/issues/7862/events
|
https://github.com/huggingface/datasets/pull/7862
| 3,617,947,090
|
PR_kwDODunzps6zDjEj
| 7,862
|
Add flatten_indices option to save_to_disk method
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/142811259?v=4",
"events_url": "https://api.github.com/users/ArjunJagdale/events{/privacy}",
"followers_url": "https://api.github.com/users/ArjunJagdale/followers",
"following_url": "https://api.github.com/users/ArjunJagdale/following{/other_user}",
"gists_url": "https://api.github.com/users/ArjunJagdale/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/ArjunJagdale",
"id": 142811259,
"login": "ArjunJagdale",
"node_id": "U_kgDOCIMgew",
"organizations_url": "https://api.github.com/users/ArjunJagdale/orgs",
"received_events_url": "https://api.github.com/users/ArjunJagdale/received_events",
"repos_url": "https://api.github.com/users/ArjunJagdale/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/ArjunJagdale/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ArjunJagdale/subscriptions",
"type": "User",
"url": "https://api.github.com/users/ArjunJagdale",
"user_view_type": "public"
}
|
[] |
open
| false
| null |
[] | null |
[
"as said by @KCKawalkar used below script to test - \r\n\r\nBEFORE PATCH - \r\nTEST.PY:\r\n```\r\nfrom datasets import Dataset\r\nimport time\r\n\r\ndataset = Dataset.from_dict({'text': [f'sample {i}' for i in range(100000)]})\r\n\r\n# Baseline save (no indices)\r\nstart = time.time()\r\ndataset.save_to_disk('baseline')\r\nbaseline_time = time.time() - start\r\n\r\n# Filtered save (creates indices)\r\nfiltered = dataset.filter(lambda x: True)\r\nstart = time.time()\r\nfiltered.save_to_disk('filtered')\r\nfiltered_time = time.time() - start\r\n\r\nprint(f\"Baseline: {baseline_time:.3f}s\")\r\nprint(f\"Filtered: {filtered_time:.3f}s\")\r\nprint(f\"Slowdown: {(filtered_time/baseline_time-1)*100:.1f}%\")\r\n```\r\nRESULTS:\r\n```\r\n@ArjunJagdale ➜ /workspaces/datasets (main) $ python test_arjun.py\r\nSaving the dataset (1/1 shards): 100%|█████████████████████████████████████████████████████████████████████████████████████████████| 100000/100000 [00:00<00:00, 3030654.07 examples/s]\r\nFilter: 100%|███████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 100000/100000 [00:00<00:00, 576296.61 examples/s]\r\nSaving the dataset (1/1 shards): 100%|██████████████████████████████████████████████████████████████████████████████████████████████| 100000/100000 [00:00<00:00, 310565.19 examples/s]\r\nBaseline: 0.035s\r\nFiltered: 0.323s\r\nSlowdown: 813.4%\r\n```\r\n\r\nAFTER PATCH - \r\nTEST.PY:\r\n```\r\nfrom datasets import Dataset\r\nimport time\r\n\r\n# Create dataset\r\ndataset = Dataset.from_dict({'text': [f'sample {i}' for i in range(100000)]})\r\n\r\n# Baseline save (no indices)\r\nstart = time.time()\r\ndataset.save_to_disk('baseline')\r\nbaseline_time = time.time() - start\r\n\r\n# Filtered save (creates indices)\r\nfiltered = dataset.filter(lambda x: True)\r\nstart = time.time()\r\nfiltered.save_to_disk('filtered', flatten_indices=False)\r\nfiltered_time = time.time() - start\r\n\r\nprint(f\"Baseline: {baseline_time:.3f}s\")\r\nprint(f\"Filtered: {filtered_time:.3f}s\") \r\nprint(f\"Slowdown: {(filtered_time/baseline_time-1)*100:.1f}%\")\r\n```\r\n\r\nREESULT:\r\n```\r\n@ArjunJagdale ➜ /workspaces/datasets (main) $ python test_arjun.py\r\nSaving the dataset (1/1 shards): 100%|█████████████████████████████████████████████████████████████████████████████████████████████| 100000/100000 [00:00<00:00, 3027482.12 examples/s]\r\nFilter: 100%|███████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 100000/100000 [00:00<00:00, 468901.89 examples/s]\r\nSaving the dataset (1/1 shards): 100%|██████████████████████████████████████████████████████████████████████████████████████████████| 100000/100000 [00:00<00:00, 324036.36 examples/s]\r\nBaseline: 0.036s\r\nFiltered: 0.310s\r\nSlowdown: 771.1%\r\n\r\n```"
] | 2025-11-12T19:38:51
| 2025-11-12T19:50:20
| null |
CONTRIBUTOR
| null | null | null | null |
Added flatten_indices parameter to control index flattening during dataset saving.
Solves #7861
This PR introduces a new optional argument, flatten_indices, to the save_to_disk methods in both Dataset and DatasetDict.
The change allows users to skip the expensive index-flattening step when saving datasets that already use index mappings (e.g., after filter() or shuffle()), resulting in significant speed improvements for large datasets while maintaining backward compatibility.
While not a huge absolute difference at 100K rows, the improvement scales significantly with larger datasets (millions of rows).
This patch gives users control — they can disable flattening when they don’t need it, avoiding unnecessary rewrites.
@lhoestq WDYT?
| null |
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7862/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/7862/timeline
| null | null | 0
|
{
"diff_url": "https://github.com/huggingface/datasets/pull/7862.diff",
"html_url": "https://github.com/huggingface/datasets/pull/7862",
"merged_at": null,
"patch_url": "https://github.com/huggingface/datasets/pull/7862.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/7862"
}
| true
|
https://api.github.com/repos/huggingface/datasets/issues/7861
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/7861/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/7861/comments
|
https://api.github.com/repos/huggingface/datasets/issues/7861/events
|
https://github.com/huggingface/datasets/issues/7861
| 3,611,821,713
|
I_kwDODunzps7XSAaR
| 7,861
|
Performance Issue: save_to_disk() 200-1200% slower due to unconditional flatten_indices()
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/222552287?v=4",
"events_url": "https://api.github.com/users/KCKawalkar/events{/privacy}",
"followers_url": "https://api.github.com/users/KCKawalkar/followers",
"following_url": "https://api.github.com/users/KCKawalkar/following{/other_user}",
"gists_url": "https://api.github.com/users/KCKawalkar/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/KCKawalkar",
"id": 222552287,
"login": "KCKawalkar",
"node_id": "U_kgDODUPg3w",
"organizations_url": "https://api.github.com/users/KCKawalkar/orgs",
"received_events_url": "https://api.github.com/users/KCKawalkar/received_events",
"repos_url": "https://api.github.com/users/KCKawalkar/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/KCKawalkar/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/KCKawalkar/subscriptions",
"type": "User",
"url": "https://api.github.com/users/KCKawalkar",
"user_view_type": "public"
}
|
[] |
open
| false
| null |
[] | null |
[] | 2025-11-11T11:05:38
| 2025-11-11T11:05:38
| null |
NONE
| null | null |
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
|
{
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
}
|
## 🐛 Bug Description
The `save_to_disk()` method unconditionally calls `flatten_indices()` when `_indices` is not None, causing severe performance degradation for datasets processed with filtering, shuffling, or multiprocessed mapping operations.
**Root cause**: This line rebuilds the entire dataset unnecessarily:
```python
dataset = self.flatten_indices() if self._indices is not None else self
```
## 📊 Performance Impact
| Dataset Size | Operation | Save Time | Slowdown |
|-------------|-----------|-----------|----------|
| 100K | Baseline (no indices) | 0.027s | - |
| 100K | Filtered (with indices) | 0.146s | **+431%** |
| 100K | Shuffled (with indices) | 0.332s | **+1107%** |
| 250K | Shuffled (with indices) | 0.849s | **+1202%** |
## 🔄 Reproduction
```python
from datasets import Dataset
import time
# Create dataset
dataset = Dataset.from_dict({'text': [f'sample {i}' for i in range(100000)]})
# Baseline save (no indices)
start = time.time()
dataset.save_to_disk('baseline')
baseline_time = time.time() - start
# Filtered save (creates indices)
filtered = dataset.filter(lambda x: True)
start = time.time()
filtered.save_to_disk('filtered')
filtered_time = time.time() - start
print(f"Baseline: {baseline_time:.3f}s")
print(f"Filtered: {filtered_time:.3f}s")
print(f"Slowdown: {(filtered_time/baseline_time-1)*100:.1f}%")
```
**Expected output**: Filtered dataset is 400-1000% slower than baseline
## 💡 Proposed Solution
Add optional parameter to control flattening:
```python
def save_to_disk(self, dataset_path, flatten_indices=True):
dataset = self.flatten_indices() if (self._indices is not None and flatten_indices) else self
# ... rest of save logic
```
**Benefits**:
- ✅ Immediate performance improvement for users who don't need flattening
- ✅ Backwards compatible (default behavior unchanged)
- ✅ Simple implementation
## 🌍 Environment
- **datasets version**: 2.x
- **Python**: 3.10+
- **OS**: Linux/macOS/Windows
## 📈 Impact
This affects **most ML preprocessing workflows** that filter/shuffle datasets before saving. Performance degradation grows with dataset size, making it a critical bottleneck for production systems.
## 🔗 Additional Resources
We have comprehensive test scripts demonstrating this across multiple scenarios if needed for further investigation.
| null |
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7861/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/7861/timeline
| null | null | null | null | false
|
https://api.github.com/repos/huggingface/datasets/issues/7860
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/7860/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/7860/comments
|
https://api.github.com/repos/huggingface/datasets/issues/7860/events
|
https://github.com/huggingface/datasets/pull/7860
| 3,610,706,034
|
PR_kwDODunzps6yrHQN
| 7,860
|
Support loading local arrow datasets via load_dataset
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/16986130?v=4",
"events_url": "https://api.github.com/users/gstrat88/events{/privacy}",
"followers_url": "https://api.github.com/users/gstrat88/followers",
"following_url": "https://api.github.com/users/gstrat88/following{/other_user}",
"gists_url": "https://api.github.com/users/gstrat88/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/gstrat88",
"id": 16986130,
"login": "gstrat88",
"node_id": "MDQ6VXNlcjE2OTg2MTMw",
"organizations_url": "https://api.github.com/users/gstrat88/orgs",
"received_events_url": "https://api.github.com/users/gstrat88/received_events",
"repos_url": "https://api.github.com/users/gstrat88/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/gstrat88/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/gstrat88/subscriptions",
"type": "User",
"url": "https://api.github.com/users/gstrat88",
"user_view_type": "public"
}
|
[] |
open
| false
| null |
[] | null |
[] | 2025-11-11T04:58:33
| 2025-11-11T20:58:46
| null |
NONE
| null | null | null | null |
With this change, `load_dataset` will handle locally saved Arrow datasets as well.
#7018
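A minimal sketch of the intended usage (assuming this PR's behavior; paths are illustrative):
```python
from datasets import Dataset, load_dataset

ds = Dataset.from_dict({"text": ["a", "b", "c"]})
ds.save_to_disk("my_local_dataset")  # writes Arrow files plus metadata

# With this PR, load_dataset would pick up the saved Arrow dataset directly:
reloaded = load_dataset("my_local_dataset")
```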
| null |
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7860/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/7860/timeline
| null | null | 0
|
{
"diff_url": "https://github.com/huggingface/datasets/pull/7860.diff",
"html_url": "https://github.com/huggingface/datasets/pull/7860",
"merged_at": null,
"patch_url": "https://github.com/huggingface/datasets/pull/7860.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/7860"
}
| true
|
https://api.github.com/repos/huggingface/datasets/issues/7859
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/7859/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/7859/comments
|
https://api.github.com/repos/huggingface/datasets/issues/7859/events
|
https://github.com/huggingface/datasets/pull/7859
| 3,608,586,063
|
PR_kwDODunzps6yj-aZ
| 7,859
|
fix some broken links
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/326577?v=4",
"events_url": "https://api.github.com/users/julien-c/events{/privacy}",
"followers_url": "https://api.github.com/users/julien-c/followers",
"following_url": "https://api.github.com/users/julien-c/following{/other_user}",
"gists_url": "https://api.github.com/users/julien-c/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/julien-c",
"id": 326577,
"login": "julien-c",
"node_id": "MDQ6VXNlcjMyNjU3Nw==",
"organizations_url": "https://api.github.com/users/julien-c/orgs",
"received_events_url": "https://api.github.com/users/julien-c/received_events",
"repos_url": "https://api.github.com/users/julien-c/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/julien-c/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/julien-c/subscriptions",
"type": "User",
"url": "https://api.github.com/users/julien-c",
"user_view_type": "public"
}
|
[] |
closed
| false
| null |
[] | null |
[
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_7859). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update."
] | 2025-11-10T15:34:46
| 2025-11-10T17:11:07
| 2025-11-10T17:11:05
|
MEMBER
| null | null | null | null |
Would be cool to automate finding those broken links, as I think there might be many of them @lhoestq @albertvillanova
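For illustration, a minimal sketch of such an automated check (hypothetical script, not part of this PR; the `docs` path and glob are assumptions):
```python
import re
import pathlib
import urllib.request

# Scan markdown docs for http(s) links and report those that fail to resolve.
link_re = re.compile(r"https?://[^\s)\"'>]+")
for md in pathlib.Path("docs").rglob("*.md*"):
    for url in set(link_re.findall(md.read_text(encoding="utf-8"))):
        try:
            req = urllib.request.Request(url, method="HEAD")
            urllib.request.urlopen(req, timeout=10)
        except Exception as err:  # 404s, timeouts, SSL errors, ...
            print(f"{md}: {url} -> {err}")
```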
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq",
"user_view_type": "public"
}
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7859/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/7859/timeline
| null | null | 0
|
{
"diff_url": "https://github.com/huggingface/datasets/pull/7859.diff",
"html_url": "https://github.com/huggingface/datasets/pull/7859",
"merged_at": "2025-11-10T17:11:05Z",
"patch_url": "https://github.com/huggingface/datasets/pull/7859.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/7859"
}
| true
|
https://api.github.com/repos/huggingface/datasets/issues/7858
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/7858/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/7858/comments
|
https://api.github.com/repos/huggingface/datasets/issues/7858/events
|
https://github.com/huggingface/datasets/pull/7858
| 3,605,471,548
|
PR_kwDODunzps6yZq4r
| 7,858
|
Support downloading specific splits in `load_dataset`
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/31857876?v=4",
"events_url": "https://api.github.com/users/CloseChoice/events{/privacy}",
"followers_url": "https://api.github.com/users/CloseChoice/followers",
"following_url": "https://api.github.com/users/CloseChoice/following{/other_user}",
"gists_url": "https://api.github.com/users/CloseChoice/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/CloseChoice",
"id": 31857876,
"login": "CloseChoice",
"node_id": "MDQ6VXNlcjMxODU3ODc2",
"organizations_url": "https://api.github.com/users/CloseChoice/orgs",
"received_events_url": "https://api.github.com/users/CloseChoice/received_events",
"repos_url": "https://api.github.com/users/CloseChoice/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/CloseChoice/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/CloseChoice/subscriptions",
"type": "User",
"url": "https://api.github.com/users/CloseChoice",
"user_view_type": "public"
}
|
[] |
open
| false
| null |
[] | null |
[
"@CloseChoice This looks great! You're absolutely right about the missing comparison - that's a critical bug I missed. "
] | 2025-11-09T20:44:00
| 2025-11-11T08:04:14
| null |
CONTRIBUTOR
| null | null | null | null |
This PR builds on top of #7706 to revive the unfinished #6832, but it isn't just cleanup; here are some important changes:
- `download_mode="FORCE_REDOWNLOAD"` is interpreted as always creating a clean slate; that means that even if we already did:
```python
load_dataset("<name>")
load_dataset("<name>", split="train", download_mode="force_redownload")
```
This makes sure that only the train dataset is available after executing both. This was different in the original PR, which proposed that train and test would be available.
- `download_mode="REUSE_DATASET_IF_EXISTS"` is interpreted as only ever adding new data, never redownloading OR deleting other splits. This was different in the original PR, where
```python
load_dataset("<name>", split="test")
load_dataset("<name>", split="train")
```
resulted in only the train data being available, which I deem very unintuitive and probably not what users want. I also argue that this is just the first step towards more user-friendly partial loading when specifying percentages (or maybe even single instances) via the ReadInstructions; then doing
```python
load_dataset("<name>", split="test[:10%]")
load_dataset("<name>", split="test[10%:]")
```
should result IMO in the whole dataset being cached locally without redownloads.
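A minimal sketch of the proposed semantics (illustrative; `"<name>"` is a placeholder):
```python
from datasets import load_dataset

# REUSE_DATASET_IF_EXISTS (the default): splits accumulate in the cache.
load_dataset("<name>", split="test")   # downloads and prepares only "test"
load_dataset("<name>", split="train")  # adds "train"; "test" stays cached

# FORCE_REDOWNLOAD: clean slate; only the requested split remains afterwards.
load_dataset("<name>", split="train", download_mode="force_redownload")
```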
Furthermore, this PR fixes a couple of issues with the previous PR, e.g. a [missing comparison](https://github.com/huggingface/datasets/pull/7706/files#diff-f933ce41f71c6c0d1ce658e27de62cbe0b45d777e9e68056dd012ac3eb9324f7R877), and adds tests for the proposed behaviour changes; both would fail on @ArjunJagdale's original PR.
Todo:
- [ ] update docs?
Future outlook (just my opinions and up for debate):
As mentioned before, I see this as just a step towards the feature of partial percentage loading (though how the API should behave in that case is not entirely clear to me yet). Maybe we could also introduce another `download_mode="FORCE_REDOWNLOAD_SPLIT"`, which makes sure that even if a split is specified, only the referenced split is redownloaded and everything else is left unchanged; this would give users more granular control over what they want to redownload.
@lhoestq very curious to get your opinion on this.
| null |
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 2,
"total_count": 2,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7858/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/7858/timeline
| null | null | 0
|
{
"diff_url": "https://github.com/huggingface/datasets/pull/7858.diff",
"html_url": "https://github.com/huggingface/datasets/pull/7858",
"merged_at": null,
"patch_url": "https://github.com/huggingface/datasets/pull/7858.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/7858"
}
| true
|
https://api.github.com/repos/huggingface/datasets/issues/7856
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/7856/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/7856/comments
|
https://api.github.com/repos/huggingface/datasets/issues/7856/events
|
https://github.com/huggingface/datasets/issues/7856
| 3,603,729,142
|
I_kwDODunzps7WzIr2
| 7,856
|
Missing transcript column when loading a local dataset with "audiofolder"
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/10166907?v=4",
"events_url": "https://api.github.com/users/gweltou/events{/privacy}",
"followers_url": "https://api.github.com/users/gweltou/followers",
"following_url": "https://api.github.com/users/gweltou/following{/other_user}",
"gists_url": "https://api.github.com/users/gweltou/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/gweltou",
"id": 10166907,
"login": "gweltou",
"node_id": "MDQ6VXNlcjEwMTY2OTA3",
"organizations_url": "https://api.github.com/users/gweltou/orgs",
"received_events_url": "https://api.github.com/users/gweltou/received_events",
"repos_url": "https://api.github.com/users/gweltou/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/gweltou/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/gweltou/subscriptions",
"type": "User",
"url": "https://api.github.com/users/gweltou",
"user_view_type": "public"
}
|
[] |
closed
| false
| null |
[] | null |
[
"First bad commit 5c8869f8c36dbc8c8d423030b7b7c4fd64f8c729\n\nEDIT: This is not a bug or a regression. It was a breaking change introduced in the commit I mentioned and was also documented in there. The docs state how to handle this now, see https://huggingface.co/docs/datasets/main/en/audio_load#audiofolder-with-metadata\n\nor simply, move your metadata into the splits folder and update the paths, in your case this would look like this:\n```bash\nmy_dataset/\n - data/\n - test/\n - 54db8760de3cfbff3c8a36a36b4d0f77_00390.0_04583.0.mp3\n - 54db8760de3cfbff3c8a36a36b4d0f77_04583.0_05730.0.mp3\n - metadata.jsonl\n```\n\nand the pahts in the jsonl should be relative to the metadata.json:\n```bash\n{\"file_name\": \"54db8760de3cfbff3c8a36a36b4d0f77_00390.0_04583.0.mp3\", \"transcript\": \"Ata tudoù penaos e tro ar bed ?\"}\n{\"file_name\": \"54db8760de3cfbff3c8a36a36b4d0f77_04583.0_05730.0.mp3\", \"transcript\": \"Ur gwir blijadur eo adkavout ac'hanoc'h hiziv.\"}\n...\n```\n\nSo I think this can be closed.",
"Thank you for your quick answer !\nI'm sorry I missed that in the documentation.\nEverything works fine again after following your recommendations.\nI'm closing the issue."
] | 2025-11-08T16:27:58
| 2025-11-09T12:13:38
| 2025-11-09T12:13:38
|
NONE
| null | null |
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
|
{
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
}
|
### Describe the bug
My local dataset is not properly loaded when using `load_dataset("audiofolder", data_dir="my_dataset")` with a `jsonl` metadata file.
Only the `audio` column is read while the `transcript` column is not.
The last tested `datasets` version where the behavior was still correct is 2.18.0.
### Steps to reproduce the bug
Dataset directory structure:
```
my_dataset/
- data/
- test/
- 54db8760de3cfbff3c8a36a36b4d0f77_00390.0_04583.0.mp3
- 54db8760de3cfbff3c8a36a36b4d0f77_04583.0_05730.0.mp3
- ...
- metadata.jsonl
```
`metadata.jsonl` file content:
```
{"file_name": "data/test/54db8760de3cfbff3c8a36a36b4d0f77_00390.0_04583.0.mp3", "transcript": "Ata tudoù penaos e tro ar bed ?"}
{"file_name": "data/test/54db8760de3cfbff3c8a36a36b4d0f77_04583.0_05730.0.mp3", "transcript": "Ur gwir blijadur eo adkavout ac'hanoc'h hiziv."}
...
```
```python3
my_dataset = load_dataset("audiofolder", data_dir="my_dataset")
print(my_dataset)
'''
DatasetDict({
test: Dataset({
features: ['audio'],
num_rows: 347
})
})
'''
print(my_dataset['test'][0])
'''
{'audio': <datasets.features._torchcodec.AudioDecoder object at 0x75ffcd172510>}
'''
```
### Expected behavior
Being able to access the `transcript` column in the loaded dataset.
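For reference, the expected record would look roughly like this (sketch based on the metadata above):
```python3
print(my_dataset['test'][0])
# expected (roughly):
# {'audio': <datasets.features._torchcodec.AudioDecoder object at 0x...>,
#  'transcript': "Ata tudoù penaos e tro ar bed ?"}
```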
### Environment info
- `datasets` version: 4.4.1
- Platform: Linux-6.5.0-45-generic-x86_64-with-glibc2.39
- Python version: 3.13.9
- `huggingface_hub` version: 1.1.2
- PyArrow version: 22.0.0
- Pandas version: 2.3.3
- `fsspec` version: 2025.10.0
Note: same issue with `datasets` v3.6.0
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/10166907?v=4",
"events_url": "https://api.github.com/users/gweltou/events{/privacy}",
"followers_url": "https://api.github.com/users/gweltou/followers",
"following_url": "https://api.github.com/users/gweltou/following{/other_user}",
"gists_url": "https://api.github.com/users/gweltou/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/gweltou",
"id": 10166907,
"login": "gweltou",
"node_id": "MDQ6VXNlcjEwMTY2OTA3",
"organizations_url": "https://api.github.com/users/gweltou/orgs",
"received_events_url": "https://api.github.com/users/gweltou/received_events",
"repos_url": "https://api.github.com/users/gweltou/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/gweltou/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/gweltou/subscriptions",
"type": "User",
"url": "https://api.github.com/users/gweltou",
"user_view_type": "public"
}
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7856/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/7856/timeline
| null |
completed
| null | null | false
|
https://api.github.com/repos/huggingface/datasets/issues/7855
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/7855/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/7855/comments
|
https://api.github.com/repos/huggingface/datasets/issues/7855/events
|
https://github.com/huggingface/datasets/pull/7855
| 3,602,216,153
|
PR_kwDODunzps6yPIRy
| 7,855
|
ArXiv -> HF Papers
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/45557362?v=4",
"events_url": "https://api.github.com/users/qgallouedec/events{/privacy}",
"followers_url": "https://api.github.com/users/qgallouedec/followers",
"following_url": "https://api.github.com/users/qgallouedec/following{/other_user}",
"gists_url": "https://api.github.com/users/qgallouedec/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/qgallouedec",
"id": 45557362,
"login": "qgallouedec",
"node_id": "MDQ6VXNlcjQ1NTU3MzYy",
"organizations_url": "https://api.github.com/users/qgallouedec/orgs",
"received_events_url": "https://api.github.com/users/qgallouedec/received_events",
"repos_url": "https://api.github.com/users/qgallouedec/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/qgallouedec/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/qgallouedec/subscriptions",
"type": "User",
"url": "https://api.github.com/users/qgallouedec",
"user_view_type": "public"
}
|
[] |
closed
| false
| null |
[] | null |
[
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_7855). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update."
] | 2025-11-07T22:16:36
| 2025-11-10T15:01:13
| 2025-11-10T15:01:13
|
MEMBER
| null | null | null | null | null |
{
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq",
"user_view_type": "public"
}
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7855/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/7855/timeline
| null | null | 0
|
{
"diff_url": "https://github.com/huggingface/datasets/pull/7855.diff",
"html_url": "https://github.com/huggingface/datasets/pull/7855",
"merged_at": "2025-11-10T15:01:12Z",
"patch_url": "https://github.com/huggingface/datasets/pull/7855.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/7855"
}
| true
|
https://api.github.com/repos/huggingface/datasets/issues/7854
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/7854/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/7854/comments
|
https://api.github.com/repos/huggingface/datasets/issues/7854/events
|
https://github.com/huggingface/datasets/pull/7854
| 3,596,750,849
|
PR_kwDODunzps6x8yiy
| 7,854
|
[Distributed] split_dataset_by_node() gives the same number of examples for each node
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq",
"user_view_type": "public"
}
|
[] |
open
| false
| null |
[] | null |
[
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_7854). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.",
"Making this work with multiple workers could create a lot of communication for not a lot of benefits, considering you can simply use `Join()` to let nodes shutdown when they run out of data while the other nodes continue training: https://docs.pytorch.org/docs/stable/distributed.algorithms.join.html"
] | 2025-11-06T17:14:18
| 2025-11-10T14:57:44
| null |
MEMBER
| null | null | null | null |
This works:
```python
import torch.distributed as dist
from datasets import IterableDataset
from datasets.distributed import split_dataset_by_node
from collections import Counter
def g(shards):
for shard in shards:
# shards don't have the same length
num_examples = 3 + shard
for i in range(num_examples):
yield {"shard": f"{shard=}", "i": i}
if __name__ == "__main__":
dist.init_process_group(backend="gloo")
rank, world_size = dist.get_rank(), dist.get_world_size()
num_shards = 6
ds = IterableDataset.from_generator(g, gen_kwargs={"shards": list(range(num_shards))})
ds = split_dataset_by_node(ds, rank=rank, world_size=world_size)
# Check that each rank has the same number of examples
# and show the number of examples per shard and per rank
counter = Counter(ds["shard"])
print(f"# {rank=}\ttotal={counter.total()}\t{counter}", flush=True)
# torchrun --nproc_per_node 2 script.py
# rank=0 total=16 Counter({'shard=4': 7, 'shard=2': 5, 'shard=0': 4})
# rank=1 total=16 Counter({'shard=3': 6, 'shard=5': 6, 'shard=1': 4})
```
TODO: make it work with DataLoader (communicate with the main process to know when a node runs out of data?)
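For reference, a minimal sketch of the `Join()` alternative mentioned in the comments (illustrative DDP setup on CPU; the model and batch counts are made up):
```python
import torch
import torch.distributed as dist
from torch.distributed.algorithms.join import Join
from torch.nn.parallel import DistributedDataParallel as DDP

if __name__ == "__main__":
    dist.init_process_group(backend="gloo")
    rank = dist.get_rank()
    model = DDP(torch.nn.Linear(4, 1))
    # Uneven data: rank 0 gets 5 batches, rank 1 gets 8 (illustrative).
    num_batches = 5 + 3 * rank
    # Ranks that run out of batches "join" early and shadow the others' collectives.
    with Join([model]):
        for _ in range(num_batches):
            loss = model(torch.randn(2, 4)).sum()
            loss.backward()
    # torchrun --nproc_per_node 2 join_sketch.py
```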
| null |
{
"+1": 1,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 1,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7854/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/7854/timeline
| null | null | 1
|
{
"diff_url": "https://github.com/huggingface/datasets/pull/7854.diff",
"html_url": "https://github.com/huggingface/datasets/pull/7854",
"merged_at": null,
"patch_url": "https://github.com/huggingface/datasets/pull/7854.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/7854"
}
| true
|
https://api.github.com/repos/huggingface/datasets/issues/7853
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/7853/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/7853/comments
|
https://api.github.com/repos/huggingface/datasets/issues/7853/events
|
https://github.com/huggingface/datasets/pull/7853
| 3,596,232,275
|
PR_kwDODunzps6x7ARa
| 7,853
|
Fix embed storage nifti
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/31857876?v=4",
"events_url": "https://api.github.com/users/CloseChoice/events{/privacy}",
"followers_url": "https://api.github.com/users/CloseChoice/followers",
"following_url": "https://api.github.com/users/CloseChoice/following{/other_user}",
"gists_url": "https://api.github.com/users/CloseChoice/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/CloseChoice",
"id": 31857876,
"login": "CloseChoice",
"node_id": "MDQ6VXNlcjMxODU3ODc2",
"organizations_url": "https://api.github.com/users/CloseChoice/orgs",
"received_events_url": "https://api.github.com/users/CloseChoice/received_events",
"repos_url": "https://api.github.com/users/CloseChoice/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/CloseChoice/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/CloseChoice/subscriptions",
"type": "User",
"url": "https://api.github.com/users/CloseChoice",
"user_view_type": "public"
}
|
[] |
closed
| false
| null |
[] | null |
[
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_7853). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update."
] | 2025-11-06T15:07:58
| 2025-11-06T17:04:57
| 2025-11-06T16:20:36
|
CONTRIBUTOR
| null | null | null | null |
Fixes #7852
Adds an `embed_storage` function and allows gzipped files to be loaded correctly from local storage.
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq",
"user_view_type": "public"
}
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7853/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/7853/timeline
| null | null | 0
|
{
"diff_url": "https://github.com/huggingface/datasets/pull/7853.diff",
"html_url": "https://github.com/huggingface/datasets/pull/7853",
"merged_at": "2025-11-06T16:20:36Z",
"patch_url": "https://github.com/huggingface/datasets/pull/7853.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/7853"
}
| true
|
https://api.github.com/repos/huggingface/datasets/issues/7852
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/7852/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/7852/comments
|
https://api.github.com/repos/huggingface/datasets/issues/7852/events
|
https://github.com/huggingface/datasets/issues/7852
| 3,595,450,602
|
I_kwDODunzps7WTjjq
| 7,852
|
Problems with NifTI
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/31857876?v=4",
"events_url": "https://api.github.com/users/CloseChoice/events{/privacy}",
"followers_url": "https://api.github.com/users/CloseChoice/followers",
"following_url": "https://api.github.com/users/CloseChoice/following{/other_user}",
"gists_url": "https://api.github.com/users/CloseChoice/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/CloseChoice",
"id": 31857876,
"login": "CloseChoice",
"node_id": "MDQ6VXNlcjMxODU3ODc2",
"organizations_url": "https://api.github.com/users/CloseChoice/orgs",
"received_events_url": "https://api.github.com/users/CloseChoice/received_events",
"repos_url": "https://api.github.com/users/CloseChoice/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/CloseChoice/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/CloseChoice/subscriptions",
"type": "User",
"url": "https://api.github.com/users/CloseChoice",
"user_view_type": "public"
}
|
[] |
closed
| false
| null |
[] | null |
[
"> 2. when uploading via the niftifolder feature, the resulting parquet only contains relative paths to the nifti files:\n\nwhat did you use to upload the dataset ? iirc push_to_hub() does upload the bytes as well, but to_parquet() doesn't",
"> > 2. when uploading via the niftifolder feature, the resulting parquet only contains relative paths to the nifti files:\n> \n> what did you use to upload the dataset ? iirc push_to_hub() does upload the bytes as well, but to_parquet() doesn't\n\nI used `push_to_hub` but the problem is that the nifti feature does not have an `embed_storage` function"
] | 2025-11-06T11:46:33
| 2025-11-06T16:20:38
| 2025-11-06T16:20:38
|
CONTRIBUTOR
| null | null |
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
|
{
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
}
|
### Describe the bug
There are currently 2 problems with the new NifTI feature:
1. dealing with zipped files; this is mentioned and explained [here](https://github.com/huggingface/datasets/pull/7815#issuecomment-3496199503)
2. when uploading via the `niftifolder` feature, the resulting parquet only contains relative paths to the nifti files:
```bash
table['nifti']
<pyarrow.lib.ChunkedArray object at 0x798245d37d60>
[
-- is_valid: all not null
-- child 0 type: binary
[
null,
null,
null,
null,
null,
null
]
-- child 1 type: string
[
"/home/tobias/programming/github/datasets/nifti_extracted/T1.nii",
"/home/tobias/programming/github/datasets/nifti_extracted/T2-interleaved.nii",
"/home/tobias/programming/github/datasets/nifti_extracted/T2.nii",
"/home/tobias/programming/github/datasets/nifti_extracted/T2_-interleaved.nii",
"/home/tobias/programming/github/datasets/nifti_extracted/T2_.nii",
"/home/tobias/programming/github/datasets/nifti_extracted/fieldmap.nii"
]
]
```
instead of containing bytes. The code was copy-pasted from the PDF feature, so I wonder what is going wrong here.
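Conceptually, an `embed_storage` step reads each local path and inlines the file bytes into the Arrow storage; a rough sketch (not the actual datasets internals):
```python
import os
import pyarrow as pa

def embed_storage(storage: pa.StructArray) -> pa.StructArray:
    """Sketch: inline local file bytes into a {bytes, path} struct array."""
    bytes_list, path_list = [], []
    for item in storage.to_pylist():
        data, path = item.get("bytes"), item.get("path")
        if data is None and path is not None:
            with open(path, "rb") as f:  # read the local file into memory
                data = f.read()
        bytes_list.append(data)
        path_list.append(os.path.basename(path) if path else None)
    return pa.StructArray.from_arrays(
        [pa.array(bytes_list, type=pa.binary()), pa.array(path_list, type=pa.string())],
        names=["bytes", "path"],
    )
```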
### Steps to reproduce the bug
see the linked comment
### Expected behavior
Downloading should work as smoothly as it does for PDF.
### Environment info
- `datasets` version: 4.4.2.dev0
- Platform: Linux-6.14.0-33-generic-x86_64-with-glibc2.39
- Python version: 3.12.3
- `huggingface_hub` version: 0.35.3
- PyArrow version: 21.0.0
- Pandas version: 2.3.3
- `fsspec` version: 2025.9.0
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq",
"user_view_type": "public"
}
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7852/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/7852/timeline
| null |
completed
| null | null | false
|
https://api.github.com/repos/huggingface/datasets/issues/7851
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/7851/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/7851/comments
|
https://api.github.com/repos/huggingface/datasets/issues/7851/events
|
https://github.com/huggingface/datasets/pull/7851
| 3,592,252,116
|
PR_kwDODunzps6xtvVj
| 7,851
|
Add fasta support
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/209551168?v=4",
"events_url": "https://api.github.com/users/georgia-hf/events{/privacy}",
"followers_url": "https://api.github.com/users/georgia-hf/followers",
"following_url": "https://api.github.com/users/georgia-hf/following{/other_user}",
"gists_url": "https://api.github.com/users/georgia-hf/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/georgia-hf",
"id": 209551168,
"login": "georgia-hf",
"node_id": "U_kgDODH1_QA",
"organizations_url": "https://api.github.com/users/georgia-hf/orgs",
"received_events_url": "https://api.github.com/users/georgia-hf/received_events",
"repos_url": "https://api.github.com/users/georgia-hf/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/georgia-hf/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/georgia-hf/subscriptions",
"type": "User",
"url": "https://api.github.com/users/georgia-hf",
"user_view_type": "public"
}
|
[] |
open
| false
| null |
[] | null |
[
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_7851). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.",
"A few comments:\r\n\r\n- Have you tried using this with longer sequences? @UriNeri developed something similar internally and used it with viral genomes. He got some Parquet errors due to genomes not fitting in a `utf8` column. This was fixed by using `large_utf8`.\r\n- If you're only using it to read FASTA files, I think that having BioPython as a dependency is overkill. The library is very large and the FASTA parser isn't particularly fast. I have an example of a fast parser with no external references [here](https://gist.github.com/apcamargo/d039aa04a2cbbcbb14e2d34a0963b862) (this is actually based on [`readfq.py`](https://github.com/lh3/readfq/blob/master/readfq.py), with a couple of extra functions that might not be useful in the context of this PR)",
"> * If you're only using it to read FASTA files, I think that having BioPython as a dependency is overkill. The library is very large and the FASTA parser isn't particularly fast. I have an example of a fast parser with no external references [here](https://gist.github.com/apcamargo/d039aa04a2cbbcbb14e2d34a0963b862) (this is actually based on [`readfq.py`](https://github.com/lh3/readfq/blob/master/readfq.py), with a couple of extra functions that might not be useful in the context of this PR)\r\n\r\nWhat @apcamargo said, plus FWIW in **our approach** (so might not be relevant here) we use polars (with custom fasta io parser) or polars-bio (that has a `scan_fasta` function) and we foudn out that the page size sometimes need to be adjusted:\r\n```\r\nenvs/default/lib/python3.9/site-packages/polars/lazyframe/frame.py:2422, in LazyFrame.collect(self, type_coercion, predicate_pushdown, projection_pushdown, simplify_expression, slice_pushdown, comm_subplan_elim, comm_subexpr_elim, cluster_with_columns, collapse_joins, no_optimization, engine, background, optimizations, **_kwargs)\r\n 2420 # Only for testing purposes\r\n 2421 callback = _kwargs.get(\"post_opt_callback\", callback)\r\n-> 2422 return wrap_df(ldf.collect(engine, callback))\r\nComputeError: parquet: File out of specification: A page can only contain i32::MAX uncompressed bytes. This one contains 4544943557\r\n```\r\n\r\nWhich in polars can be solved with:\r\n```\r\ndf.write_parquet(\r\n \"test1.patquet\",\r\n compression=\"zstd\",\r\n row_group_size=10_000, # smaller row groups\r\n data_page_size=1024*1024 # 1MB page size\r\n)\r\n```\r\n"
] | 2025-11-05T18:11:12
| 2025-11-15T00:51:53
| null |
NONE
| null | null | null | null |
This PR adds support for converting FASTA files to Parquet.
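For context, a minimal FASTA reader sketch (illustrative only; the PR's actual parser may differ, and well-formed headers are assumed):
```python
from typing import Iterator

def read_fasta(path: str) -> Iterator[dict]:
    """Yield {'id': ..., 'description': ..., 'sequence': ...} records (sketch)."""
    header, chunks = None, []
    with open(path) as f:
        for line in f:
            line = line.rstrip()
            if line.startswith(">"):  # new record starts
                if header is not None:
                    yield {"id": header.split()[0], "description": header, "sequence": "".join(chunks)}
                header, chunks = line[1:], []
            elif line:  # sequence lines may be wrapped; concatenate them
                chunks.append(line)
    if header is not None:  # flush the last record
        yield {"id": header.split()[0], "description": header, "sequence": "".join(chunks)}
```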
| null |
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7851/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/7851/timeline
| null | null | 0
|
{
"diff_url": "https://github.com/huggingface/datasets/pull/7851.diff",
"html_url": "https://github.com/huggingface/datasets/pull/7851",
"merged_at": null,
"patch_url": "https://github.com/huggingface/datasets/pull/7851.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/7851"
}
| true
|
https://api.github.com/repos/huggingface/datasets/issues/7850
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/7850/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/7850/comments
|
https://api.github.com/repos/huggingface/datasets/issues/7850/events
|
https://github.com/huggingface/datasets/pull/7850
| 3,591,758,675
|
PR_kwDODunzps6xsGi_
| 7,850
|
dev version
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq",
"user_view_type": "public"
}
|
[] |
closed
| false
| null |
[] | null |
[
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_7850). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update."
] | 2025-11-05T16:02:23
| 2025-11-05T16:05:40
| 2025-11-05T16:02:32
|
MEMBER
| null | null | null | null | null |
{
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq",
"user_view_type": "public"
}
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7850/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/7850/timeline
| null | null | 0
|
{
"diff_url": "https://github.com/huggingface/datasets/pull/7850.diff",
"html_url": "https://github.com/huggingface/datasets/pull/7850",
"merged_at": "2025-11-05T16:02:32Z",
"patch_url": "https://github.com/huggingface/datasets/pull/7850.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/7850"
}
| true
|
https://api.github.com/repos/huggingface/datasets/issues/7849
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/7849/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/7849/comments
|
https://api.github.com/repos/huggingface/datasets/issues/7849/events
|
https://github.com/huggingface/datasets/pull/7849
| 3,591,749,675
|
PR_kwDODunzps6xsEm0
| 7,849
|
release: 4.4.1
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq",
"user_view_type": "public"
}
|
[] |
closed
| false
| null |
[] | null |
[
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_7849). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update."
] | 2025-11-05T16:00:05
| 2025-11-05T16:03:06
| 2025-11-05T16:00:46
|
MEMBER
| null | null | null | null | null |
{
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq",
"user_view_type": "public"
}
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7849/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/7849/timeline
| null | null | 0
|
{
"diff_url": "https://github.com/huggingface/datasets/pull/7849.diff",
"html_url": "https://github.com/huggingface/datasets/pull/7849",
"merged_at": "2025-11-05T16:00:46Z",
"patch_url": "https://github.com/huggingface/datasets/pull/7849.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/7849"
}
| true
|
https://api.github.com/repos/huggingface/datasets/issues/7848
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/7848/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/7848/comments
|
https://api.github.com/repos/huggingface/datasets/issues/7848/events
|
https://github.com/huggingface/datasets/pull/7848
| 3,590,024,849
|
PR_kwDODunzps6xmPYZ
| 7,848
|
DOC: remove mode parameter in docstring of pdf and video feature
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/31857876?v=4",
"events_url": "https://api.github.com/users/CloseChoice/events{/privacy}",
"followers_url": "https://api.github.com/users/CloseChoice/followers",
"following_url": "https://api.github.com/users/CloseChoice/following{/other_user}",
"gists_url": "https://api.github.com/users/CloseChoice/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/CloseChoice",
"id": 31857876,
"login": "CloseChoice",
"node_id": "MDQ6VXNlcjMxODU3ODc2",
"organizations_url": "https://api.github.com/users/CloseChoice/orgs",
"received_events_url": "https://api.github.com/users/CloseChoice/received_events",
"repos_url": "https://api.github.com/users/CloseChoice/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/CloseChoice/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/CloseChoice/subscriptions",
"type": "User",
"url": "https://api.github.com/users/CloseChoice",
"user_view_type": "public"
}
|
[] |
closed
| false
| null |
[] | null |
[] | 2025-11-05T09:11:46
| 2025-11-05T14:42:59
| 2025-11-05T14:04:03
|
CONTRIBUTOR
| null | null | null | null |
closes #7841
As mentioned in the issue `mode` has been copy-pasted but isn't used in these files.
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq",
"user_view_type": "public"
}
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7848/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/7848/timeline
| null | null | 0
|
{
"diff_url": "https://github.com/huggingface/datasets/pull/7848.diff",
"html_url": "https://github.com/huggingface/datasets/pull/7848",
"merged_at": "2025-11-05T14:04:03Z",
"patch_url": "https://github.com/huggingface/datasets/pull/7848.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/7848"
}
| true
|
https://api.github.com/repos/huggingface/datasets/issues/7847
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/7847/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/7847/comments
|
https://api.github.com/repos/huggingface/datasets/issues/7847/events
|
https://github.com/huggingface/datasets/pull/7847
| 3,586,135,727
|
PR_kwDODunzps6xZZb9
| 7,847
|
Better streaming retries (504 and 429)
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq",
"user_view_type": "public"
}
|
[] |
closed
| false
| null |
[] | null |
[
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_7847). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update."
] | 2025-11-04T11:23:58
| 2025-11-04T13:52:25
| 2025-11-04T13:52:22
|
MEMBER
| null | null | null | null | null |
{
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq",
"user_view_type": "public"
}
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7847/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/7847/timeline
| null | null | 0
|
{
"diff_url": "https://github.com/huggingface/datasets/pull/7847.diff",
"html_url": "https://github.com/huggingface/datasets/pull/7847",
"merged_at": "2025-11-04T13:52:22Z",
"patch_url": "https://github.com/huggingface/datasets/pull/7847.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/7847"
}
| true
|
https://api.github.com/repos/huggingface/datasets/issues/7846
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/7846/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/7846/comments
|
https://api.github.com/repos/huggingface/datasets/issues/7846/events
|
https://github.com/huggingface/datasets/pull/7846
| 3,585,966,335
|
PR_kwDODunzps6xYzny
| 7,846
|
set dev version
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq",
"user_view_type": "public"
}
|
[] |
closed
| false
| null |
[] | null |
[
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_7846). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update."
] | 2025-11-04T10:44:27
| 2025-11-04T10:49:24
| 2025-11-04T10:44:37
|
MEMBER
| null | null | null | null | null |
{
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq",
"user_view_type": "public"
}
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7846/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/7846/timeline
| null | null | 0
|
{
"diff_url": "https://github.com/huggingface/datasets/pull/7846.diff",
"html_url": "https://github.com/huggingface/datasets/pull/7846",
"merged_at": "2025-11-04T10:44:37Z",
"patch_url": "https://github.com/huggingface/datasets/pull/7846.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/7846"
}
| true
|
https://api.github.com/repos/huggingface/datasets/issues/7845
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/7845/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/7845/comments
|
https://api.github.com/repos/huggingface/datasets/issues/7845/events
|
https://github.com/huggingface/datasets/pull/7845
| 3,585,926,647
|
PR_kwDODunzps6xYq2r
| 7,845
|
Release: 4.4.0
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq",
"user_view_type": "public"
}
|
[] |
closed
| false
| null |
[] | null |
[
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_7845). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update."
] | 2025-11-04T10:35:33
| 2025-11-04T10:39:47
| 2025-11-04T10:36:37
|
MEMBER
| null | null | null | null | null |
{
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq",
"user_view_type": "public"
}
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7845/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/7845/timeline
| null | null | 0
|
{
"diff_url": "https://github.com/huggingface/datasets/pull/7845.diff",
"html_url": "https://github.com/huggingface/datasets/pull/7845",
"merged_at": "2025-11-04T10:36:37Z",
"patch_url": "https://github.com/huggingface/datasets/pull/7845.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/7845"
}
| true
|