| column | type | range / values |
|---|---|---|
| url | string | length 58–61 |
| repository_url | string | 1 class |
| labels_url | string | length 72–75 |
| comments_url | string | length 67–70 |
| events_url | string | length 65–68 |
| html_url | string | length 46–51 |
| id | int64 | 599M–3.08B |
| node_id | string | length 18–32 |
| number | int64 | 1–7.58k |
| title | string | length 1–290 |
| user | dict | |
| labels | list | length 0–4 |
| state | string | 2 classes |
| locked | bool | 1 class |
| assignee | dict | |
| assignees | list | length 0–4 |
| milestone | dict | |
| comments | sequence | length 0–30 |
| created_at | timestamp[ns, tz=UTC] | 2020-04-14 10:18:02 – 2025-05-21 16:37:01 |
| updated_at | timestamp[ns, tz=UTC] | 2020-04-27 16:04:17 – 2025-05-21 16:38:27 |
| closed_at | timestamp[ns, tz=UTC] | 2020-04-14 12:01:40 – 2025-05-21 13:17:20, nullable (⌀) |
| author_association | string | 4 classes |
| type | float64 | |
| active_lock_reason | float64 | |
| sub_issues_summary | dict | |
| body | string | length 0–228k, nullable (⌀) |
| closed_by | dict | |
| reactions | dict | |
| timeline_url | string | length 67–70 |
| performed_via_github_app | float64 | |
| state_reason | string | 4 classes |
| draft | float64 | 0–1, nullable (⌀) |
| pull_request | dict | |
| is_pull_request | bool | 2 classes |
https://api.github.com/repos/huggingface/datasets/issues/7577
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/7577/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/7577/comments
|
https://api.github.com/repos/huggingface/datasets/issues/7577/events
|
https://github.com/huggingface/datasets/issues/7577
| 3,080,833,740
|
I_kwDODunzps63ocrM
| 7,577
|
arrow_schema is not compatible with list
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/164412025?v=4",
"events_url": "https://api.github.com/users/jonathanshen-upwork/events{/privacy}",
"followers_url": "https://api.github.com/users/jonathanshen-upwork/followers",
"following_url": "https://api.github.com/users/jonathanshen-upwork/following{/other_user}",
"gists_url": "https://api.github.com/users/jonathanshen-upwork/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/jonathanshen-upwork",
"id": 164412025,
"login": "jonathanshen-upwork",
"node_id": "U_kgDOCcy6eQ",
"organizations_url": "https://api.github.com/users/jonathanshen-upwork/orgs",
"received_events_url": "https://api.github.com/users/jonathanshen-upwork/received_events",
"repos_url": "https://api.github.com/users/jonathanshen-upwork/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/jonathanshen-upwork/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jonathanshen-upwork/subscriptions",
"type": "User",
"url": "https://api.github.com/users/jonathanshen-upwork",
"user_view_type": "public"
}
|
[] |
open
| false
| null |
[] | null |
[] | 2025-05-21T16:37:01
| 2025-05-21T16:38:27
| null |
NONE
| null | null |
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
|
### Describe the bug
```
import datasets
f = datasets.Features({'x': list[datasets.Value(dtype='int32')]})
f.arrow_schema
Traceback (most recent call last):
File "datasets/features/features.py", line 1826, in arrow_schema
return pa.schema(self.type).with_metadata({"huggingface": json.dumps(hf_metadata)})
^^^^^^^^^
File "datasets/features/features.py", line 1815, in type
return get_nested_type(self)
^^^^^^^^^^^^^^^^^^^^^
File "datasets/features/features.py", line 1252, in get_nested_type
return pa.struct(
^^^^^^^^^^
File "pyarrow/types.pxi", line 5406, in pyarrow.lib.struct
File "pyarrow/types.pxi", line 3890, in pyarrow.lib.field
File "pyarrow/types.pxi", line 5918, in pyarrow.lib.ensure_type
TypeError: DataType expected, got <class 'list'>
```
The following works
```
f = datasets.Features({'x': datasets.LargeList(datasets.Value(dtype='int32'))})
```
### Expected behavior
According to https://github.com/huggingface/datasets/blob/458f45a22c3cc9aea5f442f6f519333dcfeae9b9/src/datasets/features/features.py#L1765, a Python list should be a valid type specification for features.
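For reference, a minimal sketch (not part of the original report) of list-feature spellings that do produce a valid `arrow_schema`; the failing call above used the `list[...]` generic syntax rather than a list instance:
```python
import datasets

# A plain Python list instance (not the list[...] generic) is the documented spelling,
# alongside Sequence and the LargeList form noted above.
features_list = datasets.Features({"x": [datasets.Value("int32")]})
features_seq = datasets.Features({"x": datasets.Sequence(datasets.Value("int32"))})
features_large = datasets.Features({"x": datasets.LargeList(datasets.Value("int32"))})

print(features_list.arrow_schema)   # x: list<item: int32>
print(features_seq.arrow_schema)    # x: list<item: int32>
print(features_large.arrow_schema)  # x: large_list<item: int32>
```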
### Environment info
- `datasets` version: 3.5.1
- Platform: macOS-15.5-arm64-arm-64bit
- Python version: 3.12.9
- `huggingface_hub` version: 0.30.2
- PyArrow version: 19.0.1
- Pandas version: 2.2.3
- `fsspec` version: 2024.12.0
| null |
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7577/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/7577/timeline
| null | null | null | null | false
|
https://api.github.com/repos/huggingface/datasets/issues/7576
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/7576/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/7576/comments
|
https://api.github.com/repos/huggingface/datasets/issues/7576/events
|
https://github.com/huggingface/datasets/pull/7576
| 3,080,450,538
|
PR_kwDODunzps6XEuMz
| 7,576
|
Fix regex library warnings
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/35470921?v=4",
"events_url": "https://api.github.com/users/emmanuel-ferdman/events{/privacy}",
"followers_url": "https://api.github.com/users/emmanuel-ferdman/followers",
"following_url": "https://api.github.com/users/emmanuel-ferdman/following{/other_user}",
"gists_url": "https://api.github.com/users/emmanuel-ferdman/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/emmanuel-ferdman",
"id": 35470921,
"login": "emmanuel-ferdman",
"node_id": "MDQ6VXNlcjM1NDcwOTIx",
"organizations_url": "https://api.github.com/users/emmanuel-ferdman/orgs",
"received_events_url": "https://api.github.com/users/emmanuel-ferdman/received_events",
"repos_url": "https://api.github.com/users/emmanuel-ferdman/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/emmanuel-ferdman/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/emmanuel-ferdman/subscriptions",
"type": "User",
"url": "https://api.github.com/users/emmanuel-ferdman",
"user_view_type": "public"
}
|
[] |
open
| false
| null |
[] | null |
[] | 2025-05-21T14:31:58
| 2025-05-21T14:31:58
| null |
NONE
| null | null | null |
# PR Summary
This small PR resolves the regex library warnings that started showing with Python 3.11:
```python
DeprecationWarning: 'count' is passed as positional argument
```
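For context, a hedged sketch (not taken from the PR diff) of the kind of change that silences this warning, passing `count` as a keyword instead of positionally:
```python
import re

text = "a-b-c"

# Deprecated style: the fourth positional argument is `count`
old = re.sub(r"-", " ", text, 1)

# Preferred style: pass count (and flags) as keyword arguments
new = re.sub(r"-", " ", text, count=1)

assert old == new == "a b-c"
```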
| null |
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7576/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/7576/timeline
| null | null | 0
|
{
"diff_url": "https://github.com/huggingface/datasets/pull/7576.diff",
"html_url": "https://github.com/huggingface/datasets/pull/7576",
"merged_at": null,
"patch_url": "https://github.com/huggingface/datasets/pull/7576.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/7576"
}
| true
|
https://api.github.com/repos/huggingface/datasets/issues/7575
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/7575/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/7575/comments
|
https://api.github.com/repos/huggingface/datasets/issues/7575/events
|
https://github.com/huggingface/datasets/pull/7575
| 3,080,228,718
|
PR_kwDODunzps6XD9gM
| 7,575
|
[MINOR:TYPO] Update save_to_disk docstring
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/3664563?v=4",
"events_url": "https://api.github.com/users/cakiki/events{/privacy}",
"followers_url": "https://api.github.com/users/cakiki/followers",
"following_url": "https://api.github.com/users/cakiki/following{/other_user}",
"gists_url": "https://api.github.com/users/cakiki/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/cakiki",
"id": 3664563,
"login": "cakiki",
"node_id": "MDQ6VXNlcjM2NjQ1NjM=",
"organizations_url": "https://api.github.com/users/cakiki/orgs",
"received_events_url": "https://api.github.com/users/cakiki/received_events",
"repos_url": "https://api.github.com/users/cakiki/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/cakiki/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/cakiki/subscriptions",
"type": "User",
"url": "https://api.github.com/users/cakiki",
"user_view_type": "public"
}
|
[] |
open
| false
| null |
[] | null |
[] | 2025-05-21T13:22:24
| 2025-05-21T13:22:24
| null |
CONTRIBUTOR
| null | null | null |
r/hub/filesystem in save_to_disk
| null |
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7575/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/7575/timeline
| null | null | 0
|
{
"diff_url": "https://github.com/huggingface/datasets/pull/7575.diff",
"html_url": "https://github.com/huggingface/datasets/pull/7575",
"merged_at": null,
"patch_url": "https://github.com/huggingface/datasets/pull/7575.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/7575"
}
| true
|
https://api.github.com/repos/huggingface/datasets/issues/7574
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/7574/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/7574/comments
|
https://api.github.com/repos/huggingface/datasets/issues/7574/events
|
https://github.com/huggingface/datasets/issues/7574
| 3,079,641,072
|
I_kwDODunzps63j5fw
| 7,574
|
Missing multilingual directions in IWSLT2017 dataset's processing script
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/79297451?v=4",
"events_url": "https://api.github.com/users/andy-joy-25/events{/privacy}",
"followers_url": "https://api.github.com/users/andy-joy-25/followers",
"following_url": "https://api.github.com/users/andy-joy-25/following{/other_user}",
"gists_url": "https://api.github.com/users/andy-joy-25/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/andy-joy-25",
"id": 79297451,
"login": "andy-joy-25",
"node_id": "MDQ6VXNlcjc5Mjk3NDUx",
"organizations_url": "https://api.github.com/users/andy-joy-25/orgs",
"received_events_url": "https://api.github.com/users/andy-joy-25/received_events",
"repos_url": "https://api.github.com/users/andy-joy-25/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/andy-joy-25/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/andy-joy-25/subscriptions",
"type": "User",
"url": "https://api.github.com/users/andy-joy-25",
"user_view_type": "public"
}
|
[] |
open
| false
| null |
[] | null |
[
"I have opened 2 PRs on the Hub: `https://huggingface.co/datasets/IWSLT/iwslt2017/discussions/7` and `https://huggingface.co/datasets/IWSLT/iwslt2017/discussions/8` to resolve this issue"
] | 2025-05-21T09:53:17
| 2025-05-21T10:14:07
| null |
NONE
| null | null |
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
|
### Describe the bug
Hi,
Upon using `iwslt2017.py` in `IWSLT/iwslt2017` on the Hub to load the datasets, I am unable to obtain the datasets for the language pairs `de-it`, `de-ro`, `de-nl`, `it-de`, `nl-de`, and `ro-de`. These 6 pairs do not show up when using `get_dataset_config_names()` to obtain the list of all the configs present in `IWSLT/iwslt2017`. This should not be the case since, as mentioned in the original paper (please see https://aclanthology.org/2017.iwslt-1.1.pdf), the authors specify that "_this year we proposed the multilingual translation between any pair of languages from {Dutch, English, German, Italian, Romanian}..._" and because these datasets are indeed present in `data/2017-01-trnmted/texts/DeEnItNlRo/DeEnItNlRo/DeEnItNlRo-DeEnItNlRo.zip`.
Best Regards,
Anand
### Steps to reproduce the bug
Check the output of `get_dataset_config_names("IWSLT/iwslt2017", trust_remote_code=True)`: only 24 language pairs are present and the following 6 config names are absent: `iwslt2017-de-it`, `iwslt2017-de-ro`, `iwslt2017-de-nl`, `iwslt2017-it-de`, `iwslt2017-nl-de`, and `iwslt2017-ro-de`.
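A short sketch of that check (illustrative; assumes the installed `datasets` version supports `trust_remote_code`):
```python
from datasets import get_dataset_config_names

configs = get_dataset_config_names("IWSLT/iwslt2017", trust_remote_code=True)
expected_missing = [
    "iwslt2017-de-it", "iwslt2017-de-ro", "iwslt2017-de-nl",
    "iwslt2017-it-de", "iwslt2017-nl-de", "iwslt2017-ro-de",
]
# All six of these configs are absent from the 24 configs that are returned
print([name for name in expected_missing if name not in configs])
```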
### Expected behavior
The aforementioned 6 language pairs should also be present and hence, all these 6 language pairs' IWSLT2017 datasets must also be available for further use.
I would suggest removing `de` from the `BI_LANGUAGES` list and moving it over to the `MULTI_LANGUAGES` list instead in `iwslt2017.py` to account for all the 6 missing language pairs (the same `de-en` dataset is present in both `data/2017-01-trnmted/texts/DeEnItNlRo/DeEnItNlRo/DeEnItNlRo-DeEnItNlRo.zip` and `data/2017-01-trnted/texts/de/en/de-en.zip`, but the `de-ro`, `de-nl`, `it-de`, `nl-de`, and `ro-de` datasets are only present in `data/2017-01-trnmted/texts/DeEnItNlRo/DeEnItNlRo/DeEnItNlRo-DeEnItNlRo.zip`: so, it's unclear why the comment _`# XXX: Artificially removed DE from here, as it also exists within bilingual data`_ was added as `L71` in `iwslt2017.py`). The `README.md` file in `IWSLT/iwslt2017` must then be re-created using `datasets-cli test path/to/iwslt2017.py --save_info --all_configs` to pass all split size verification checks for the 6 new language pairs which were previously non-existent.
### Environment info
- `datasets` version: 3.5.0
- Platform: Linux-6.8.0-56-generic-x86_64-with-glibc2.39
- Python version: 3.12.3
- `huggingface_hub` version: 0.30.1
- PyArrow version: 19.0.1
- Pandas version: 2.2.3
- `fsspec` version: 2024.12.0
| null |
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7574/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/7574/timeline
| null | null | null | null | false
|
https://api.github.com/repos/huggingface/datasets/issues/7573
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/7573/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/7573/comments
|
https://api.github.com/repos/huggingface/datasets/issues/7573/events
|
https://github.com/huggingface/datasets/issues/7573
| 3,076,415,382
|
I_kwDODunzps63Xl-W
| 7,573
|
No Samsum dataset
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/17688220?v=4",
"events_url": "https://api.github.com/users/IgorKasianenko/events{/privacy}",
"followers_url": "https://api.github.com/users/IgorKasianenko/followers",
"following_url": "https://api.github.com/users/IgorKasianenko/following{/other_user}",
"gists_url": "https://api.github.com/users/IgorKasianenko/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/IgorKasianenko",
"id": 17688220,
"login": "IgorKasianenko",
"node_id": "MDQ6VXNlcjE3Njg4MjIw",
"organizations_url": "https://api.github.com/users/IgorKasianenko/orgs",
"received_events_url": "https://api.github.com/users/IgorKasianenko/received_events",
"repos_url": "https://api.github.com/users/IgorKasianenko/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/IgorKasianenko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/IgorKasianenko/subscriptions",
"type": "User",
"url": "https://api.github.com/users/IgorKasianenko",
"user_view_type": "public"
}
|
[] |
open
| false
| null |
[] | null |
[] | 2025-05-20T09:54:35
| 2025-05-20T09:54:35
| null |
NONE
| null | null |
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
|
### Describe the bug
https://huggingface.co/datasets/Samsung/samsum dataset not found error 404
Originated from https://github.com/meta-llama/llama-cookbook/issues/948
### Steps to reproduce the bug
go to website https://huggingface.co/datasets/Samsung/samsum
see the error
Also, downloading it with Python throws:
```
Couldn't find 'Samsung/samsum' on the Hugging Face Hub either: FileNotFoundError: Samsung/samsum@f00baf5a7d4abfec6820415493bcb52c587788e6/samsum.py (repository not found)
```
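A minimal sketch of the failing download described above (illustrative; it mirrors the error shown):
```python
from datasets import load_dataset

# Raises DatasetNotFoundError because the Samsung/samsum repo is no longer available
ds = load_dataset("Samsung/samsum")
```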
### Expected behavior
Dataset exists
### Environment info
```
- `datasets` version: 3.2.0
- Platform: macOS-15.4.1-arm64-arm-64bit
- Python version: 3.12.2
- `huggingface_hub` version: 0.26.5
- PyArrow version: 16.1.0
- Pandas version: 2.2.3
- `fsspec` version: 2024.9.0
```
| null |
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7573/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/7573/timeline
| null | null | null | null | false
|
https://api.github.com/repos/huggingface/datasets/issues/7572
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/7572/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/7572/comments
|
https://api.github.com/repos/huggingface/datasets/issues/7572/events
|
https://github.com/huggingface/datasets/pull/7572
| 3,074,529,251
|
PR_kwDODunzps6WwsZB
| 7,572
|
Fixed typos
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/47208659?v=4",
"events_url": "https://api.github.com/users/TopCoder2K/events{/privacy}",
"followers_url": "https://api.github.com/users/TopCoder2K/followers",
"following_url": "https://api.github.com/users/TopCoder2K/following{/other_user}",
"gists_url": "https://api.github.com/users/TopCoder2K/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/TopCoder2K",
"id": 47208659,
"login": "TopCoder2K",
"node_id": "MDQ6VXNlcjQ3MjA4NjU5",
"organizations_url": "https://api.github.com/users/TopCoder2K/orgs",
"received_events_url": "https://api.github.com/users/TopCoder2K/received_events",
"repos_url": "https://api.github.com/users/TopCoder2K/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/TopCoder2K/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/TopCoder2K/subscriptions",
"type": "User",
"url": "https://api.github.com/users/TopCoder2K",
"user_view_type": "public"
}
|
[] |
open
| false
| null |
[] | null |
[] | 2025-05-19T17:16:59
| 2025-05-19T17:16:59
| null |
CONTRIBUTOR
| null | null | null |
More info: [comment](https://github.com/huggingface/datasets/pull/7564#issuecomment-2863391781).
| null |
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7572/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/7572/timeline
| null | null | 0
|
{
"diff_url": "https://github.com/huggingface/datasets/pull/7572.diff",
"html_url": "https://github.com/huggingface/datasets/pull/7572",
"merged_at": null,
"patch_url": "https://github.com/huggingface/datasets/pull/7572.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/7572"
}
| true
|
https://api.github.com/repos/huggingface/datasets/issues/7571
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/7571/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/7571/comments
|
https://api.github.com/repos/huggingface/datasets/issues/7571/events
|
https://github.com/huggingface/datasets/pull/7571
| 3,074,116,942
|
PR_kwDODunzps6WvRqi
| 7,571
|
fix string_to_dict test
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq",
"user_view_type": "public"
}
|
[] |
closed
| false
| null |
[] | null |
[
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_7571). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update."
] | 2025-05-19T14:49:23
| 2025-05-19T14:52:24
| 2025-05-19T14:49:28
|
MEMBER
| null | null | null | null |
{
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq",
"user_view_type": "public"
}
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7571/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/7571/timeline
| null | null | 0
|
{
"diff_url": "https://github.com/huggingface/datasets/pull/7571.diff",
"html_url": "https://github.com/huggingface/datasets/pull/7571",
"merged_at": "2025-05-19T14:49:28Z",
"patch_url": "https://github.com/huggingface/datasets/pull/7571.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/7571"
}
| true
|
https://api.github.com/repos/huggingface/datasets/issues/7570
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/7570/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/7570/comments
|
https://api.github.com/repos/huggingface/datasets/issues/7570/events
|
https://github.com/huggingface/datasets/issues/7570
| 3,065,966,529
|
I_kwDODunzps62vu_B
| 7,570
|
Dataset lib seems to have broken after fsspec lib update
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/81933585?v=4",
"events_url": "https://api.github.com/users/sleepingcat4/events{/privacy}",
"followers_url": "https://api.github.com/users/sleepingcat4/followers",
"following_url": "https://api.github.com/users/sleepingcat4/following{/other_user}",
"gists_url": "https://api.github.com/users/sleepingcat4/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/sleepingcat4",
"id": 81933585,
"login": "sleepingcat4",
"node_id": "MDQ6VXNlcjgxOTMzNTg1",
"organizations_url": "https://api.github.com/users/sleepingcat4/orgs",
"received_events_url": "https://api.github.com/users/sleepingcat4/received_events",
"repos_url": "https://api.github.com/users/sleepingcat4/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/sleepingcat4/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sleepingcat4/subscriptions",
"type": "User",
"url": "https://api.github.com/users/sleepingcat4",
"user_view_type": "public"
}
|
[] |
open
| false
| null |
[] | null |
[] | 2025-05-15T11:45:06
| 2025-05-15T11:45:06
| null |
NONE
| null | null |
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
|
### Describe the bug
I am facing an issue since today where HF's datasets library is acting weird and in some instances fails to recognise a valid dataset entirely. I think it is happening due to a recent change in the `fsspec` lib, as running this command fixed it for me one time: `!pip install -U datasets huggingface_hub fsspec`
### Steps to reproduce the bug
```python
from datasets import load_dataset

def download_hf():
    dataset_name = input("Enter the dataset name: ")
    subset_name = input("Enter subset name: ")
    ds = load_dataset(dataset_name, name=subset_name)
    for split in ds:
        ds[split].to_pandas().to_csv(f"{subset_name}.csv", index=False)

download_hf()
```
### Expected behavior
```
Downloading readme: 100%
1.55k/1.55k [00:00<00:00, 121kB/s]
Downloading data files: 100%
1/1 [00:00<00:00, 2.06it/s]
Downloading data: 0%| | 0.00/54.2k [00:00<?, ?B/s]
Downloading data: 100%|██████████| 54.2k/54.2k [00:00<00:00, 121kB/s]
Extracting data files: 100%
1/1 [00:00<00:00, 35.17it/s]
Generating test split:
140/0 [00:00<00:00, 2628.62 examples/s]
---------------------------------------------------------------------------
NotImplementedError Traceback (most recent call last)
[<ipython-input-2-12ab305b0e77>](https://localhost:8080/#) in <cell line: 0>()
8 ds[split].to_pandas().to_csv(f"{subset_name}.csv", index=False)
9
---> 10 download_hf()
2 frames
[/usr/local/lib/python3.11/dist-packages/datasets/builder.py](https://localhost:8080/#) in as_dataset(self, split, run_post_process, verification_mode, ignore_verifications, in_memory)
1171 is_local = not is_remote_filesystem(self._fs)
1172 if not is_local:
-> 1173 raise NotImplementedError(f"Loading a dataset cached in a {type(self._fs).__name__} is not supported.")
1174 if not os.path.exists(self._output_dir):
1175 raise FileNotFoundError(
NotImplementedError: Loading a dataset cached in a LocalFileSystem is not supported.
```
OR
```
Traceback (most recent call last):
File "e:\Fuck\download-data\mcq_dataset.py", line 10, in <module>
download_hf()
File "e:\Fuck\download-data\mcq_dataset.py", line 6, in download_hf
ds = load_dataset(dataset_name, name=subset_name)
File "C:\Users\DELL\AppData\Local\Programs\Python\Python310\lib\site-packages\datasets\load.py", line 2606, in load_dataset
builder_instance = load_dataset_builder(
File "C:\Users\DELL\AppData\Local\Programs\Python\Python310\lib\site-packages\datasets\load.py", line 2277, in load_dataset_builder
dataset_module = dataset_module_factory(
File "C:\Users\DELL\AppData\Local\Programs\Python\Python310\lib\site-packages\datasets\load.py", line 1917, in dataset_module_factory
raise e1 from None
File "C:\Users\DELL\AppData\Local\Programs\Python\Python310\lib\site-packages\datasets\load.py", line 1867, in dataset_module_factory
raise DatasetNotFoundError(f"Dataset '{path}' doesn't exist on the Hub or cannot be accessed.") from e
datasets.exceptions.DatasetNotFoundError: Dataset 'dataset repo_id' doesn't exist on the Hub or cannot be accessed.
```
### Environment info
colab and 3.10 local system
| null |
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7570/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/7570/timeline
| null | null | null | null | false
|
https://api.github.com/repos/huggingface/datasets/issues/7569
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/7569/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/7569/comments
|
https://api.github.com/repos/huggingface/datasets/issues/7569/events
|
https://github.com/huggingface/datasets/issues/7569
| 3,061,234,054
|
I_kwDODunzps62drmG
| 7,569
|
Dataset creation is broken if nesting a dict inside a dict inside a list
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/25732590?v=4",
"events_url": "https://api.github.com/users/TimSchneider42/events{/privacy}",
"followers_url": "https://api.github.com/users/TimSchneider42/followers",
"following_url": "https://api.github.com/users/TimSchneider42/following{/other_user}",
"gists_url": "https://api.github.com/users/TimSchneider42/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/TimSchneider42",
"id": 25732590,
"login": "TimSchneider42",
"node_id": "MDQ6VXNlcjI1NzMyNTkw",
"organizations_url": "https://api.github.com/users/TimSchneider42/orgs",
"received_events_url": "https://api.github.com/users/TimSchneider42/received_events",
"repos_url": "https://api.github.com/users/TimSchneider42/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/TimSchneider42/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/TimSchneider42/subscriptions",
"type": "User",
"url": "https://api.github.com/users/TimSchneider42",
"user_view_type": "public"
}
|
[] |
open
| false
| null |
[] | null |
[
"Hi ! That's because Séquence is a type that comes from tensorflow datasets and inverts lists and focus when doing Séquence(dict).\n\nInstead you should use a list. In your case\n```python\nfeatures = Features({\n \"a\": [{\"b\": {\"c\": Value(\"string\")}}]\n})\n```",
"Hi,\n\nThanks for the swift reply! Could you quickly clarify a couple of points?\n\n1. Is there any benefit in using Sequence over normal lists? Especially for longer lists (in my case, up to 256 entries)\n2. When exactly can I use Sequence? If there is a maximum of one level of dictionaries inside, then it's always fine?\n3. When creating the data in the generator, do I need to swap lists and dicts manually, or does that happen automatically?\n\nAlso, the documentation does not seem to mention this limitation of the Sequence type anywhere and encourages users to use it [here](https://huggingface.co/docs/datasets/en/about_dataset_features). In fact, I did not even know that just using a Python list was an option. Maybe the documentation can be improved to mention the limitations of Sequence and highlight that lists can be used instead.\n\nThanks a lot in advance!\n\nBest,\nTim"
] | 2025-05-13T21:06:45
| 2025-05-20T19:25:15
| null |
NONE
| null | null |
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
|
### Describe the bug
Hey,
I noticed that the creation of datasets with `Dataset.from_generator` is broken if dicts and lists are nested in a certain way and a schema is being passed. See below for details.
Best,
Tim
### Steps to reproduce the bug
Running this code:
```python
from datasets import Dataset, Features, Sequence, Value
def generator():
yield {
"a": [{"b": {"c": 0}}],
}
features = Features(
{
"a": Sequence(
feature={
"b": {
"c": Value("int32"),
},
},
length=1,
)
}
)
dataset = Dataset.from_generator(generator, features=features)
```
leads to
```
Generating train split: 1 examples [00:00, 540.85 examples/s]
Traceback (most recent call last):
File "/home/user/miniconda3/envs/test/lib/python3.11/site-packages/datasets/builder.py", line 1635, in _prepare_split_single
num_examples, num_bytes = writer.finalize()
^^^^^^^^^^^^^^^^^
File "/home/user/miniconda3/envs/test/lib/python3.11/site-packages/datasets/arrow_writer.py", line 657, in finalize
self.write_examples_on_file()
File "/home/user/miniconda3/envs/test/lib/python3.11/site-packages/datasets/arrow_writer.py", line 510, in write_examples_on_file
self.write_batch(batch_examples=batch_examples)
File "/home/user/miniconda3/envs/test/lib/python3.11/site-packages/datasets/arrow_writer.py", line 629, in write_batch
pa_table = pa.Table.from_arrays(arrays, schema=schema)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "pyarrow/table.pxi", line 4851, in pyarrow.lib.Table.from_arrays
File "pyarrow/table.pxi", line 1608, in pyarrow.lib._sanitize_arrays
File "pyarrow/array.pxi", line 399, in pyarrow.lib.asarray
File "pyarrow/array.pxi", line 1004, in pyarrow.lib.Array.cast
File "/home/user/miniconda3/envs/test/lib/python3.11/site-packages/pyarrow/compute.py", line 405, in cast
return call_function("cast", [arr], options, memory_pool)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "pyarrow/_compute.pyx", line 598, in pyarrow._compute.call_function
File "pyarrow/_compute.pyx", line 393, in pyarrow._compute.Function.call
File "pyarrow/error.pxi", line 155, in pyarrow.lib.pyarrow_internal_check_status
File "pyarrow/error.pxi", line 92, in pyarrow.lib.check_status
pyarrow.lib.ArrowNotImplementedError: Unsupported cast from fixed_size_list<item: struct<c: int32>>[1] to struct using function cast_struct
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "/home/user/test/tools/hf_test2.py", line 23, in <module>
dataset = Dataset.from_generator(generator, features=features)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/user/miniconda3/envs/test/lib/python3.11/site-packages/datasets/arrow_dataset.py", line 1114, in from_generator
).read()
^^^^^^
File "/home/user/miniconda3/envs/test/lib/python3.11/site-packages/datasets/io/generator.py", line 49, in read
self.builder.download_and_prepare(
File "/home/user/miniconda3/envs/test/lib/python3.11/site-packages/datasets/builder.py", line 925, in download_and_prepare
self._download_and_prepare(
File "/home/user/miniconda3/envs/test/lib/python3.11/site-packages/datasets/builder.py", line 1649, in _download_and_prepare
super()._download_and_prepare(
File "/home/user/miniconda3/envs/test/lib/python3.11/site-packages/datasets/builder.py", line 1001, in _download_and_prepare
self._prepare_split(split_generator, **prepare_split_kwargs)
File "/home/user/miniconda3/envs/test/lib/python3.11/site-packages/datasets/builder.py", line 1487, in _prepare_split
for job_id, done, content in self._prepare_split_single(
File "/home/user/miniconda3/envs/test/lib/python3.11/site-packages/datasets/builder.py", line 1644, in _prepare_split_single
raise DatasetGenerationError("An error occurred while generating the dataset") from e
datasets.exceptions.DatasetGenerationError: An error occurred while generating the dataset
Process finished with exit code 1
```
### Expected behavior
I expected this code not to lead to an error.
I have done some digging and figured out that the problem seems to be the `get_nested_type` function in `features.py`, which, for whatever reason, flips Sequences and dicts whenever it encounters a dict inside of a sequence. This seems to be necessary, as disabling that flip leads to another error. However, by keeping that flip enabled for the highest level and disabling it for all subsequent levels, I was able to work around this problem. Specifically, by patching `get_nested_type` as follows, it works on the given example (emphasis on the `level` parameter I added):
```python
def get_nested_type(schema: FeatureType, level=0) -> pa.DataType:
"""
get_nested_type() converts a datasets.FeatureType into a pyarrow.DataType, and acts as the inverse of
generate_from_arrow_type().
It performs double-duty as the implementation of Features.type and handles the conversion of
datasets.Feature->pa.struct
"""
# Nested structures: we allow dict, list/tuples, sequences
if isinstance(schema, Features):
return pa.struct(
{key: get_nested_type(schema[key], level = level + 1) for key in schema}
) # Features is subclass of dict, and dict order is deterministic since Python 3.6
elif isinstance(schema, dict):
return pa.struct(
{key: get_nested_type(schema[key], level = level + 1) for key in schema}
) # however don't sort on struct types since the order matters
elif isinstance(schema, (list, tuple)):
if len(schema) != 1:
raise ValueError("When defining list feature, you should just provide one example of the inner type")
value_type = get_nested_type(schema[0], level = level + 1)
return pa.list_(value_type)
elif isinstance(schema, LargeList):
value_type = get_nested_type(schema.feature, level = level + 1)
return pa.large_list(value_type)
elif isinstance(schema, Sequence):
value_type = get_nested_type(schema.feature, level = level + 1)
# We allow to reverse list of dict => dict of list for compatibility with tfds
if isinstance(schema.feature, dict) and level == 1:
data_type = pa.struct({f.name: pa.list_(f.type, schema.length) for f in value_type})
else:
data_type = pa.list_(value_type, schema.length)
return data_type
# Other objects are callable which returns their data type (ClassLabel, Array2D, Translation, Arrow datatype creation methods)
return schema()
```
I have honestly no idea what I am doing here, so this might produce other issues for different inputs.
### Environment info
- `datasets` version: 3.6.0
- Platform: Linux-6.8.0-59-generic-x86_64-with-glibc2.35
- Python version: 3.11.11
- `huggingface_hub` version: 0.30.2
- PyArrow version: 19.0.1
- Pandas version: 2.2.3
- `fsspec` version: 2024.12.0
Also tested it with 3.5.0, same result.
| null |
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7569/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/7569/timeline
| null | null | null | null | false
|
https://api.github.com/repos/huggingface/datasets/issues/7568
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/7568/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/7568/comments
|
https://api.github.com/repos/huggingface/datasets/issues/7568/events
|
https://github.com/huggingface/datasets/issues/7568
| 3,060,515,257
|
I_kwDODunzps62a8G5
| 7,568
|
`IterableDatasetDict.map()` call removes `column_names` (in fact info.features)
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/7893763?v=4",
"events_url": "https://api.github.com/users/mombip/events{/privacy}",
"followers_url": "https://api.github.com/users/mombip/followers",
"following_url": "https://api.github.com/users/mombip/following{/other_user}",
"gists_url": "https://api.github.com/users/mombip/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/mombip",
"id": 7893763,
"login": "mombip",
"node_id": "MDQ6VXNlcjc4OTM3NjM=",
"organizations_url": "https://api.github.com/users/mombip/orgs",
"received_events_url": "https://api.github.com/users/mombip/received_events",
"repos_url": "https://api.github.com/users/mombip/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/mombip/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mombip/subscriptions",
"type": "User",
"url": "https://api.github.com/users/mombip",
"user_view_type": "public"
}
|
[] |
open
| false
| null |
[] | null |
[
"Hi ! IterableDataset doesn't know what's the output of the function you pass to map(), so it's not possible to know in advance the features of the output dataset.\n\nThere is a workaround though: either do `ds = ds.map(..., features=features)`, or you can do `ds = ds._resolve_features()` which iterates on the first rows to infer the dataset features.",
"Thank you. I understand that “IterableDataset doesn't know what's the output of the function”—that’s true, but:\n\nUnfortunately, the workaround you proposed **doesn’t solve** the problem. `ds.map()` is called multiple times by third-party code (i.e. `SFTTrainer`). To apply your approach, I would have to modify external library code. That’s why I decided to patch the _class_ rather than update `dataset` _objects_ (in fact, updating the object after `map()` was my initial approach, but then I realized I’m not the only one mapping an already-mapped dataset.)\n\nAs a user, I expected that after mapping I would get a new dataset with the correct column names. If, for some reason, that can’t be the default behavior, I would expect an argument—i.e. `auto_resolve_features: bool = False` — to control how my dataset is mapped if following mapping operation are called.\n\nIt’s also problematic that `column_names` are tied to `features`, which is even more confusing and forces you to inspect the source code to understand what’s going on.\n\n**New version of workaround:**\n```python\ndef patch_iterable_dataset_map():\n _orig_map = IterableDataset.map\n\n def _patched_map(self, *args, **kwargs):\n ds = _orig_map(self, *args, **kwargs)\n return ds._resolve_features()\n\n IterableDataset.map = _patched_map\n```",
"I see, maybe `.resolve_features()` should be called by default in this case in the SFTTrainer ? (or pass `features=` if the data processing always output the same features)\n\nWe can even support a new parameter `features=\"infer\"` if it would be comfortable to not use internal methods in SFTTrainer",
"I think most straightforward solution would be to reinitialize `features` from data after mapping if `feature` argument is not passed. I hink it is more intuitive behavior than just cleaning features. There is also problem in usage `.resolve_features()` in this context. I observed that it leads to `_head()` method execution and it then causes that 5 batches from dataset are iterated (`_head()` defaults to 5 batches). \nI'm not sure how it influences whole process. Are those 5 batches (in my case it's 5000 rows) used only to find `features`. Does final training/eval process \"see\" this items? How it affects IterableDataset state (current position)?",
"I checked the source code and while it indeed iterates on the first 5 rows. As a normal iteration, it does record the state in case you call `.state_dict()`, but it doesn't change the starting state. The starting state is always the beginning of the dataset, unless it is explicitly set with `.load_state_dict()`. To be clear, if you iterate on the dataset after `._resolve_features()`, it will start from the beginning of the dataset (or from a state you manually pass using `.load_state_dict()`)"
] | 2025-05-13T15:45:42
| 2025-05-19T12:09:48
| null |
NONE
| null | null |
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
|
When calling `IterableDatasetDict.map()`, each split’s `IterableDataset.map()` is invoked without a `features` argument. While omitting the argument isn’t itself incorrect, the implementation then sets `info.features = features`, which destroys the original `features` content. Since `IterableDataset.column_names` relies on `info.features`, it ends up broken (`None`).
**Reproduction**
1. Define an IterableDatasetDict with a non-None features schema.
2. `my_iterable_dataset_dict` contains a "text" column.
3. Call:
```Python
new_dict = my_iterable_dataset_dict.map(
function=my_fn,
with_indices=False,
batched=True,
batch_size=16,
)
```
4. Observe
```Python
new_dict["train"].info.features # {'text': Value(dtype='string', id=None)}
new_dict["train"].column_names # ['text']
```
5. Call:
```Python
new_dict = my_iterable_dataset_dict.map(
function=my_fn,
with_indices=False,
batched=True,
batch_size=16,
remove_columns=["foo"]
)
```
6. Observe:
```Python
new_dict["train"].info.features # → None
new_dict["train"].column_names # → None
```
7. Internally, in `dataset_dict.py` this loop omits features ([code](https://github.com/huggingface/datasets/blob/b9efdc64c3bfb8f21f8a4a22b21bddd31ecd5a31/src/datasets/dataset_dict.py#L2047C5-L2056C14)):
```Python
for split, dataset in self.items():
dataset_dict[split] = dataset.map(
function=function,
with_indices=with_indices,
input_columns=input_columns,
batched=batched,
batch_size=batch_size,
drop_last_batch=drop_last_batch,
remove_columns=remove_columns,
fn_kwargs=fn_kwargs,
# features omitted → defaults to None
)
```
8. Then inside `IterableDataset.map()` ([code](https://github.com/huggingface/datasets/blob/b9efdc64c3bfb8f21f8a4a22b21bddd31ecd5a31/src/datasets/iterable_dataset.py#L2619C1-L2622C37)) the correct `info.features` is replaced by `features`, which is None:
```Python
info = self.info.copy()
info.features = features # features is None here
return IterableDataset(..., info=info, ...)
```
**Suggestion**
It looks like this replacement was added intentionally but maybe should be done only if `features` is `not None`.
**Workaround:**
`SFTTrainer` calls `dataset.map()` several times and then fails on `NoneType` when iterating `dataset.column_names`.
I decided to write this patch, which works for me.
```python
def patch_iterable_dataset_map():
_orig_map = IterableDataset.map
def _patched_map(self, *args, **kwargs):
if "features" not in kwargs or kwargs["features"] is None:
kwargs["features"] = self.info.features
return _orig_map(self, *args, **kwargs)
IterableDataset.map = _patched_map
```
| null |
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7568/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/7568/timeline
| null | null | null | null | false
|
https://api.github.com/repos/huggingface/datasets/issues/7567
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/7567/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/7567/comments
|
https://api.github.com/repos/huggingface/datasets/issues/7567/events
|
https://github.com/huggingface/datasets/issues/7567
| 3,058,308,538
|
I_kwDODunzps62ShW6
| 7,567
|
interleave_datasets seed with multiple workers
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/511073?v=4",
"events_url": "https://api.github.com/users/jonathanasdf/events{/privacy}",
"followers_url": "https://api.github.com/users/jonathanasdf/followers",
"following_url": "https://api.github.com/users/jonathanasdf/following{/other_user}",
"gists_url": "https://api.github.com/users/jonathanasdf/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/jonathanasdf",
"id": 511073,
"login": "jonathanasdf",
"node_id": "MDQ6VXNlcjUxMTA3Mw==",
"organizations_url": "https://api.github.com/users/jonathanasdf/orgs",
"received_events_url": "https://api.github.com/users/jonathanasdf/received_events",
"repos_url": "https://api.github.com/users/jonathanasdf/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/jonathanasdf/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jonathanasdf/subscriptions",
"type": "User",
"url": "https://api.github.com/users/jonathanasdf",
"user_view_type": "public"
}
|
[] |
open
| false
| null |
[] | null |
[
"Hi ! It's already the case IIRC: the effective seed looks like `seed + worker_id`. Do you have a reproducible example ?",
"here is an example with shuffle\n\n```\nimport itertools\nimport datasets\nimport multiprocessing\nimport torch.utils.data\n\n\ndef gen(shard):\n worker_info = torch.utils.data.get_worker_info()\n for i in range(10):\n yield {'value': i, 'worker_id': worker_info.id}\n\n\ndef main():\n ds = datasets.IterableDataset.from_generator(gen, gen_kwargs={'shard': list(range(8))})\n ds = ds.shuffle(buffer_size=100, seed=1234)\n dataloader = torch.utils.data.DataLoader(ds, batch_size=None, num_workers=8)\n for i, ex in enumerate(itertools.islice(dataloader, 50)):\n print(i, ex)\n\n\nif __name__ == '__main__':\n multiprocessing.set_start_method('spawn')\n main()\n```\n\n```\npython test.py\n0 {'value': 8, 'worker_id': 0}\n1 {'value': 8, 'worker_id': 1}\n2 {'value': 8, 'worker_id': 2}\n3 {'value': 8, 'worker_id': 3}\n4 {'value': 8, 'worker_id': 4}\n5 {'value': 8, 'worker_id': 5}\n6 {'value': 8, 'worker_id': 6}\n7 {'value': 8, 'worker_id': 7}\n8 {'value': 9, 'worker_id': 0}\n9 {'value': 9, 'worker_id': 1}\n10 {'value': 9, 'worker_id': 2}\n11 {'value': 9, 'worker_id': 3}\n12 {'value': 9, 'worker_id': 4}\n13 {'value': 9, 'worker_id': 5}\n14 {'value': 9, 'worker_id': 6}\n15 {'value': 9, 'worker_id': 7}\n16 {'value': 5, 'worker_id': 0}\n17 {'value': 5, 'worker_id': 1}\n18 {'value': 5, 'worker_id': 2}\n19 {'value': 5, 'worker_id': 3}\n```",
"With `interleave_datasets`\n\n```\nimport itertools\nimport datasets\nimport multiprocessing\nimport torch.utils.data\n\n\ndef gen(shard, value):\n while True:\n yield {'value': value}\n\n\ndef main():\n ds = [\n datasets.IterableDataset.from_generator(gen, gen_kwargs={'shard': list(range(8)), 'value': i})\n for i in range(10)\n ]\n ds = datasets.interleave_datasets(ds, probabilities=[1 / len(ds)] * len(ds), seed=1234)\n dataloader = torch.utils.data.DataLoader(ds, batch_size=None, num_workers=8)\n for i, ex in enumerate(itertools.islice(dataloader, 50)):\n print(i, ex)\n\n\nif __name__ == '__main__':\n multiprocessing.set_start_method('spawn')\n main()\n```\n\n```\npython test.py\n0 {'value': 9}\n1 {'value': 9}\n2 {'value': 9}\n3 {'value': 9}\n4 {'value': 9}\n5 {'value': 9}\n6 {'value': 9}\n7 {'value': 9}\n8 {'value': 3}\n9 {'value': 3}\n10 {'value': 3}\n11 {'value': 3}\n12 {'value': 3}\n13 {'value': 3}\n14 {'value': 3}\n15 {'value': 3}\n16 {'value': 9}\n17 {'value': 9}\n18 {'value': 9}\n19 {'value': 9}\n20 {'value': 9}\n21 {'value': 9}\n22 {'value': 9}\n23 {'value': 9}\n```",
"Same results after updating to datasets 3.6.0.",
"Ah my bad, `shuffle()` uses a global effective seed which is something like `seed + epoch`, which is used to do the same shards shuffle in each worker so that each worker have a non-overlapping set of shards:\n\nhttps://github.com/huggingface/datasets/blob/b9efdc64c3bfb8f21f8a4a22b21bddd31ecd5a31/src/datasets/iterable_dataset.py#L2102-L2111\n\nI think we should take into account the `worker_id` in a local seed for the buffer right after this line:\n\nhttps://github.com/huggingface/datasets/blob/b9efdc64c3bfb8f21f8a4a22b21bddd31ecd5a31/src/datasets/iterable_dataset.py#L2151-L2153\n\nlike adding a new step that would propagate in the examples iterables or something like that:\n\n```python\nex_iterable = ex_iterable.shift_rngs(value=worker_id)\n```\n\nis this something you'd like to explore ? contributions on this subject are very welcome",
"Potentially, but busy. If anyone wants to take this up please feel free to, otherwise I may or may not revisit when I have free time.\n\nFor what it's worth I got around this with\n\n```\n\nclass SeedGeneratorWithWorkerIterable(iterable_dataset._BaseExamplesIterable):\n \"\"\"ExamplesIterable that seeds the rng with worker id.\"\"\"\n\n def __init__(\n self,\n ex_iterable: iterable_dataset._BaseExamplesIterable,\n generator: np.random.Generator,\n rank: int = 0,\n ):\n \"\"\"Constructor.\"\"\"\n super().__init__()\n self.ex_iterable = ex_iterable\n self.generator = generator\n self.rank = rank\n\n def _init_state_dict(self) -> dict:\n self._state_dict = self.ex_iterable._init_state_dict()\n return self._state_dict\n\n def __iter__(self):\n \"\"\"Data iterator.\"\"\"\n effective_seed = copy.deepcopy(self.generator).integers(0, 1 << 63) - self.rank\n effective_seed = (1 << 63) + effective_seed if effective_seed < 0 else effective_seed\n generator = np.random.default_rng(effective_seed)\n self.ex_iterable = self.ex_iterable.shuffle_data_sources(generator)\n if self._state_dict:\n self._state_dict = self.ex_iterable._init_state_dict()\n yield from iter(self.ex_iterable)\n\n def shuffle_data_sources(self, generator):\n \"\"\"Shuffle data sources.\"\"\"\n ex_iterable = self.ex_iterable.shuffle_data_sources(generator)\n return SeedGeneratorWithWorkerIterable(ex_iterable, generator=generator, rank=self.rank)\n\n def shard_data_sources(self, num_shards: int, index: int, contiguous=True): # noqa: FBT002\n \"\"\"Shard data sources.\"\"\"\n ex_iterable = self.ex_iterable.shard_data_sources(num_shards, index, contiguous=contiguous)\n return SeedGeneratorWithWorkerIterable(ex_iterable, generator=self.generator, rank=index)\n\n @property\n def is_typed(self):\n return self.ex_iterable.is_typed\n\n @property\n def features(self):\n return self.ex_iterable.features\n\n @property\n def num_shards(self) -> int:\n \"\"\"Number of shards.\"\"\"\n return self.ex_iterable.num_shards\n```"
] | 2025-05-12T22:38:27
| 2025-05-15T20:39:37
| null |
NONE
| null | null |
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
|
### Describe the bug
Using interleave_datasets with multiple dataloader workers and a seed set causes the same dataset sampling order across all workers.
Should the seed be modulated with the worker id?
### Steps to reproduce the bug
See above
### Expected behavior
See above
### Environment info
- `datasets` version: 3.5.1
- Platform: macOS-15.4.1-arm64-arm-64bit
- Python version: 3.12.9
- `huggingface_hub` version: 0.30.2
- PyArrow version: 19.0.1
- Pandas version: 2.2.3
- `fsspec` version: 2024.12.0
| null |
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7567/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/7567/timeline
| null | null | null | null | false
|
https://api.github.com/repos/huggingface/datasets/issues/7566
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/7566/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/7566/comments
|
https://api.github.com/repos/huggingface/datasets/issues/7566/events
|
https://github.com/huggingface/datasets/issues/7566
| 3,055,279,344
|
I_kwDODunzps62G9zw
| 7,566
|
terminate called without an active exception; Aborted (core dumped)
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/18581488?v=4",
"events_url": "https://api.github.com/users/alexey-milovidov/events{/privacy}",
"followers_url": "https://api.github.com/users/alexey-milovidov/followers",
"following_url": "https://api.github.com/users/alexey-milovidov/following{/other_user}",
"gists_url": "https://api.github.com/users/alexey-milovidov/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/alexey-milovidov",
"id": 18581488,
"login": "alexey-milovidov",
"node_id": "MDQ6VXNlcjE4NTgxNDg4",
"organizations_url": "https://api.github.com/users/alexey-milovidov/orgs",
"received_events_url": "https://api.github.com/users/alexey-milovidov/received_events",
"repos_url": "https://api.github.com/users/alexey-milovidov/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/alexey-milovidov/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/alexey-milovidov/subscriptions",
"type": "User",
"url": "https://api.github.com/users/alexey-milovidov",
"user_view_type": "public"
}
|
[] |
open
| false
| null |
[] | null |
[] | 2025-05-11T23:05:54
| 2025-05-11T23:05:54
| null |
NONE
| null | null |
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
|
### Describe the bug
I use it as in the tutorial here: https://huggingface.co/docs/datasets/stream, and it ends up with an abort.
### Steps to reproduce the bug
1. `pip install datasets`
2.
```
$ cat main.py
#!/usr/bin/env python3
from datasets import load_dataset
dataset = load_dataset('HuggingFaceFW/fineweb', split='train', streaming=True)
print(next(iter(dataset)))
```
3. `chmod +x main.py`
```
$ ./main.py
README.md: 100%|████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 43.1k/43.1k [00:00<00:00, 7.04MB/s]
Resolving data files: 100%|██████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 25868/25868 [00:05<00:00, 4859.26it/s]
Resolving data files: 100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 25868/25868 [00:00<00:00, 54773.56it/s]
{'text': "How AP reported in all formats from tornado-stricken regionsMarch 8, 2012\nWhen the first serious bout of tornadoes of 2012 blew through middle America in the middle of the night, they touched down in places hours from any AP bureau. Our closest video journalist was Chicago-based Robert Ray, who dropped his plans to travel to Georgia for Super Tuesday, booked several flights to the cities closest to the strikes and headed for the airport. He’d decide once there which flight to take.\nHe never got on board a plane. Instead, he ended up driving toward Harrisburg, Ill., where initial reports suggested a town was destroyed. That decision turned out to be a lucky break for the AP. Twice.\nRay was among the first journalists to arrive and he confirmed those reports -- in all formats. He shot powerful video, put victims on the phone with AP Radio and played back sound to an editor who transcribed the interviews and put the material on text wires. He then walked around the devastation with the Central Regional Desk on the line, talking to victims with the phone held so close that editors could transcribe his interviews in real time.\nRay also made a dramatic image of a young girl who found a man’s prosthetic leg in the rubble, propped it up next to her destroyed home and spray-painted an impromptu sign: “Found leg. Seriously.”\nThe following day, he was back on the road and headed for Georgia and a Super Tuesday date with Newt Gingrich’s campaign. The drive would take him through a stretch of the South that forecasters expected would suffer another wave of tornadoes.\nTo prevent running into THAT storm, Ray used his iPhone to monitor Doppler radar, zooming in on extreme cells and using Google maps to direct himself to safe routes. And then the journalist took over again.\n“When weather like that occurs, a reporter must seize the opportunity to get the news out and allow people to see, hear and read the power of nature so that they can take proper shelter,” Ray says.\nSo Ray now started to use his phone to follow the storms. He attached a small GoPro camera to his steering wheel in case a tornado dropped down in front of the car somewhere, and took video of heavy rain and hail with his iPhone. Soon, he spotted a tornado and the chase was on. He followed an unmarked emergency vehicle to Cleveland, Tenn., where he was first on the scene of the storm's aftermath.\nAgain, the tornadoes had struck in locations that were hours from the nearest AP bureau. Damage and debris, as well as a wickedly violent storm that made travel dangerous, slowed our efforts to get to the news. That wasn’t a problem in Tennessee, where our customers were well served by an all-formats report that included this text story.\n“CLEVELAND, Tenn. (AP) _ Fierce wind, hail and rain lashed Tennessee for the second time in three days, and at least 15 people were hospitalized Friday in the Chattanooga area.”\nThe byline? Robert Ray.\nFor being adept with technology, chasing after news as it literally dropped from the sky and setting a standard for all-formats reporting that put the AP ahead on the most competitive news story of the day, Ray wins this week’s $300 Best of the States prize.\n© 2013 The Associated Press. All rights reserved. Terms and conditions apply. 
See AP.org for details.", 'id': '<urn:uuid:d66bc6fe-8477-4adf-b430-f6a558ccc8ff>', 'dump': 'CC-MAIN-2013-20', 'url': 'http://%20jwashington@ap.org/Content/Press-Release/2012/How-AP-reported-in-all-formats-from-tornado-stricken-regions', 'date': '2013-05-18T05:48:54Z', 'file_path': 's3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368696381249/warc/CC-MAIN-20130516092621-00000-ip-10-60-113-184.ec2.internal.warc.gz', 'language': 'en', 'language_score': 0.9721424579620361, 'token_count': 717}
terminate called without an active exception
Aborted (core dumped)
```
### Expected behavior
I'm not a proficient Python user, so it might be my own error, but even in that case, the error message should be better.
### Environment info
`Successfully installed datasets-3.6.0 dill-0.3.8 hf-xet-1.1.0 huggingface-hub-0.31.1 multiprocess-0.70.16 requests-2.32.3 xxhash-3.5.0`
```
$ cat /etc/lsb-release
DISTRIB_ID=Ubuntu
DISTRIB_RELEASE=22.04
DISTRIB_CODENAME=jammy
DISTRIB_DESCRIPTION="Ubuntu 22.04.4 LTS"
```
| null |
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7566/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/7566/timeline
| null | null | null | null | false
|
https://api.github.com/repos/huggingface/datasets/issues/7565
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/7565/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/7565/comments
|
https://api.github.com/repos/huggingface/datasets/issues/7565/events
|
https://github.com/huggingface/datasets/pull/7565
| 3,051,731,207
|
PR_kwDODunzps6VkFBm
| 7,565
|
add check if repo exists for dataset uploading
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/36135455?v=4",
"events_url": "https://api.github.com/users/Samoed/events{/privacy}",
"followers_url": "https://api.github.com/users/Samoed/followers",
"following_url": "https://api.github.com/users/Samoed/following{/other_user}",
"gists_url": "https://api.github.com/users/Samoed/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/Samoed",
"id": 36135455,
"login": "Samoed",
"node_id": "MDQ6VXNlcjM2MTM1NDU1",
"organizations_url": "https://api.github.com/users/Samoed/orgs",
"received_events_url": "https://api.github.com/users/Samoed/received_events",
"repos_url": "https://api.github.com/users/Samoed/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/Samoed/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Samoed/subscriptions",
"type": "User",
"url": "https://api.github.com/users/Samoed",
"user_view_type": "public"
}
|
[] |
open
| false
| null |
[] | null |
[
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_7565). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update."
] | 2025-05-09T10:27:00
| 2025-05-19T16:03:17
| null |
NONE
| null | null | null |
Currently, I'm reuploading datasets for [`MTEB`](https://github.com/embeddings-benchmark/mteb/). Some of them have many splits (more than 20), and I'm encountering the error:
`Too many requests for https://huggingface.co/datasets/repo/create`.
It seems that this issue occurs because the dataset tries to recreate itself every time a split is uploaded. To resolve this, I've added a check to ensure that if the dataset already exists, it won't attempt to recreate it.
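For reference, a minimal sketch of the kind of existence check described above (the repo id is a placeholder, and this is not the PR's actual diff):
```python
from huggingface_hub import HfApi
from huggingface_hub.utils import RepositoryNotFoundError

api = HfApi()
repo_id = "mteb/some-dataset"  # placeholder repo id

# Only call the create endpoint when the dataset repo does not exist yet,
# so pushing many splits does not hit /create on every upload.
try:
    api.repo_info(repo_id, repo_type="dataset")
except RepositoryNotFoundError:
    api.create_repo(repo_id, repo_type="dataset")
```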
| null |
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7565/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/7565/timeline
| null | null | 0
|
{
"diff_url": "https://github.com/huggingface/datasets/pull/7565.diff",
"html_url": "https://github.com/huggingface/datasets/pull/7565",
"merged_at": null,
"patch_url": "https://github.com/huggingface/datasets/pull/7565.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/7565"
}
| true
|
https://api.github.com/repos/huggingface/datasets/issues/7564
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/7564/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/7564/comments
|
https://api.github.com/repos/huggingface/datasets/issues/7564/events
|
https://github.com/huggingface/datasets/pull/7564
| 3,049,275,226
|
PR_kwDODunzps6VczLS
| 7,564
|
Implementation of iteration over values of a column in an IterableDataset object
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/47208659?v=4",
"events_url": "https://api.github.com/users/TopCoder2K/events{/privacy}",
"followers_url": "https://api.github.com/users/TopCoder2K/followers",
"following_url": "https://api.github.com/users/TopCoder2K/following{/other_user}",
"gists_url": "https://api.github.com/users/TopCoder2K/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/TopCoder2K",
"id": 47208659,
"login": "TopCoder2K",
"node_id": "MDQ6VXNlcjQ3MjA4NjU5",
"organizations_url": "https://api.github.com/users/TopCoder2K/orgs",
"received_events_url": "https://api.github.com/users/TopCoder2K/received_events",
"repos_url": "https://api.github.com/users/TopCoder2K/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/TopCoder2K/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/TopCoder2K/subscriptions",
"type": "User",
"url": "https://api.github.com/users/TopCoder2K",
"user_view_type": "public"
}
|
[] |
closed
| false
| null |
[] | null |
[
"A couple of questions:\r\n1. I've noticed two strange things: 1) \"Around 80% of the final dataset is made of the `en_dataset`\" in https://huggingface.co/docs/datasets/stream, 2) \"Click on \"Pull request\" to send your to the project maintainers\" in https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md Are `en_dataset` and \"your [???]\" typos? If so, I can fix them in this PR.\r\n2. Should I update https://huggingface.co/docs/datasets/stream or https://huggingface.co/docs/datasets/access#iterabledataset to include the new feature?",
"Great ! and chained indexing was easy indeed, thanks :)\r\n\r\nregarding your questions:\r\n\r\n> I've noticed two strange things: 1) \"Around 80% of the final dataset is made of the en_dataset\" in https://huggingface.co/docs/datasets/stream, 2) \"Click on \"Pull request\" to send your to the project maintainers\" in https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md Are en_dataset and \"your [???]\" typos? If so, I can fix them in this PR.\r\n\r\nOh good catch, both should be fixed indeed. Feel free to open a new PR for those docs fixes\r\n\r\n> Should I update https://huggingface.co/docs/datasets/stream or https://huggingface.co/docs/datasets/access#iterabledataset to include the new feature?\r\n\r\nYep good idea, I think in both places, since /stream is supposed to be exhaustive, and /access already mentions accessing a specific column for `Dataset`",
"@lhoestq, thank you for the answers!\r\n\r\n> Yep good idea, I think in both places, since /stream is supposed to be exhaustive, and /access already mentions accessing a specific column for Dataset\r\n\r\n👍, I'll try to add something.\r\n\r\nBy the way, do you have any ideas about why the CI pipelines have failed? Essentially, I've already encountered these problems [here](https://github.com/huggingface/datasets/issues/7381#issuecomment-2863421974).\r\nI think `check_code_quality` has failed due to the usage of `pre-commit`. The problem seems to be the old version of the ruff hook. I've tried `v0.11.8` (the one that was installed with `pip install -e \".[quality]\"`) and `pre-commit` seems to work like `make style` now. However, I don't have any ideas about `pyav` since I don't know what it is...",
"I've updated /stream and /access, please check the style and clarity. By the way, I would like to add `IterableDataset.skip` near `IterableDataset.take` to mimic [slicing](https://huggingface.co/docs/datasets/access/#slicing). What do you think?",
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_7564). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update."
] | 2025-05-08T14:59:22
| 2025-05-19T12:15:02
| 2025-05-19T12:15:02
|
CONTRIBUTOR
| null | null | null |
Refers to [this issue](https://github.com/huggingface/datasets/issues/7381).
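A rough usage sketch of what this adds, assuming the final API exposes lazy column access via `ds[column_name]` as discussed in the linked issue (toy data only):
```python
from datasets import Dataset

ids = Dataset.from_dict({"text": ["foo", "bar", "baz"]}).to_iterable_dataset()

# Iterate over the values of a single column lazily,
# instead of iterating over full example dicts.
for text in ids["text"]:
    print(text)
```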
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq",
"user_view_type": "public"
}
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7564/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/7564/timeline
| null | null | 0
|
{
"diff_url": "https://github.com/huggingface/datasets/pull/7564.diff",
"html_url": "https://github.com/huggingface/datasets/pull/7564",
"merged_at": "2025-05-19T12:15:02Z",
"patch_url": "https://github.com/huggingface/datasets/pull/7564.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/7564"
}
| true
|
https://api.github.com/repos/huggingface/datasets/issues/7563
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/7563/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/7563/comments
|
https://api.github.com/repos/huggingface/datasets/issues/7563/events
|
https://github.com/huggingface/datasets/pull/7563
| 3,046,351,253
|
PR_kwDODunzps6VS0QL
| 7,563
|
set dev version
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq",
"user_view_type": "public"
}
|
[] |
closed
| false
| null |
[] | null |
[
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_7563). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update."
] | 2025-05-07T15:18:29
| 2025-05-07T15:21:05
| 2025-05-07T15:18:36
|
MEMBER
| null | null | null | null |
{
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq",
"user_view_type": "public"
}
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7563/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/7563/timeline
| null | null | 0
|
{
"diff_url": "https://github.com/huggingface/datasets/pull/7563.diff",
"html_url": "https://github.com/huggingface/datasets/pull/7563",
"merged_at": "2025-05-07T15:18:36Z",
"patch_url": "https://github.com/huggingface/datasets/pull/7563.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/7563"
}
| true
|
https://api.github.com/repos/huggingface/datasets/issues/7562
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/7562/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/7562/comments
|
https://api.github.com/repos/huggingface/datasets/issues/7562/events
|
https://github.com/huggingface/datasets/pull/7562
| 3,046,339,430
|
PR_kwDODunzps6VSxmx
| 7,562
|
release: 3.6.0
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq",
"user_view_type": "public"
}
|
[] |
closed
| false
| null |
[] | null |
[
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_7562). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update."
] | 2025-05-07T15:15:13
| 2025-05-07T15:17:46
| 2025-05-07T15:15:21
|
MEMBER
| null | null | null | null |
{
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq",
"user_view_type": "public"
}
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 1,
"laugh": 0,
"rocket": 0,
"total_count": 1,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7562/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/7562/timeline
| null | null | 0
|
{
"diff_url": "https://github.com/huggingface/datasets/pull/7562.diff",
"html_url": "https://github.com/huggingface/datasets/pull/7562",
"merged_at": "2025-05-07T15:15:20Z",
"patch_url": "https://github.com/huggingface/datasets/pull/7562.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/7562"
}
| true
|
https://api.github.com/repos/huggingface/datasets/issues/7561
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/7561/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/7561/comments
|
https://api.github.com/repos/huggingface/datasets/issues/7561/events
|
https://github.com/huggingface/datasets/issues/7561
| 3,046,302,653
|
I_kwDODunzps61kuO9
| 7,561
|
NotImplementedError: <class 'datasets.iterable_dataset.RepeatExamplesIterable'> doesn't implement num_shards yet
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/32219669?v=4",
"events_url": "https://api.github.com/users/cyanic-selkie/events{/privacy}",
"followers_url": "https://api.github.com/users/cyanic-selkie/followers",
"following_url": "https://api.github.com/users/cyanic-selkie/following{/other_user}",
"gists_url": "https://api.github.com/users/cyanic-selkie/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/cyanic-selkie",
"id": 32219669,
"login": "cyanic-selkie",
"node_id": "MDQ6VXNlcjMyMjE5NjY5",
"organizations_url": "https://api.github.com/users/cyanic-selkie/orgs",
"received_events_url": "https://api.github.com/users/cyanic-selkie/received_events",
"repos_url": "https://api.github.com/users/cyanic-selkie/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/cyanic-selkie/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/cyanic-selkie/subscriptions",
"type": "User",
"url": "https://api.github.com/users/cyanic-selkie",
"user_view_type": "public"
}
|
[] |
open
| false
| null |
[] | null |
[] | 2025-05-07T15:05:42
| 2025-05-07T15:05:42
| null |
NONE
| null | null |
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
|
### Describe the bug
When using `.repeat()` on an `IterableDataset`, this error gets thrown. There is [this thread](https://discuss.huggingface.co/t/making-an-infinite-iterabledataset/146192/5) that seems to imply the fix is trivial, but I don't know anything about this codebase, so I'm opening this issue rather than attempting to open a PR.
### Steps to reproduce the bug
1. Create an `IterableDataset`.
2. Call `.repeat(None)` on it.
3. Wrap it in a pytorch `DataLoader`
4. Iterate over it (a minimal sketch of these steps is below).
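A minimal sketch of those steps on a toy in-memory dataset (illustrative values; the exact `DataLoader` settings that trigger the `num_shards` lookup may vary):
```python
from datasets import Dataset
from torch.utils.data import DataLoader

# 1. Create an IterableDataset (toy example)
ids = Dataset.from_dict({"x": list(range(8))}).to_iterable_dataset(num_shards=2)

# 2. Call .repeat(None) on it to make it infinite
ids = ids.repeat(None)

# 3. Wrap it in a pytorch DataLoader
loader = DataLoader(ids, batch_size=4, num_workers=2)

# 4. Iterate over it -> raises
#    NotImplementedError: RepeatExamplesIterable doesn't implement num_shards yet
for batch in loader:
    break
```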
### Expected behavior
This should work normally.
### Environment info
datasets: 3.5.0
| null |
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7561/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/7561/timeline
| null | null | null | null | false
|
https://api.github.com/repos/huggingface/datasets/issues/7560
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/7560/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/7560/comments
|
https://api.github.com/repos/huggingface/datasets/issues/7560/events
|
https://github.com/huggingface/datasets/pull/7560
| 3,046,265,500
|
PR_kwDODunzps6VShIc
| 7,560
|
fix decoding tests
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq",
"user_view_type": "public"
}
|
[] |
closed
| false
| null |
[] | null |
[
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_7560). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update."
] | 2025-05-07T14:56:14
| 2025-05-07T14:59:02
| 2025-05-07T14:56:20
|
MEMBER
| null | null | null | null |
{
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq",
"user_view_type": "public"
}
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7560/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/7560/timeline
| null | null | 0
|
{
"diff_url": "https://github.com/huggingface/datasets/pull/7560.diff",
"html_url": "https://github.com/huggingface/datasets/pull/7560",
"merged_at": "2025-05-07T14:56:20Z",
"patch_url": "https://github.com/huggingface/datasets/pull/7560.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/7560"
}
| true
|
https://api.github.com/repos/huggingface/datasets/issues/7559
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/7559/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/7559/comments
|
https://api.github.com/repos/huggingface/datasets/issues/7559/events
|
https://github.com/huggingface/datasets/pull/7559
| 3,046,177,078
|
PR_kwDODunzps6VSNiX
| 7,559
|
fix aiohttp import
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq",
"user_view_type": "public"
}
|
[] |
closed
| false
| null |
[] | null |
[
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_7559). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update."
] | 2025-05-07T14:31:32
| 2025-05-07T14:34:34
| 2025-05-07T14:31:38
|
MEMBER
| null | null | null | null |
{
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq",
"user_view_type": "public"
}
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7559/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/7559/timeline
| null | null | 0
|
{
"diff_url": "https://github.com/huggingface/datasets/pull/7559.diff",
"html_url": "https://github.com/huggingface/datasets/pull/7559",
"merged_at": "2025-05-07T14:31:38Z",
"patch_url": "https://github.com/huggingface/datasets/pull/7559.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/7559"
}
| true
|
https://api.github.com/repos/huggingface/datasets/issues/7558
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/7558/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/7558/comments
|
https://api.github.com/repos/huggingface/datasets/issues/7558/events
|
https://github.com/huggingface/datasets/pull/7558
| 3,046,066,628
|
PR_kwDODunzps6VR1gN
| 7,558
|
fix regression
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq",
"user_view_type": "public"
}
|
[] |
closed
| false
| null |
[] | null |
[
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_7558). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update."
] | 2025-05-07T13:56:03
| 2025-05-07T13:58:52
| 2025-05-07T13:56:18
|
MEMBER
| null | null | null |
reported in https://github.com/huggingface/datasets/pull/7557 (I just reorganized the condition)
wanted to apply this change to the original PR but github didn't let me apply it directly - merging this one instead
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq",
"user_view_type": "public"
}
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7558/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/7558/timeline
| null | null | 0
|
{
"diff_url": "https://github.com/huggingface/datasets/pull/7558.diff",
"html_url": "https://github.com/huggingface/datasets/pull/7558",
"merged_at": "2025-05-07T13:56:18Z",
"patch_url": "https://github.com/huggingface/datasets/pull/7558.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/7558"
}
| true
|
https://api.github.com/repos/huggingface/datasets/issues/7557
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/7557/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/7557/comments
|
https://api.github.com/repos/huggingface/datasets/issues/7557/events
|
https://github.com/huggingface/datasets/pull/7557
| 3,045,962,076
|
PR_kwDODunzps6VRenr
| 7,557
|
check for empty _formatting
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/381258?v=4",
"events_url": "https://api.github.com/users/winglian/events{/privacy}",
"followers_url": "https://api.github.com/users/winglian/followers",
"following_url": "https://api.github.com/users/winglian/following{/other_user}",
"gists_url": "https://api.github.com/users/winglian/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/winglian",
"id": 381258,
"login": "winglian",
"node_id": "MDQ6VXNlcjM4MTI1OA==",
"organizations_url": "https://api.github.com/users/winglian/orgs",
"received_events_url": "https://api.github.com/users/winglian/received_events",
"repos_url": "https://api.github.com/users/winglian/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/winglian/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/winglian/subscriptions",
"type": "User",
"url": "https://api.github.com/users/winglian",
"user_view_type": "public"
}
|
[] |
closed
| false
| null |
[] | null |
[
"Thanks for reporting and for the fix ! I tried to reorganize the condition in your PR but didn't get the right permission so. I ended up merging https://github.com/huggingface/datasets/pull/7558 directly so I can make a release today - I hope you don't mind"
] | 2025-05-07T13:22:37
| 2025-05-07T13:57:12
| 2025-05-07T13:57:12
|
CONTRIBUTOR
| null | null | null |
Fixes a regression from #7553 breaking shuffling of iterable datasets
<img width="884" alt="Screenshot 2025-05-07 at 9 16 52 AM" src="https://github.com/user-attachments/assets/d2f43c5f-4092-4efe-ac31-a32cbd025fe3" />
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq",
"user_view_type": "public"
}
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7557/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/7557/timeline
| null | null | 0
|
{
"diff_url": "https://github.com/huggingface/datasets/pull/7557.diff",
"html_url": "https://github.com/huggingface/datasets/pull/7557",
"merged_at": null,
"patch_url": "https://github.com/huggingface/datasets/pull/7557.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/7557"
}
| true
|
https://api.github.com/repos/huggingface/datasets/issues/7556
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/7556/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/7556/comments
|
https://api.github.com/repos/huggingface/datasets/issues/7556/events
|
https://github.com/huggingface/datasets/pull/7556
| 3,043,615,210
|
PR_kwDODunzps6VJlTR
| 7,556
|
Add `--merge-pull-request` option for `convert_to_parquet`
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/17013474?v=4",
"events_url": "https://api.github.com/users/klamike/events{/privacy}",
"followers_url": "https://api.github.com/users/klamike/followers",
"following_url": "https://api.github.com/users/klamike/following{/other_user}",
"gists_url": "https://api.github.com/users/klamike/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/klamike",
"id": 17013474,
"login": "klamike",
"node_id": "MDQ6VXNlcjE3MDEzNDc0",
"organizations_url": "https://api.github.com/users/klamike/orgs",
"received_events_url": "https://api.github.com/users/klamike/received_events",
"repos_url": "https://api.github.com/users/klamike/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/klamike/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/klamike/subscriptions",
"type": "User",
"url": "https://api.github.com/users/klamike",
"user_view_type": "public"
}
|
[] |
open
| false
| null |
[] | null |
[
"This is ready for a review, happy to make any changes. The main question for maintainers is how this should interact with #7555. If my suggestion there is accepted, this PR can be kept as is. If not, more changes are required to merge all the PR parts."
] | 2025-05-06T18:05:05
| 2025-05-07T17:41:16
| null |
NONE
| null | null | null |
Closes #7527
Note that this implementation **will only merge the last PR in the case where the upload gets split into multiple PRs by `push_to_hub`**. See https://github.com/huggingface/datasets/discussions/7555 for more details.
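A sketch of how the new option would be invoked, assuming it lands as a plain boolean flag on the existing command:
```
datasets-cli convert_to_parquet <dataset-name> --merge-pull-request
```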
| null |
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7556/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/7556/timeline
| null | null | 0
|
{
"diff_url": "https://github.com/huggingface/datasets/pull/7556.diff",
"html_url": "https://github.com/huggingface/datasets/pull/7556",
"merged_at": null,
"patch_url": "https://github.com/huggingface/datasets/pull/7556.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/7556"
}
| true
|
https://api.github.com/repos/huggingface/datasets/issues/7554
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/7554/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/7554/comments
|
https://api.github.com/repos/huggingface/datasets/issues/7554/events
|
https://github.com/huggingface/datasets/issues/7554
| 3,043,089,844
|
I_kwDODunzps61Yd20
| 7,554
|
datasets downloads and generates all splits, even though a single split is requested (for dataset with loading script)
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/50171988?v=4",
"events_url": "https://api.github.com/users/sei-eschwartz/events{/privacy}",
"followers_url": "https://api.github.com/users/sei-eschwartz/followers",
"following_url": "https://api.github.com/users/sei-eschwartz/following{/other_user}",
"gists_url": "https://api.github.com/users/sei-eschwartz/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/sei-eschwartz",
"id": 50171988,
"login": "sei-eschwartz",
"node_id": "MDQ6VXNlcjUwMTcxOTg4",
"organizations_url": "https://api.github.com/users/sei-eschwartz/orgs",
"received_events_url": "https://api.github.com/users/sei-eschwartz/received_events",
"repos_url": "https://api.github.com/users/sei-eschwartz/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/sei-eschwartz/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sei-eschwartz/subscriptions",
"type": "User",
"url": "https://api.github.com/users/sei-eschwartz",
"user_view_type": "public"
}
|
[] |
closed
| false
| null |
[] | null |
[
"Hi ! there has been some effort on allowing to download only a subset of splits in https://github.com/huggingface/datasets/pull/6832 but no one has been continuing this work so far. This would be a welcomed contribution though\n\nAlso note that loading script are often unoptimized, and we recommend using datasets in standard formats like Parquet instead.\n\nBtw there is a CLI tool to convert a loading script to parquet:\n\n```\ndatasets-cli convert_to_parquet <dataset-name> --trust_remote_code\n```",
"Closing in favor of #6832 "
] | 2025-05-06T14:43:38
| 2025-05-07T14:53:45
| 2025-05-07T14:53:44
|
NONE
| null | null |
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
|
### Describe the bug
`datasets` downloads and generates all splits, even though a single split is requested. [This](https://huggingface.co/datasets/jordiae/exebench) is the dataset in question. It uses a loading script. I am not 100% sure that this is a bug, because maybe with loading scripts `datasets` must actually process all the splits? But I thought loading scripts were designed to avoid this.
### Steps to reproduce the bug
See [this notebook](https://colab.research.google.com/drive/14kcXp_hgcdj-kIzK0bCG6taE-CLZPVvq?usp=sharing)
Or:
```python
from datasets import load_dataset
dataset = load_dataset('jordiae/exebench', split='test_synth', trust_remote_code=True)
```
### Expected behavior
I expected only the `test_synth` split to be downloaded and processed.
### Environment info
- `datasets` version: 3.5.1
- Platform: Linux-6.1.123+-x86_64-with-glibc2.35
- Python version: 3.11.12
- `huggingface_hub` version: 0.30.2
- PyArrow version: 18.1.0
- Pandas version: 2.2.2
- `fsspec` version: 2025.3.0
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/50171988?v=4",
"events_url": "https://api.github.com/users/sei-eschwartz/events{/privacy}",
"followers_url": "https://api.github.com/users/sei-eschwartz/followers",
"following_url": "https://api.github.com/users/sei-eschwartz/following{/other_user}",
"gists_url": "https://api.github.com/users/sei-eschwartz/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/sei-eschwartz",
"id": 50171988,
"login": "sei-eschwartz",
"node_id": "MDQ6VXNlcjUwMTcxOTg4",
"organizations_url": "https://api.github.com/users/sei-eschwartz/orgs",
"received_events_url": "https://api.github.com/users/sei-eschwartz/received_events",
"repos_url": "https://api.github.com/users/sei-eschwartz/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/sei-eschwartz/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sei-eschwartz/subscriptions",
"type": "User",
"url": "https://api.github.com/users/sei-eschwartz",
"user_view_type": "public"
}
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7554/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/7554/timeline
| null |
duplicate
| null | null | false
|
https://api.github.com/repos/huggingface/datasets/issues/7553
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/7553/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/7553/comments
|
https://api.github.com/repos/huggingface/datasets/issues/7553/events
|
https://github.com/huggingface/datasets/pull/7553
| 3,042,953,907
|
PR_kwDODunzps6VHUNW
| 7,553
|
Rebatch arrow iterables before formatted iterable
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq",
"user_view_type": "public"
}
|
[] |
closed
| false
| null |
[] | null |
[
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_7553). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.",
"@lhoestq Our CI found an issue with this changeset causing a regression with shuffling iterable datasets \r\n<img width=\"884\" alt=\"Screenshot 2025-05-07 at 9 16 52 AM\" src=\"https://github.com/user-attachments/assets/bf7d9c7e-cc14-47da-8da6-d1a345992d7c\" />\r\n"
] | 2025-05-06T13:59:58
| 2025-05-07T13:17:41
| 2025-05-06T14:03:42
|
MEMBER
| null | null | null |
close https://github.com/huggingface/datasets/issues/7538 and https://github.com/huggingface/datasets/issues/7475
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq",
"user_view_type": "public"
}
|
{
"+1": 1,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 1,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7553/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/7553/timeline
| null | null | 0
|
{
"diff_url": "https://github.com/huggingface/datasets/pull/7553.diff",
"html_url": "https://github.com/huggingface/datasets/pull/7553",
"merged_at": "2025-05-06T14:03:41Z",
"patch_url": "https://github.com/huggingface/datasets/pull/7553.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/7553"
}
| true
|
https://api.github.com/repos/huggingface/datasets/issues/7552
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/7552/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/7552/comments
|
https://api.github.com/repos/huggingface/datasets/issues/7552/events
|
https://github.com/huggingface/datasets/pull/7552
| 3,040,258,084
|
PR_kwDODunzps6U-BUv
| 7,552
|
Enable xet in push to hub
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq",
"user_view_type": "public"
}
|
[] |
closed
| false
| null |
[] | null |
[
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_7552). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update."
] | 2025-05-05T17:02:09
| 2025-05-06T12:42:51
| 2025-05-06T12:42:48
|
MEMBER
| null | null | null |
follows https://github.com/huggingface/huggingface_hub/pull/3035
related to https://github.com/huggingface/datasets/issues/7526
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq",
"user_view_type": "public"
}
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7552/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/7552/timeline
| null | null | 0
|
{
"diff_url": "https://github.com/huggingface/datasets/pull/7552.diff",
"html_url": "https://github.com/huggingface/datasets/pull/7552",
"merged_at": "2025-05-06T12:42:48Z",
"patch_url": "https://github.com/huggingface/datasets/pull/7552.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/7552"
}
| true
|
https://api.github.com/repos/huggingface/datasets/issues/7551
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/7551/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/7551/comments
|
https://api.github.com/repos/huggingface/datasets/issues/7551/events
|
https://github.com/huggingface/datasets/issues/7551
| 3,038,114,928
|
I_kwDODunzps61FfRw
| 7,551
|
Issue with offline mode and partial dataset cached
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/353245?v=4",
"events_url": "https://api.github.com/users/nrv/events{/privacy}",
"followers_url": "https://api.github.com/users/nrv/followers",
"following_url": "https://api.github.com/users/nrv/following{/other_user}",
"gists_url": "https://api.github.com/users/nrv/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/nrv",
"id": 353245,
"login": "nrv",
"node_id": "MDQ6VXNlcjM1MzI0NQ==",
"organizations_url": "https://api.github.com/users/nrv/orgs",
"received_events_url": "https://api.github.com/users/nrv/received_events",
"repos_url": "https://api.github.com/users/nrv/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/nrv/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/nrv/subscriptions",
"type": "User",
"url": "https://api.github.com/users/nrv",
"user_view_type": "public"
}
|
[] |
open
| false
| null |
[] | null |
[
"It seems the problem comes from builder.py / create_config_id()\n\nOn the first call, when the cache is empty we have\n```\nconfig_kwargs = {'data_files': {'train': ['hf://datasets/uonlp/CulturaX@6a8734bc69fefcbb7735f4f9250f43e4cd7a442e/fr/fr_part_00038.parquet']}}\n```\nleading to config_id beeing 'default-2935e8cdcc21c613'\n\nthen, on the second call, \n```\nconfig_kwargs = {'data_files': 'fr/fr_part_00038.parquet'}\n```\nthus explaining why the hash is not the same, despite having the same parameter when calling load_dataset : data_files=\"fr/fr_part_00038.parquet\"",
"Same behavior with version 3.5.1",
"Same issue when loading `google/IndicGenBench_flores_in` with `dataset==2.21.0` and `dataset==3.6.0` .",
"\n\n\n> It seems the problem comes from builder.py / create_config_id()\n> \n> On the first call, when the cache is empty we have\n> \n> ```\n> config_kwargs = {'data_files': {'train': ['hf://datasets/uonlp/CulturaX@6a8734bc69fefcbb7735f4f9250f43e4cd7a442e/fr/fr_part_00038.parquet']}}\n> ```\n> \n> leading to config_id beeing 'default-2935e8cdcc21c613'\n> \n> then, on the second call,\n> \n> ```\n> config_kwargs = {'data_files': 'fr/fr_part_00038.parquet'}\n> ```\n> \n> thus explaining why the hash is not the same, despite having the same parameter when calling load_dataset : data_files=\"fr/fr_part_00038.parquet\"\n\n\nI have identified that the issue indeed lies in the `data_files` within `config_kwargs`. \nThe format and prefix of `data_files` differ depending on whether `HF_HUB_OFFLINE` is set, leading to different final `config_id` values. \nWhen I use other datasets without passing the `data_files` parameter, this issue does not occur.\n\nA possible solution might be to standardize the formatting of `data_files` within the `create_config_id` function."
] | 2025-05-04T16:49:37
| 2025-05-13T03:18:43
| null |
NONE
| null | null |
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
|
### Describe the bug
Hi,
an issue related to #4760 here: when loading a single file from a dataset, I am unable to access it in offline mode afterwards.
### Steps to reproduce the bug
```python
import os
# os.environ["HF_HUB_OFFLINE"] = "1"
os.environ["HF_TOKEN"] = "xxxxxxxxxxxxxx"
import datasets
dataset_name = "uonlp/CulturaX"
data_files = "fr/fr_part_00038.parquet"
ds = datasets.load_dataset(dataset_name, split='train', data_files=data_files)
print(f"Dataset loaded : {ds}")
```
Once the file has been cached, I rerun with `HF_HUB_OFFLINE` activated and get this error:
```
ValueError: Couldn't find cache for uonlp/CulturaX for config 'default-1e725f978350254e'
Available configs in the cache: ['default-2935e8cdcc21c613']
```
### Expected behavior
Should be able to access the previously cached files
### Environment info
- `datasets` version: 3.2.0
- Platform: Linux-5.4.0-215-generic-x86_64-with-glibc2.31
- Python version: 3.12.0
- `huggingface_hub` version: 0.27.0
- PyArrow version: 19.0.0
- Pandas version: 2.2.2
- `fsspec` version: 2024.3.1
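Following up on the discussion in the comments, a purely illustrative sketch of what "standardizing" `data_files` before hashing could look like; this is not the actual `create_config_id` code:
```python
import re

def normalize_data_files(data_files):
    """Illustrative only: canonicalize data_files so that the resolved
    "hf://datasets/<repo>@<revision>/<pattern>" form and the plain relative
    pattern hash to the same config id, online or offline."""
    if isinstance(data_files, str):
        data_files = {"train": [data_files]}
    elif isinstance(data_files, (list, tuple)):
        data_files = {"train": list(data_files)}

    def strip_repo_prefix(path):
        return re.sub(r"^hf://datasets/[^/]+/[^/]+@[^/]+/", "", path)

    return {split: sorted(strip_repo_prefix(p) for p in paths)
            for split, paths in data_files.items()}
```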
| null |
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7551/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/7551/timeline
| null | null | null | null | false
|
https://api.github.com/repos/huggingface/datasets/issues/7550
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/7550/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/7550/comments
|
https://api.github.com/repos/huggingface/datasets/issues/7550/events
|
https://github.com/huggingface/datasets/pull/7550
| 3,037,017,367
|
PR_kwDODunzps6UzksN
| 7,550
|
disable aiohttp dependency for python 3.13t free-threading compat
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/417764?v=4",
"events_url": "https://api.github.com/users/Qubitium/events{/privacy}",
"followers_url": "https://api.github.com/users/Qubitium/followers",
"following_url": "https://api.github.com/users/Qubitium/following{/other_user}",
"gists_url": "https://api.github.com/users/Qubitium/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/Qubitium",
"id": 417764,
"login": "Qubitium",
"node_id": "MDQ6VXNlcjQxNzc2NA==",
"organizations_url": "https://api.github.com/users/Qubitium/orgs",
"received_events_url": "https://api.github.com/users/Qubitium/received_events",
"repos_url": "https://api.github.com/users/Qubitium/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/Qubitium/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Qubitium/subscriptions",
"type": "User",
"url": "https://api.github.com/users/Qubitium",
"user_view_type": "public"
}
|
[] |
closed
| false
| null |
[] | null |
[] | 2025-05-03T00:28:18
| 2025-05-03T00:28:24
| 2025-05-03T00:28:24
|
NONE
| null | null | null | null |
{
"avatar_url": "https://avatars.githubusercontent.com/u/417764?v=4",
"events_url": "https://api.github.com/users/Qubitium/events{/privacy}",
"followers_url": "https://api.github.com/users/Qubitium/followers",
"following_url": "https://api.github.com/users/Qubitium/following{/other_user}",
"gists_url": "https://api.github.com/users/Qubitium/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/Qubitium",
"id": 417764,
"login": "Qubitium",
"node_id": "MDQ6VXNlcjQxNzc2NA==",
"organizations_url": "https://api.github.com/users/Qubitium/orgs",
"received_events_url": "https://api.github.com/users/Qubitium/received_events",
"repos_url": "https://api.github.com/users/Qubitium/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/Qubitium/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Qubitium/subscriptions",
"type": "User",
"url": "https://api.github.com/users/Qubitium",
"user_view_type": "public"
}
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7550/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/7550/timeline
| null | null | 0
|
{
"diff_url": "https://github.com/huggingface/datasets/pull/7550.diff",
"html_url": "https://github.com/huggingface/datasets/pull/7550",
"merged_at": null,
"patch_url": "https://github.com/huggingface/datasets/pull/7550.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/7550"
}
| true
|
https://api.github.com/repos/huggingface/datasets/issues/7549
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/7549/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/7549/comments
|
https://api.github.com/repos/huggingface/datasets/issues/7549/events
|
https://github.com/huggingface/datasets/issues/7549
| 3,036,272,015
|
I_kwDODunzps60-dWP
| 7,549
|
TypeError: Couldn't cast array of type string to null on webdataset format dataset
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/117186571?v=4",
"events_url": "https://api.github.com/users/narugo1992/events{/privacy}",
"followers_url": "https://api.github.com/users/narugo1992/followers",
"following_url": "https://api.github.com/users/narugo1992/following{/other_user}",
"gists_url": "https://api.github.com/users/narugo1992/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/narugo1992",
"id": 117186571,
"login": "narugo1992",
"node_id": "U_kgDOBvwgCw",
"organizations_url": "https://api.github.com/users/narugo1992/orgs",
"received_events_url": "https://api.github.com/users/narugo1992/received_events",
"repos_url": "https://api.github.com/users/narugo1992/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/narugo1992/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/narugo1992/subscriptions",
"type": "User",
"url": "https://api.github.com/users/narugo1992",
"user_view_type": "public"
}
|
[] |
open
| false
| null |
[] | null |
[
"seems to get fixed by explicitly adding `dataset_infos.json` like this\n\n```json\n{\n \"default\": {\n \"description\": \"Image dataset with tags and ratings\",\n \"citation\": \"\",\n \"homepage\": \"\",\n \"license\": \"\",\n \"features\": {\n \"image\": {\n \"dtype\": \"image\",\n \"_type\": \"Image\"\n },\n \"json\": {\n \"id\": {\n \"dtype\": \"int32\",\n \"_type\": \"Value\"\n },\n \"width\": {\n \"dtype\": \"int32\",\n \"_type\": \"Value\"\n },\n \"height\": {\n \"dtype\": \"int32\",\n \"_type\": \"Value\"\n },\n \"rating\": {\n \"feature\": {\n \"dtype\": \"string\",\n \"_type\": \"Value\"\n },\n \"_type\": \"Sequence\"\n },\n \"general_tags\": {\n \"feature\": {\n \"dtype\": \"string\",\n \"_type\": \"Value\"\n },\n \"_type\": \"Sequence\"\n },\n \"character_tags\": {\n \"feature\": {\n \"dtype\": \"string\",\n \"_type\": \"Value\"\n },\n \"_type\": \"Sequence\"\n }\n }\n },\n \"builder_name\": \"webdataset\",\n \"config_name\": \"default\",\n \"version\": {\n \"version_str\": \"1.0.0\",\n \"description\": null,\n \"major\": 1,\n \"minor\": 0,\n \"patch\": 0\n }\n }\n}\n\n```\n\nwill close this issue if no further issues found"
] | 2025-05-02T15:18:07
| 2025-05-02T15:37:05
| null |
NONE
| null | null |
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
|
### Describe the bug
```python
from datasets import load_dataset
dataset = load_dataset("animetimm/danbooru-wdtagger-v4-w640-ws-30k")
```
got
```
File "/home/ubuntu/miniconda3/lib/python3.10/site-packages/datasets/arrow_writer.py", line 626, in write_batch
arrays.append(pa.array(typed_sequence))
File "pyarrow/array.pxi", line 255, in pyarrow.lib.array
File "pyarrow/array.pxi", line 117, in pyarrow.lib._handle_arrow_array_protocol
File "/home/ubuntu/miniconda3/lib/python3.10/site-packages/datasets/arrow_writer.py", line 258, in __arrow_array__
out = cast_array_to_feature(
File "/home/ubuntu/miniconda3/lib/python3.10/site-packages/datasets/table.py", line 1798, in wrapper
return func(array, *args, **kwargs)
File "/home/ubuntu/miniconda3/lib/python3.10/site-packages/datasets/table.py", line 2006, in cast_array_to_feature
arrays = [
File "/home/ubuntu/miniconda3/lib/python3.10/site-packages/datasets/table.py", line 2007, in <listcomp>
_c(array.field(name) if name in array_fields else null_array, subfeature)
File "/home/ubuntu/miniconda3/lib/python3.10/site-packages/datasets/table.py", line 1798, in wrapper
return func(array, *args, **kwargs)
File "/home/ubuntu/miniconda3/lib/python3.10/site-packages/datasets/table.py", line 2066, in cast_array_to_feature
casted_array_values = _c(array.values, feature.feature)
File "/home/ubuntu/miniconda3/lib/python3.10/site-packages/datasets/table.py", line 1798, in wrapper
return func(array, *args, **kwargs)
File "/home/ubuntu/miniconda3/lib/python3.10/site-packages/datasets/table.py", line 2103, in cast_array_to_feature
return array_cast(
File "/home/ubuntu/miniconda3/lib/python3.10/site-packages/datasets/table.py", line 1798, in wrapper
return func(array, *args, **kwargs)
File "/home/ubuntu/miniconda3/lib/python3.10/site-packages/datasets/table.py", line 1949, in array_cast
raise TypeError(f"Couldn't cast array of type {_short_str(array.type)} to {_short_str(pa_type)}")
TypeError: Couldn't cast array of type string to null
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/home/ubuntu/miniconda3/lib/python3.10/site-packages/datasets/load.py", line 2084, in load_dataset
builder_instance.download_and_prepare(
File "/home/ubuntu/miniconda3/lib/python3.10/site-packages/datasets/builder.py", line 925, in download_and_prepare
self._download_and_prepare(
File "/home/ubuntu/miniconda3/lib/python3.10/site-packages/datasets/builder.py", line 1649, in _download_and_prepare
super()._download_and_prepare(
File "/home/ubuntu/miniconda3/lib/python3.10/site-packages/datasets/builder.py", line 1001, in _download_and_prepare
self._prepare_split(split_generator, **prepare_split_kwargs)
File "/home/ubuntu/miniconda3/lib/python3.10/site-packages/datasets/builder.py", line 1487, in _prepare_split
for job_id, done, content in self._prepare_split_single(
File "/home/ubuntu/miniconda3/lib/python3.10/site-packages/datasets/builder.py", line 1644, in _prepare_split_single
raise DatasetGenerationError("An error occurred while generating the dataset") from e
datasets.exceptions.DatasetGenerationError: An error occurred while generating the dataset
```
`datasets==3.5.1`. What's wrong?
Its inner JSON structure is like:
```yaml
features:
- name: "image"
dtype: "image"
- name: "json.id"
dtype: "string"
- name: "json.width"
dtype: "int32"
- name: "json.height"
dtype: "int32"
- name: "json.rating"
sequence:
dtype: "string"
- name: "json.general_tags"
sequence:
dtype: "string"
- name: "json.character_tags"
sequence:
dtype: "string"
```
I'm 100% sure all the JSONs satisfy the above-mentioned format.
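Not part of the original report: this kind of `string to null` failure often appears when a column happens to be empty in the first examples and its type is inferred as `null`. A hedged sketch of one possible workaround, passing explicit features so inference is skipped (the feature layout below is an assumption based on the yaml above):

```python
from datasets import Features, Image, Sequence, Value, load_dataset

# Assumed feature layout mirroring the yaml above; adjust dtypes if they differ.
features = Features({
    "image": Image(),
    "json": {
        "id": Value("string"),
        "width": Value("int32"),
        "height": Value("int32"),
        "rating": Sequence(Value("string")),
        "general_tags": Sequence(Value("string")),
        "character_tags": Sequence(Value("string")),
    },
})
dataset = load_dataset("animetimm/danbooru-wdtagger-v4-w640-ws-30k", features=features)
```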
### Steps to reproduce the bug
```python
from datasets import load_dataset
dataset = load_dataset("animetimm/danbooru-wdtagger-v4-w640-ws-30k")
```
### Expected behavior
Load the dataset successfully, with the above-mentioned JSON format and WebP images.
### Environment info
Copy-and-paste the text below in your GitHub issue.
- `datasets` version: 3.5.1
- Platform: Linux-6.8.0-52-generic-x86_64-with-glibc2.35
- Python version: 3.10.16
- `huggingface_hub` version: 0.30.2
- PyArrow version: 20.0.0
- Pandas version: 2.2.3
- `fsspec` version: 2025.3.0
| null |
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7549/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/7549/timeline
| null | null | null | null | false
|
https://api.github.com/repos/huggingface/datasets/issues/7548
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/7548/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/7548/comments
|
https://api.github.com/repos/huggingface/datasets/issues/7548/events
|
https://github.com/huggingface/datasets/issues/7548
| 3,035,568,851
|
I_kwDODunzps607xrT
| 7,548
|
Python 3.13t (free threads) Compat
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/417764?v=4",
"events_url": "https://api.github.com/users/Qubitium/events{/privacy}",
"followers_url": "https://api.github.com/users/Qubitium/followers",
"following_url": "https://api.github.com/users/Qubitium/following{/other_user}",
"gists_url": "https://api.github.com/users/Qubitium/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/Qubitium",
"id": 417764,
"login": "Qubitium",
"node_id": "MDQ6VXNlcjQxNzc2NA==",
"organizations_url": "https://api.github.com/users/Qubitium/orgs",
"received_events_url": "https://api.github.com/users/Qubitium/received_events",
"repos_url": "https://api.github.com/users/Qubitium/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/Qubitium/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Qubitium/subscriptions",
"type": "User",
"url": "https://api.github.com/users/Qubitium",
"user_view_type": "public"
}
|
[] |
open
| false
| null |
[] | null |
[
"Update: `datasets` use `aiohttp` for data streaming and from what I understand data streaming is useful for large datasets that do not fit in memory and/or multi-modal datasets like image/audio where you only what the actual binary bits to fed in as needed. \n\nHowever, there are also many cases where aiohttp will never be used. Text datasets that are not huge, relative to machine spec, and non-multi-modal datasets. \n\nGetting `aiohttp` fixed for `free threading` appeals to be a large task that is not going to be get done in a quick manner. It may be faster to make `aiohttp` optional and not forced build. Otherwise, testing python 3.13t is going to be a painful install. \n\nI have created a fork/branch that temp disables aiohttp import so non-streaming usage of datasets can be tested under python 3.13.t:\n\nhttps://github.com/Qubitium/datasets/tree/disable-aiohttp-depend",
"We are mostly relying on `huggingface_hub` which uses `requests` to stream files from Hugging Face, so maybe we can move aiohttp to optional dependencies now. Would it solve your issue ? Btw what do you think of `datasets` in the free-threading setting ?",
"> We are mostly relying on `huggingface_hub` which uses `requests` to stream files from Hugging Face, so maybe we can move aiohttp to optional dependencies now. Would it solve your issue ? Btw what do you think of `datasets` in the free-threading setting ?\n\nI am testing transformers + dataset (simple text dataset usage) + GPTQModel for quantization and there were no issues encountered with python 3.13t but my test-case is the base-bare minimal test-case since dataset is not sharded, fully in-memory, text-only, small, not used for training. \n\nOn the technical side, dataset is almost always 100% read-only so there should be zero locking issues but I have not checked the dataset internals so there may be cases where streaming, sharding, and/or cases where datset memory/states are updated needs a per dataset `threading.lock`. \n\nSo yes, making `aiohttp` optional will definitely solve my issue. There is also a companion (datasets and tokenizers usually go hand-in-hand) issue with `Tokenizers` as well but that's simple enough with package version update: https://github.com/huggingface/tokenizers/pull/1774\n",
"Ok I see ! Anyway feel free to edit the setup.py to move aiohttp to optional (tests) dependencies and open a PR, we can run the CI to see if it's ok as a change",
"actually there is https://github.com/huggingface/datasets/pull/7294/ already, let's see if we can merge it",
"wouldn't it be the good reason to switch to `httpx`? 😄 (would require slightly more work, short term agree with https://github.com/huggingface/datasets/issues/7548#issuecomment-2854405923)",
"I made `aiohttp` optional in `datasets` 3.6.0 :)\n\n`datasets` doesn't use it directly anyway, it's only used when someone wants to download files from HTTP URLs outside of HF"
] | 2025-05-02T09:20:09
| 2025-05-12T15:11:32
| null |
NONE
| null | null |
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
|
### Describe the bug
Cannot install `datasets` under `python 3.13t` due to its dependency on `aiohttp`, and aiohttp cannot currently be built for free-threaded Python.
The `free threading` support issue in `aiohttp` has been open since August 2024! Ouch.
https://github.com/aio-libs/aiohttp/issues/8796#issue-2475941784
`pip install datasets`
```bash
(vm313t) root@gpu-base:~/GPTQModel# pip install datasets
WARNING: Retrying (Retry(total=4, connect=None, read=None, redirect=None, status=None)) after connection broken by 'ReadTimeoutError("HTTPSConnectionPool(host='pypi.org', port=443): Read timed out. (read timeout=15)")': /simple/datasets/
Collecting datasets
Using cached datasets-3.5.1-py3-none-any.whl.metadata (19 kB)
Requirement already satisfied: filelock in /root/vm313t/lib/python3.13t/site-packages (from datasets) (3.18.0)
Requirement already satisfied: numpy>=1.17 in /root/vm313t/lib/python3.13t/site-packages (from datasets) (2.2.5)
Collecting pyarrow>=15.0.0 (from datasets)
Using cached pyarrow-20.0.0-cp313-cp313t-manylinux_2_28_x86_64.whl.metadata (3.3 kB)
Collecting dill<0.3.9,>=0.3.0 (from datasets)
Using cached dill-0.3.8-py3-none-any.whl.metadata (10 kB)
Collecting pandas (from datasets)
Using cached pandas-2.2.3-cp313-cp313t-manylinux_2_17_x86_64.manylinux2014_x86_64.whl.metadata (89 kB)
Requirement already satisfied: requests>=2.32.2 in /root/vm313t/lib/python3.13t/site-packages (from datasets) (2.32.3)
Requirement already satisfied: tqdm>=4.66.3 in /root/vm313t/lib/python3.13t/site-packages (from datasets) (4.67.1)
Collecting xxhash (from datasets)
Using cached xxhash-3.5.0-cp313-cp313t-linux_x86_64.whl
Collecting multiprocess<0.70.17 (from datasets)
Using cached multiprocess-0.70.16-py312-none-any.whl.metadata (7.2 kB)
Collecting fsspec<=2025.3.0,>=2023.1.0 (from fsspec[http]<=2025.3.0,>=2023.1.0->datasets)
Using cached fsspec-2025.3.0-py3-none-any.whl.metadata (11 kB)
Collecting aiohttp (from datasets)
Using cached aiohttp-3.11.18.tar.gz (7.7 MB)
Installing build dependencies ... done
Getting requirements to build wheel ... done
Preparing metadata (pyproject.toml) ... done
Requirement already satisfied: huggingface-hub>=0.24.0 in /root/vm313t/lib/python3.13t/site-packages (from datasets) (0.30.2)
Requirement already satisfied: packaging in /root/vm313t/lib/python3.13t/site-packages (from datasets) (25.0)
Requirement already satisfied: pyyaml>=5.1 in /root/vm313t/lib/python3.13t/site-packages (from datasets) (6.0.2)
Collecting aiohappyeyeballs>=2.3.0 (from aiohttp->datasets)
Using cached aiohappyeyeballs-2.6.1-py3-none-any.whl.metadata (5.9 kB)
Collecting aiosignal>=1.1.2 (from aiohttp->datasets)
Using cached aiosignal-1.3.2-py2.py3-none-any.whl.metadata (3.8 kB)
Collecting attrs>=17.3.0 (from aiohttp->datasets)
Using cached attrs-25.3.0-py3-none-any.whl.metadata (10 kB)
Collecting frozenlist>=1.1.1 (from aiohttp->datasets)
Using cached frozenlist-1.6.0-cp313-cp313t-manylinux_2_5_x86_64.manylinux1_x86_64.manylinux_2_17_x86_64.manylinux2014_x86_64.whl.metadata (16 kB)
Collecting multidict<7.0,>=4.5 (from aiohttp->datasets)
Using cached multidict-6.4.3-cp313-cp313t-manylinux_2_17_x86_64.manylinux2014_x86_64.whl.metadata (5.3 kB)
Collecting propcache>=0.2.0 (from aiohttp->datasets)
Using cached propcache-0.3.1-cp313-cp313t-manylinux_2_17_x86_64.manylinux2014_x86_64.whl.metadata (10 kB)
Collecting yarl<2.0,>=1.17.0 (from aiohttp->datasets)
Using cached yarl-1.20.0-cp313-cp313t-manylinux_2_17_x86_64.manylinux2014_x86_64.whl.metadata (72 kB)
Requirement already satisfied: idna>=2.0 in /root/vm313t/lib/python3.13t/site-packages (from yarl<2.0,>=1.17.0->aiohttp->datasets) (3.10)
Requirement already satisfied: typing-extensions>=3.7.4.3 in /root/vm313t/lib/python3.13t/site-packages (from huggingface-hub>=0.24.0->datasets) (4.13.2)
Requirement already satisfied: charset-normalizer<4,>=2 in /root/vm313t/lib/python3.13t/site-packages (from requests>=2.32.2->datasets) (3.4.1)
Requirement already satisfied: urllib3<3,>=1.21.1 in /root/vm313t/lib/python3.13t/site-packages (from requests>=2.32.2->datasets) (2.4.0)
Requirement already satisfied: certifi>=2017.4.17 in /root/vm313t/lib/python3.13t/site-packages (from requests>=2.32.2->datasets) (2025.4.26)
Collecting python-dateutil>=2.8.2 (from pandas->datasets)
Using cached python_dateutil-2.9.0.post0-py2.py3-none-any.whl.metadata (8.4 kB)
Collecting pytz>=2020.1 (from pandas->datasets)
Using cached pytz-2025.2-py2.py3-none-any.whl.metadata (22 kB)
Collecting tzdata>=2022.7 (from pandas->datasets)
Using cached tzdata-2025.2-py2.py3-none-any.whl.metadata (1.4 kB)
Collecting six>=1.5 (from python-dateutil>=2.8.2->pandas->datasets)
Using cached six-1.17.0-py2.py3-none-any.whl.metadata (1.7 kB)
Using cached datasets-3.5.1-py3-none-any.whl (491 kB)
Using cached dill-0.3.8-py3-none-any.whl (116 kB)
Using cached fsspec-2025.3.0-py3-none-any.whl (193 kB)
Using cached multiprocess-0.70.16-py312-none-any.whl (146 kB)
Using cached multidict-6.4.3-cp313-cp313t-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (220 kB)
Using cached yarl-1.20.0-cp313-cp313t-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (404 kB)
Using cached aiohappyeyeballs-2.6.1-py3-none-any.whl (15 kB)
Using cached aiosignal-1.3.2-py2.py3-none-any.whl (7.6 kB)
Using cached attrs-25.3.0-py3-none-any.whl (63 kB)
Using cached frozenlist-1.6.0-cp313-cp313t-manylinux_2_5_x86_64.manylinux1_x86_64.manylinux_2_17_x86_64.manylinux2014_x86_64.whl (385 kB)
Using cached propcache-0.3.1-cp313-cp313t-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (282 kB)
Using cached pyarrow-20.0.0-cp313-cp313t-manylinux_2_28_x86_64.whl (42.2 MB)
Using cached pandas-2.2.3-cp313-cp313t-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (11.9 MB)
Using cached python_dateutil-2.9.0.post0-py2.py3-none-any.whl (229 kB)
Using cached pytz-2025.2-py2.py3-none-any.whl (509 kB)
Using cached six-1.17.0-py2.py3-none-any.whl (11 kB)
Using cached tzdata-2025.2-py2.py3-none-any.whl (347 kB)
Building wheels for collected packages: aiohttp
Building wheel for aiohttp (pyproject.toml) ... error
error: subprocess-exited-with-error
× Building wheel for aiohttp (pyproject.toml) did not run successfully.
│ exit code: 1
╰─> [156 lines of output]
*********************
* Accelerated build *
*********************
/tmp/pip-build-env-wjqi8_7w/overlay/lib/python3.13t/site-packages/setuptools/dist.py:759: SetuptoolsDeprecationWarning: License classifiers are deprecated.
!!
********************************************************************************
Please consider removing the following classifiers in favor of a SPDX license expression:
License :: OSI Approved :: Apache Software License
See https://packaging.python.org/en/latest/guides/writing-pyproject-toml/#license for details.
********************************************************************************
!!
self._finalize_license_expression()
running bdist_wheel
running build
running build_py
creating build/lib.linux-x86_64-cpython-313t/aiohttp
copying aiohttp/typedefs.py -> build/lib.linux-x86_64-cpython-313t/aiohttp
copying aiohttp/http_parser.py -> build/lib.linux-x86_64-cpython-313t/aiohttp
copying aiohttp/client_reqrep.py -> build/lib.linux-x86_64-cpython-313t/aiohttp
copying aiohttp/client_ws.py -> build/lib.linux-x86_64-cpython-313t/aiohttp
copying aiohttp/web_app.py -> build/lib.linux-x86_64-cpython-313t/aiohttp
copying aiohttp/http_websocket.py -> build/lib.linux-x86_64-cpython-313t/aiohttp
copying aiohttp/resolver.py -> build/lib.linux-x86_64-cpython-313t/aiohttp
copying aiohttp/tracing.py -> build/lib.linux-x86_64-cpython-313t/aiohttp
copying aiohttp/http_writer.py -> build/lib.linux-x86_64-cpython-313t/aiohttp
copying aiohttp/http_exceptions.py -> build/lib.linux-x86_64-cpython-313t/aiohttp
copying aiohttp/log.py -> build/lib.linux-x86_64-cpython-313t/aiohttp
copying aiohttp/__init__.py -> build/lib.linux-x86_64-cpython-313t/aiohttp
copying aiohttp/web_runner.py -> build/lib.linux-x86_64-cpython-313t/aiohttp
copying aiohttp/worker.py -> build/lib.linux-x86_64-cpython-313t/aiohttp
copying aiohttp/connector.py -> build/lib.linux-x86_64-cpython-313t/aiohttp
copying aiohttp/client_exceptions.py -> build/lib.linux-x86_64-cpython-313t/aiohttp
copying aiohttp/web_middlewares.py -> build/lib.linux-x86_64-cpython-313t/aiohttp
copying aiohttp/web.py -> build/lib.linux-x86_64-cpython-313t/aiohttp
copying aiohttp/tcp_helpers.py -> build/lib.linux-x86_64-cpython-313t/aiohttp
copying aiohttp/web_response.py -> build/lib.linux-x86_64-cpython-313t/aiohttp
copying aiohttp/web_server.py -> build/lib.linux-x86_64-cpython-313t/aiohttp
copying aiohttp/web_request.py -> build/lib.linux-x86_64-cpython-313t/aiohttp
copying aiohttp/web_urldispatcher.py -> build/lib.linux-x86_64-cpython-313t/aiohttp
copying aiohttp/web_exceptions.py -> build/lib.linux-x86_64-cpython-313t/aiohttp
copying aiohttp/formdata.py -> build/lib.linux-x86_64-cpython-313t/aiohttp
copying aiohttp/streams.py -> build/lib.linux-x86_64-cpython-313t/aiohttp
copying aiohttp/multipart.py -> build/lib.linux-x86_64-cpython-313t/aiohttp
copying aiohttp/web_routedef.py -> build/lib.linux-x86_64-cpython-313t/aiohttp
copying aiohttp/web_ws.py -> build/lib.linux-x86_64-cpython-313t/aiohttp
copying aiohttp/payload.py -> build/lib.linux-x86_64-cpython-313t/aiohttp
copying aiohttp/client_proto.py -> build/lib.linux-x86_64-cpython-313t/aiohttp
copying aiohttp/web_log.py -> build/lib.linux-x86_64-cpython-313t/aiohttp
copying aiohttp/base_protocol.py -> build/lib.linux-x86_64-cpython-313t/aiohttp
copying aiohttp/payload_streamer.py -> build/lib.linux-x86_64-cpython-313t/aiohttp
copying aiohttp/http.py -> build/lib.linux-x86_64-cpython-313t/aiohttp
copying aiohttp/web_fileresponse.py -> build/lib.linux-x86_64-cpython-313t/aiohttp
copying aiohttp/test_utils.py -> build/lib.linux-x86_64-cpython-313t/aiohttp
copying aiohttp/client.py -> build/lib.linux-x86_64-cpython-313t/aiohttp
copying aiohttp/cookiejar.py -> build/lib.linux-x86_64-cpython-313t/aiohttp
copying aiohttp/compression_utils.py -> build/lib.linux-x86_64-cpython-313t/aiohttp
copying aiohttp/hdrs.py -> build/lib.linux-x86_64-cpython-313t/aiohttp
copying aiohttp/helpers.py -> build/lib.linux-x86_64-cpython-313t/aiohttp
copying aiohttp/pytest_plugin.py -> build/lib.linux-x86_64-cpython-313t/aiohttp
copying aiohttp/web_protocol.py -> build/lib.linux-x86_64-cpython-313t/aiohttp
copying aiohttp/abc.py -> build/lib.linux-x86_64-cpython-313t/aiohttp
creating build/lib.linux-x86_64-cpython-313t/aiohttp/_websocket
copying aiohttp/_websocket/__init__.py -> build/lib.linux-x86_64-cpython-313t/aiohttp/_websocket
copying aiohttp/_websocket/writer.py -> build/lib.linux-x86_64-cpython-313t/aiohttp/_websocket
copying aiohttp/_websocket/models.py -> build/lib.linux-x86_64-cpython-313t/aiohttp/_websocket
copying aiohttp/_websocket/reader.py -> build/lib.linux-x86_64-cpython-313t/aiohttp/_websocket
copying aiohttp/_websocket/reader_c.py -> build/lib.linux-x86_64-cpython-313t/aiohttp/_websocket
copying aiohttp/_websocket/helpers.py -> build/lib.linux-x86_64-cpython-313t/aiohttp/_websocket
copying aiohttp/_websocket/reader_py.py -> build/lib.linux-x86_64-cpython-313t/aiohttp/_websocket
running egg_info
writing aiohttp.egg-info/PKG-INFO
writing dependency_links to aiohttp.egg-info/dependency_links.txt
writing requirements to aiohttp.egg-info/requires.txt
writing top-level names to aiohttp.egg-info/top_level.txt
reading manifest file 'aiohttp.egg-info/SOURCES.txt'
reading manifest template 'MANIFEST.in'
warning: no files found matching 'aiohttp' anywhere in distribution
warning: no files found matching '*.pyi' anywhere in distribution
warning: no previously-included files matching '*.pyc' found anywhere in distribution
warning: no previously-included files matching '*.pyd' found anywhere in distribution
warning: no previously-included files matching '*.so' found anywhere in distribution
warning: no previously-included files matching '*.lib' found anywhere in distribution
warning: no previously-included files matching '*.dll' found anywhere in distribution
warning: no previously-included files matching '*.a' found anywhere in distribution
warning: no previously-included files matching '*.obj' found anywhere in distribution
warning: no previously-included files found matching 'aiohttp/*.html'
no previously-included directories found matching 'docs/_build'
adding license file 'LICENSE.txt'
writing manifest file 'aiohttp.egg-info/SOURCES.txt'
copying aiohttp/_cparser.pxd -> build/lib.linux-x86_64-cpython-313t/aiohttp
copying aiohttp/_find_header.pxd -> build/lib.linux-x86_64-cpython-313t/aiohttp
copying aiohttp/_headers.pxi -> build/lib.linux-x86_64-cpython-313t/aiohttp
copying aiohttp/_http_parser.pyx -> build/lib.linux-x86_64-cpython-313t/aiohttp
copying aiohttp/_http_writer.pyx -> build/lib.linux-x86_64-cpython-313t/aiohttp
copying aiohttp/py.typed -> build/lib.linux-x86_64-cpython-313t/aiohttp
creating build/lib.linux-x86_64-cpython-313t/aiohttp/.hash
copying aiohttp/.hash/_cparser.pxd.hash -> build/lib.linux-x86_64-cpython-313t/aiohttp/.hash
copying aiohttp/.hash/_find_header.pxd.hash -> build/lib.linux-x86_64-cpython-313t/aiohttp/.hash
copying aiohttp/.hash/_http_parser.pyx.hash -> build/lib.linux-x86_64-cpython-313t/aiohttp/.hash
copying aiohttp/.hash/_http_writer.pyx.hash -> build/lib.linux-x86_64-cpython-313t/aiohttp/.hash
copying aiohttp/.hash/hdrs.py.hash -> build/lib.linux-x86_64-cpython-313t/aiohttp/.hash
copying aiohttp/_websocket/mask.pxd -> build/lib.linux-x86_64-cpython-313t/aiohttp/_websocket
copying aiohttp/_websocket/mask.pyx -> build/lib.linux-x86_64-cpython-313t/aiohttp/_websocket
copying aiohttp/_websocket/reader_c.pxd -> build/lib.linux-x86_64-cpython-313t/aiohttp/_websocket
creating build/lib.linux-x86_64-cpython-313t/aiohttp/_websocket/.hash
copying aiohttp/_websocket/.hash/mask.pxd.hash -> build/lib.linux-x86_64-cpython-313t/aiohttp/_websocket/.hash
copying aiohttp/_websocket/.hash/mask.pyx.hash -> build/lib.linux-x86_64-cpython-313t/aiohttp/_websocket/.hash
copying aiohttp/_websocket/.hash/reader_c.pxd.hash -> build/lib.linux-x86_64-cpython-313t/aiohttp/_websocket/.hash
running build_ext
building 'aiohttp._websocket.mask' extension
creating build/temp.linux-x86_64-cpython-313t/aiohttp/_websocket
x86_64-linux-gnu-gcc -fno-strict-overflow -Wsign-compare -DNDEBUG -g -O2 -Wall -g -fno-omit-frame-pointer -mno-omit-leaf-frame-pointer -fstack-protector-strong -fstack-clash-protection -Wformat -Werror=format-security -fcf-protection -fPIC -I/root/vm313t/include -I/usr/include/python3.13t -c aiohttp/_websocket/mask.c -o build/temp.linux-x86_64-cpython-313t/aiohttp/_websocket/mask.o
aiohttp/_websocket/mask.c:1864:80: error: unknown type name ‘__pyx_vectorcallfunc’; did you mean ‘vectorcallfunc’?
1864 | static CYTHON_INLINE PyObject *__Pyx_PyVectorcall_FastCallDict(PyObject *func, __pyx_vectorcallfunc vc, PyObject *const *args, size_t nargs, PyObject *kw);
| ^~~~~~~~~~~~~~~~~~~~
| vectorcallfunc
aiohttp/_websocket/mask.c: In function ‘__pyx_f_7aiohttp_10_websocket_4mask__websocket_mask_cython’:
aiohttp/_websocket/mask.c:2905:3: warning: ‘Py_OptimizeFlag’ is deprecated [-Wdeprecated-declarations]
2905 | if (unlikely(__pyx_assertions_enabled())) {
| ^~
In file included from /usr/include/python3.13t/Python.h:76,
from aiohttp/_websocket/mask.c:16:
/usr/include/python3.13t/cpython/pydebug.h:13:37: note: declared here
13 | Py_DEPRECATED(3.12) PyAPI_DATA(int) Py_OptimizeFlag;
| ^~~~~~~~~~~~~~~
aiohttp/_websocket/mask.c: At top level:
aiohttp/_websocket/mask.c:4846:69: error: unknown type name ‘__pyx_vectorcallfunc’; did you mean ‘vectorcallfunc’?
4846 | static PyObject *__Pyx_PyVectorcall_FastCallDict_kw(PyObject *func, __pyx_vectorcallfunc vc, PyObject *const *args, size_t nargs, PyObject *kw)
| ^~~~~~~~~~~~~~~~~~~~
| vectorcallfunc
aiohttp/_websocket/mask.c:4891:80: error: unknown type name ‘__pyx_vectorcallfunc’; did you mean ‘vectorcallfunc’?
4891 | static CYTHON_INLINE PyObject *__Pyx_PyVectorcall_FastCallDict(PyObject *func, __pyx_vectorcallfunc vc, PyObject *const *args, size_t nargs, PyObject *kw)
| ^~~~~~~~~~~~~~~~~~~~
| vectorcallfunc
aiohttp/_websocket/mask.c: In function ‘__Pyx_CyFunction_CallAsMethod’:
aiohttp/_websocket/mask.c:5580:6: error: unknown type name ‘__pyx_vectorcallfunc’; did you mean ‘vectorcallfunc’?
5580 | __pyx_vectorcallfunc vc = __Pyx_CyFunction_func_vectorcall(cyfunc);
| ^~~~~~~~~~~~~~~~~~~~
| vectorcallfunc
aiohttp/_websocket/mask.c:1954:45: warning: initialization of ‘int’ from ‘vectorcallfunc’ {aka ‘struct _object * (*)(struct _object *, struct _object * const*, long unsigned int, struct _object *)’} makes integer from pointer without a cast [-Wint-conversion]
1954 | #define __Pyx_CyFunction_func_vectorcall(f) (((PyCFunctionObject*)f)->vectorcall)
| ^
aiohttp/_websocket/mask.c:5580:32: note: in expansion of macro ‘__Pyx_CyFunction_func_vectorcall’
5580 | __pyx_vectorcallfunc vc = __Pyx_CyFunction_func_vectorcall(cyfunc);
| ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
aiohttp/_websocket/mask.c:5583:16: warning: implicit declaration of function ‘__Pyx_PyVectorcall_FastCallDict’ [-Wimplicit-function-declaration]
5583 | return __Pyx_PyVectorcall_FastCallDict(func, vc, &PyTuple_GET_ITEM(args, 0), (size_t)PyTuple_GET_SIZE(args), kw);
| ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
aiohttp/_websocket/mask.c:5583:16: warning: returning ‘int’ from a function with return type ‘PyObject *’ {aka ‘struct _object *’} makes pointer from integer without a cast [-Wint-conversion]
5583 | return __Pyx_PyVectorcall_FastCallDict(func, vc, &PyTuple_GET_ITEM(args, 0), (size_t)PyTuple_GET_SIZE(args), kw);
| ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
error: command '/usr/bin/x86_64-linux-gnu-gcc' failed with exit code 1
[end of output]
note: This error originates from a subprocess, and is likely not a problem with pip.
ERROR: Failed building wheel for aiohttp
Failed to build aiohttp
ERROR: Failed to build installable wheels for some pyproject.toml based projects (aiohttp)
```
### Steps to reproduce the bug
See above
### Expected behavior
Install
### Environment info
Ubuntu 24.04
| null |
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7548/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/7548/timeline
| null | null | null | null | false
|
https://api.github.com/repos/huggingface/datasets/issues/7547
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/7547/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/7547/comments
|
https://api.github.com/repos/huggingface/datasets/issues/7547/events
|
https://github.com/huggingface/datasets/pull/7547
| 3,034,830,291
|
PR_kwDODunzps6UsTuF
| 7,547
|
Avoid global umask for setting file mode.
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/1282383?v=4",
"events_url": "https://api.github.com/users/ryan-clancy/events{/privacy}",
"followers_url": "https://api.github.com/users/ryan-clancy/followers",
"following_url": "https://api.github.com/users/ryan-clancy/following{/other_user}",
"gists_url": "https://api.github.com/users/ryan-clancy/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/ryan-clancy",
"id": 1282383,
"login": "ryan-clancy",
"node_id": "MDQ6VXNlcjEyODIzODM=",
"organizations_url": "https://api.github.com/users/ryan-clancy/orgs",
"received_events_url": "https://api.github.com/users/ryan-clancy/received_events",
"repos_url": "https://api.github.com/users/ryan-clancy/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/ryan-clancy/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ryan-clancy/subscriptions",
"type": "User",
"url": "https://api.github.com/users/ryan-clancy",
"user_view_type": "public"
}
|
[] |
closed
| false
| null |
[] | null |
[
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_7547). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update."
] | 2025-05-01T22:24:24
| 2025-05-06T13:05:00
| 2025-05-06T13:05:00
|
CONTRIBUTOR
| null | null | null |
This PR updates the method for setting the permissions on `cache_path` after calling `shutil.move`. The call to `shutil.move` may not preserve permissions if the source and destination are on different filesystems. Reading and resetting umask can cause race conditions, so directly read what permissions were set for the `temp_file` instead.
This fixes https://github.com/huggingface/datasets/issues/7536.
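A minimal sketch of the idea (an assumed shape of the change, not the actual diff):

```python
import os
import shutil
import stat

def move_preserving_mode(temp_file: str, cache_path: str) -> None:
    # Record the permissions that were actually set on the temporary file,
    # instead of re-deriving them from the process-wide umask.
    mode = stat.S_IMODE(os.stat(temp_file).st_mode)
    shutil.move(temp_file, cache_path)  # may drop the mode across filesystems
    os.chmod(cache_path, mode)          # re-apply the recorded permissions
```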
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq",
"user_view_type": "public"
}
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7547/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/7547/timeline
| null | null | 0
|
{
"diff_url": "https://github.com/huggingface/datasets/pull/7547.diff",
"html_url": "https://github.com/huggingface/datasets/pull/7547",
"merged_at": "2025-05-06T13:05:00Z",
"patch_url": "https://github.com/huggingface/datasets/pull/7547.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/7547"
}
| true
|
https://api.github.com/repos/huggingface/datasets/issues/7546
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/7546/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/7546/comments
|
https://api.github.com/repos/huggingface/datasets/issues/7546/events
|
https://github.com/huggingface/datasets/issues/7546
| 3,034,018,298
|
I_kwDODunzps6013H6
| 7,546
|
Large memory use when loading large datasets to a ZFS pool
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/6875946?v=4",
"events_url": "https://api.github.com/users/FredHaa/events{/privacy}",
"followers_url": "https://api.github.com/users/FredHaa/followers",
"following_url": "https://api.github.com/users/FredHaa/following{/other_user}",
"gists_url": "https://api.github.com/users/FredHaa/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/FredHaa",
"id": 6875946,
"login": "FredHaa",
"node_id": "MDQ6VXNlcjY4NzU5NDY=",
"organizations_url": "https://api.github.com/users/FredHaa/orgs",
"received_events_url": "https://api.github.com/users/FredHaa/received_events",
"repos_url": "https://api.github.com/users/FredHaa/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/FredHaa/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/FredHaa/subscriptions",
"type": "User",
"url": "https://api.github.com/users/FredHaa",
"user_view_type": "public"
}
|
[] |
closed
| false
| null |
[] | null |
[
"Hi ! datasets are memory mapped from disk, so they don't fill out your RAM. Not sure what's the source of your memory issue.\n\nWhat kind of system are you using ? and what kind of disk ?",
"Well, the fact of the matter is that my RAM is getting filled out by running the given example, as shown in [this video](https://streamable.com/usb0ql).\n\nMy system is a GPU server running Ubuntu. The disk is a SATA SSD attached to the server using a backplane. It is formatted with ZFS, mounted in /cache, and my HF_HOME is set to /cache/hf\n\nI really need this fixed, so I am more than willing to test out various suggestions you might have, or write a PR if we can figure out what is going on.",
"I'm not super familiar with ZFS, but it looks like it loads the data in memory when the files are memory mapped, which is an issue.\n\nMaybe it's a caching mechanism ? Since `datasets` accesses every memory mapped file to read a small part (the metadata of the arrow record batches), maybe ZFS brings the whole files in memory for quicker subsequent reads. This is an antipattern when it comes to lazy loading datasets of that size though",
"This is the answer.\n\nI tried changing my HF_HOME to an NFS share, and no RAM is then consumed loading the dataset.\n\nI will try to see if I can find a way to configure the ZFS pool to not cache the files (disabling the ARC/primary cache didn't work), and if I do write the solution in this issue. If I can't I guess I have to reformat my cache drive."
] | 2025-05-01T14:43:47
| 2025-05-13T13:30:09
| 2025-05-13T13:29:53
|
NONE
| null | null |
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
|
### Describe the bug
When I load large parquet based datasets from the hub like `MLCommons/peoples_speech` using `load_dataset`, all my memory (500GB) is used and isn't released after loading, meaning that the process is terminated by the kernel if I try to load an additional dataset. This makes it impossible to train models using multiple large datasets.
### Steps to reproduce the bug
`uv run --with datasets==3.5.1 python`
```python
from datasets import load_dataset
load_dataset('MLCommons/peoples_speech', 'clean')
load_dataset('mozilla-foundation/common_voice_17_0', 'en')
```
### Expected behavior
I would expect that a lot less than 500GB of RAM would be required to load the dataset, or at least that the RAM usage would be cleared as soon as the dataset is loaded (and thus reside as a memory mapped file) such that other datasets can be loaded.
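Not from the original thread, but a hedged diagnostic sketch (assuming `psutil` is installed): if the process RSS stays small while system memory still fills up, the memory is most likely held by the filesystem cache (for example the ZFS ARC) rather than by `datasets` itself.

```python
import os

import psutil
from datasets import load_dataset

ds = load_dataset("MLCommons/peoples_speech", "clean", split="train")
rss_gb = psutil.Process(os.getpid()).memory_info().rss / 1e9
size_gb = (ds.info.dataset_size or 0) / 1e9
print(f"Arrow data on disk: {size_gb:.1f} GB, process RSS: {rss_gb:.1f} GB")
```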
### Environment info
I am currently using the latest datasets==3.5.1 but I have had the same problem with multiple other versions.
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/6875946?v=4",
"events_url": "https://api.github.com/users/FredHaa/events{/privacy}",
"followers_url": "https://api.github.com/users/FredHaa/followers",
"following_url": "https://api.github.com/users/FredHaa/following{/other_user}",
"gists_url": "https://api.github.com/users/FredHaa/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/FredHaa",
"id": 6875946,
"login": "FredHaa",
"node_id": "MDQ6VXNlcjY4NzU5NDY=",
"organizations_url": "https://api.github.com/users/FredHaa/orgs",
"received_events_url": "https://api.github.com/users/FredHaa/received_events",
"repos_url": "https://api.github.com/users/FredHaa/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/FredHaa/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/FredHaa/subscriptions",
"type": "User",
"url": "https://api.github.com/users/FredHaa",
"user_view_type": "public"
}
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7546/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/7546/timeline
| null |
completed
| null | null | false
|
https://api.github.com/repos/huggingface/datasets/issues/7545
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/7545/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/7545/comments
|
https://api.github.com/repos/huggingface/datasets/issues/7545/events
|
https://github.com/huggingface/datasets/issues/7545
| 3,031,617,547
|
I_kwDODunzps60stAL
| 7,545
|
Networked Pull Through Cache
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/8764173?v=4",
"events_url": "https://api.github.com/users/wrmedford/events{/privacy}",
"followers_url": "https://api.github.com/users/wrmedford/followers",
"following_url": "https://api.github.com/users/wrmedford/following{/other_user}",
"gists_url": "https://api.github.com/users/wrmedford/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/wrmedford",
"id": 8764173,
"login": "wrmedford",
"node_id": "MDQ6VXNlcjg3NjQxNzM=",
"organizations_url": "https://api.github.com/users/wrmedford/orgs",
"received_events_url": "https://api.github.com/users/wrmedford/received_events",
"repos_url": "https://api.github.com/users/wrmedford/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/wrmedford/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/wrmedford/subscriptions",
"type": "User",
"url": "https://api.github.com/users/wrmedford",
"user_view_type": "public"
}
|
[
{
"color": "a2eeef",
"default": true,
"description": "New feature or request",
"id": 1935892871,
"name": "enhancement",
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement"
}
] |
open
| false
| null |
[] | null |
[] | 2025-04-30T15:16:33
| 2025-04-30T15:16:33
| null |
NONE
| null | null |
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
|
### Feature request
Introduce a HF_DATASET_CACHE_NETWORK_LOCATION configuration (e.g. an environment variable) together with a companion network cache service.
Enable a three-tier cache lookup for datasets:
1. Local on-disk cache
2. Configurable network cache proxy
3. Official Hugging Face Hub
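A purely illustrative sketch of that lookup order; the environment variable and the helper below are hypothetical, not an existing API:

```python
import os

def resolve_dataset_source(repo_id: str) -> str:
    # 1. Local on-disk cache
    local = os.path.join(
        os.path.expanduser(os.environ.get("HF_HOME", "~/.cache/huggingface")),
        "datasets", repo_id.replace("/", "___"),
    )
    if os.path.isdir(local):
        return local
    # 2. Configurable network cache proxy (hypothetical variable)
    network = os.environ.get("HF_DATASET_CACHE_NETWORK_LOCATION")
    if network:
        return f"{network.rstrip('/')}/{repo_id}"
    # 3. Official Hugging Face Hub
    return f"https://huggingface.co/datasets/{repo_id}"
```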
### Motivation
- Distributed training & ephemeral jobs: In high-performance or containerized clusters, relying solely on a local disk cache either becomes a streaming bottleneck or incurs a heavy cold-start penalty as each job must re-download datasets.
- Traffic & cost reduction: A pull-through network cache lets multiple consumers share a common cache layer, reducing duplicate downloads from the Hub and lowering egress costs.
- Better streaming adoption: By offloading repeat dataset pulls to a locally managed cache proxy, streaming workloads can achieve higher throughput and more predictable latency.
- Proven pattern: Similar proxy-cache solutions (e.g. Harbor’s Proxy Cache for Docker images) have demonstrated reliability and performance at scale: https://goharbor.io/docs/2.1.0/administration/configure-proxy-cache/
### Your contribution
I’m happy to draft the initial PR for adding HF_DATASET_CACHE_NETWORK_LOCATION support in datasets and sketch out a minimal cache-service prototype.
I have limited bandwidth so I would be looking for collaborators if anyone else is interested.
| null |
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7545/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/7545/timeline
| null | null | null | null | false
|
https://api.github.com/repos/huggingface/datasets/issues/7544
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/7544/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/7544/comments
|
https://api.github.com/repos/huggingface/datasets/issues/7544/events
|
https://github.com/huggingface/datasets/pull/7544
| 3,027,024,285
|
PR_kwDODunzps6UR4Nn
| 7,544
|
Add try_original_type to DatasetDict.map
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/11156001?v=4",
"events_url": "https://api.github.com/users/yoshitomo-matsubara/events{/privacy}",
"followers_url": "https://api.github.com/users/yoshitomo-matsubara/followers",
"following_url": "https://api.github.com/users/yoshitomo-matsubara/following{/other_user}",
"gists_url": "https://api.github.com/users/yoshitomo-matsubara/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/yoshitomo-matsubara",
"id": 11156001,
"login": "yoshitomo-matsubara",
"node_id": "MDQ6VXNlcjExMTU2MDAx",
"organizations_url": "https://api.github.com/users/yoshitomo-matsubara/orgs",
"received_events_url": "https://api.github.com/users/yoshitomo-matsubara/received_events",
"repos_url": "https://api.github.com/users/yoshitomo-matsubara/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/yoshitomo-matsubara/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/yoshitomo-matsubara/subscriptions",
"type": "User",
"url": "https://api.github.com/users/yoshitomo-matsubara",
"user_view_type": "public"
}
|
[] |
closed
| false
| null |
[] | null |
[
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_7544). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.",
"Sure! I just committed the changes",
"@lhoestq \r\nLet me know if there are other things to do before merge or other places to add `try_original_type` argument "
] | 2025-04-29T04:39:44
| 2025-05-05T14:42:49
| 2025-05-05T14:42:49
|
CONTRIBUTOR
| null | null | null |
This PR resolves #7472 for DatasetDict.
The previously merged PR #7483 added `try_original_type` to ArrowDataset, but DatasetDict was still missing `try_original_type`.
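A hedged usage sketch, assuming the argument is forwarded exactly like the existing `Dataset.map` one:

```python
from datasets import Dataset, DatasetDict

ds_dict = DatasetDict({"train": Dataset.from_dict({"ids": [[0, 1], [2, 3]]})})
ds_dict = ds_dict.map(
    lambda ex: {"ids": [i + 0.5 for i in ex["ids"]]},
    try_original_type=False,  # keep the newly inferred float type instead of casting back to int64
)
print(ds_dict["train"].features)
```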
Cc: @lhoestq
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq",
"user_view_type": "public"
}
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7544/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/7544/timeline
| null | null | 0
|
{
"diff_url": "https://github.com/huggingface/datasets/pull/7544.diff",
"html_url": "https://github.com/huggingface/datasets/pull/7544",
"merged_at": "2025-05-05T14:42:49Z",
"patch_url": "https://github.com/huggingface/datasets/pull/7544.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/7544"
}
| true
|
https://api.github.com/repos/huggingface/datasets/issues/7543
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/7543/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/7543/comments
|
https://api.github.com/repos/huggingface/datasets/issues/7543/events
|
https://github.com/huggingface/datasets/issues/7543
| 3,026,867,706
|
I_kwDODunzps60alX6
| 7,543
|
The memory-disk mapping failure issue of the map function (resolved, but with some suggestions)
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/76415358?v=4",
"events_url": "https://api.github.com/users/jxma20/events{/privacy}",
"followers_url": "https://api.github.com/users/jxma20/followers",
"following_url": "https://api.github.com/users/jxma20/following{/other_user}",
"gists_url": "https://api.github.com/users/jxma20/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/jxma20",
"id": 76415358,
"login": "jxma20",
"node_id": "MDQ6VXNlcjc2NDE1MzU4",
"organizations_url": "https://api.github.com/users/jxma20/orgs",
"received_events_url": "https://api.github.com/users/jxma20/received_events",
"repos_url": "https://api.github.com/users/jxma20/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/jxma20/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jxma20/subscriptions",
"type": "User",
"url": "https://api.github.com/users/jxma20",
"user_view_type": "public"
}
|
[] |
closed
| false
| null |
[] | null |
[] | 2025-04-29T03:04:59
| 2025-04-30T02:22:17
| 2025-04-30T02:22:17
|
NONE
| null | null |
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
|
### Describe the bug
## bug
When the map function processes a large dataset, it temporarily stores the data in a cache file on the disk. After the data is stored, the memory occupied by it is released. Therefore, when using the map function to process a large-scale dataset, only about `writer_batch_size` rows should be held in memory at any time.
However, I found that the map function does not actually reduce memory usage when I used it. At first, I thought there was a bug in the program, causing a memory leak—meaning the memory was not released after the data was stored in the cache. But later, I used a Linux command to check for recently modified files during program execution and found that no new files were created or modified. This indicates that the program did not store the dataset in the disk cache.
## bug solved
After modifying the parameters of the map function multiple times, I discovered the `cache_file_name` parameter. By changing it, the cache file can be stored in the specified directory. After making this change, I noticed that the cache file appeared. Initially, I found this quite incredible, but then I wondered if the cache file might have failed to be stored in a certain folder. This could be related to the fact that I don't have root privileges.
So, I delved into the source code of the map function to find out where the cache file would be stored by default. Eventually, I found the function `def _get_cache_file_path(self, fingerprint):`, which automatically generates the storage path for the cache file. The output was as follows: `/tmp/hf_datasets-j5qco9ug/cache-f2830487643b9cc2.arrow`. My hypothesis was confirmed: the lack of root privileges indeed prevented the cache file from being stored, which in turn prevented the release of memory. Therefore, changing the storage location to a folder where I have write access resolved the issue.
### Steps to reproduce the bug
my code
`train_data = train_data.map(process_fun, remove_columns=['image_name', 'question_type', 'concern', 'question', 'candidate_answers', 'answer'])`
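A hedged sketch of the fix described above applied to that call; the cache directory is an assumption and only needs to be somewhere the process can write:

```python
import os

cache_dir = "/data/my_writable_cache"  # assumed path with write permission
os.makedirs(cache_dir, exist_ok=True)
train_data = train_data.map(
    process_fun,
    remove_columns=['image_name', 'question_type', 'concern', 'question',
                    'candidate_answers', 'answer'],
    cache_file_name=os.path.join(cache_dir, "cache-train.arrow"),
)
```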
### Expected behavior
Although my bug has been resolved, it still took me nearly a week to search for relevant information and debug the program. However, if a warning or error message about insufficient cache file write permissions could be provided during program execution, I might have been able to identify the cause more quickly. Therefore, I hope this aspect can be improved. I am documenting this bug here so that friends who encounter similar issues can solve their problems in a timely manner.
### Environment info
python: 3.10.15
datasets: 3.5.0
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/76415358?v=4",
"events_url": "https://api.github.com/users/jxma20/events{/privacy}",
"followers_url": "https://api.github.com/users/jxma20/followers",
"following_url": "https://api.github.com/users/jxma20/following{/other_user}",
"gists_url": "https://api.github.com/users/jxma20/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/jxma20",
"id": 76415358,
"login": "jxma20",
"node_id": "MDQ6VXNlcjc2NDE1MzU4",
"organizations_url": "https://api.github.com/users/jxma20/orgs",
"received_events_url": "https://api.github.com/users/jxma20/received_events",
"repos_url": "https://api.github.com/users/jxma20/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/jxma20/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jxma20/subscriptions",
"type": "User",
"url": "https://api.github.com/users/jxma20",
"user_view_type": "public"
}
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7543/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/7543/timeline
| null |
completed
| null | null | false
|
https://api.github.com/repos/huggingface/datasets/issues/7542
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/7542/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/7542/comments
|
https://api.github.com/repos/huggingface/datasets/issues/7542/events
|
https://github.com/huggingface/datasets/pull/7542
| 3,025,054,630
|
PR_kwDODunzps6ULHxo
| 7,542
|
set dev version
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq",
"user_view_type": "public"
}
|
[] |
closed
| false
| null |
[] | null |
[
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_7542). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update."
] | 2025-04-28T14:03:48
| 2025-04-28T14:08:37
| 2025-04-28T14:04:00
|
MEMBER
| null | null | null | null |
{
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq",
"user_view_type": "public"
}
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7542/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/7542/timeline
| null | null | 0
|
{
"diff_url": "https://github.com/huggingface/datasets/pull/7542.diff",
"html_url": "https://github.com/huggingface/datasets/pull/7542",
"merged_at": "2025-04-28T14:04:00Z",
"patch_url": "https://github.com/huggingface/datasets/pull/7542.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/7542"
}
| true
|