url stringlengths 61 61 | repository_url stringclasses 1 value | labels_url stringlengths 75 75 | comments_url stringlengths 70 70 | events_url stringlengths 68 68 | html_url stringlengths 51 51 | id int64 1.13B 3.52B | node_id stringlengths 18 18 | number int64 3.7k 7.82k | title stringlengths 1 290 | labels listlengths 0 4 | state stringclasses 2 values | locked bool 1 class | assignee null | assignees listlengths 0 3 | milestone null | comments int64 0 49 | created_at int64 1.64k 1.76k | updated_at int64 1.64k 1.76k | closed_at int64 1.64k 1.76k ⌀ | author_association stringclasses 4 values | type null | active_lock_reason null | body stringlengths 1 58.6k ⌀ | closed_by null | timeline_url stringlengths 70 70 | performed_via_github_app null | state_reason stringclasses 4 values | user.login stringlengths 3 26 | user.id int64 3.5k 219M | user.node_id stringlengths 12 20 | user.avatar_url stringlengths 48 53 | user.gravatar_id stringclasses 1 value | user.url stringlengths 32 55 | user.html_url stringlengths 22 45 | user.followers_url stringlengths 42 65 | user.following_url stringlengths 55 78 | user.gists_url stringlengths 48 71 | user.starred_url stringlengths 55 78 | user.subscriptions_url stringlengths 46 69 | user.organizations_url stringlengths 37 60 | user.repos_url stringlengths 38 61 | user.events_url stringlengths 49 72 | user.received_events_url stringlengths 48 71 | user.type stringclasses 1 value | user.user_view_type stringclasses 1 value | user.site_admin bool 1 class | sub_issues_summary.total float64 0 0 | sub_issues_summary.completed float64 0 0 | sub_issues_summary.percent_completed float64 0 0 | issue_dependencies_summary.blocked_by float64 0 0 | issue_dependencies_summary.total_blocked_by float64 0 0 | issue_dependencies_summary.blocking float64 0 0 | issue_dependencies_summary.total_blocking float64 0 0 | reactions.url stringlengths 71 71 | reactions.total_count int64 0 61 | reactions.+1 int64 0 39 | reactions.-1 int64 0 0 | reactions.laugh 
int64 0 0 | reactions.hooray int64 0 2 | reactions.confused int64 0 3 | reactions.heart int64 0 22 | reactions.rocket int64 0 6 | reactions.eyes int64 0 5 | draft null | pull_request.url null | pull_request.html_url null | pull_request.diff_url null | pull_request.patch_url null | pull_request.merged_at null | closed_by.login stringclasses 346 values | closed_by.id float64 45.3k 183M ⌀ | closed_by.node_id stringclasses 346 values | closed_by.avatar_url stringclasses 346 values | closed_by.gravatar_id stringclasses 1 value | closed_by.url stringclasses 346 values | closed_by.html_url stringclasses 346 values | closed_by.followers_url stringclasses 346 values | closed_by.following_url stringclasses 346 values | closed_by.gists_url stringclasses 346 values | closed_by.starred_url stringclasses 346 values | closed_by.subscriptions_url stringclasses 346 values | closed_by.organizations_url stringclasses 346 values | closed_by.repos_url stringclasses 346 values | closed_by.events_url stringclasses 346 values | closed_by.received_events_url stringclasses 346 values | closed_by.type stringclasses 1 value | closed_by.user_view_type stringclasses 1 value | closed_by.site_admin bool 1 class | assignee.login stringclasses 52 values | assignee.id float64 192k 135M ⌀ | assignee.node_id stringclasses 52 values | assignee.avatar_url stringclasses 52 values | assignee.gravatar_id stringclasses 1 value | assignee.url stringclasses 52 values | assignee.html_url stringclasses 52 values | assignee.followers_url stringclasses 52 values | assignee.following_url stringclasses 52 values | assignee.gists_url stringclasses 52 values | assignee.starred_url stringclasses 52 values | assignee.subscriptions_url stringclasses 52 values | assignee.organizations_url stringclasses 52 values | assignee.repos_url stringclasses 52 values | assignee.events_url stringclasses 52 values | assignee.received_events_url stringclasses 52 values | assignee.type stringclasses 1 value | assignee.user_view_type 
stringclasses 1 value | assignee.site_admin bool 1 class | milestone.url stringclasses 1 value | milestone.html_url stringclasses 1 value | milestone.labels_url stringclasses 1 value | milestone.id float64 9.04M 9.04M ⌀ | milestone.node_id stringclasses 1 value | milestone.number float64 10 10 ⌀ | milestone.title stringclasses 1 value | milestone.description stringclasses 1 value | milestone.creator.login stringclasses 1 value | milestone.creator.id float64 47.5M 47.5M ⌀ | milestone.creator.node_id stringclasses 1 value | milestone.creator.avatar_url stringclasses 1 value | milestone.creator.gravatar_id stringclasses 1 value | milestone.creator.url stringclasses 1 value | milestone.creator.html_url stringclasses 1 value | milestone.creator.followers_url stringclasses 1 value | milestone.creator.following_url stringclasses 1 value | milestone.creator.gists_url stringclasses 1 value | milestone.creator.starred_url stringclasses 1 value | milestone.creator.subscriptions_url stringclasses 1 value | milestone.creator.organizations_url stringclasses 1 value | milestone.creator.repos_url stringclasses 1 value | milestone.creator.events_url stringclasses 1 value | milestone.creator.received_events_url stringclasses 1 value | milestone.creator.type stringclasses 1 value | milestone.creator.user_view_type stringclasses 1 value | milestone.creator.site_admin bool 1 class | milestone.open_issues float64 3 3 ⌀ | milestone.closed_issues float64 5 5 ⌀ | milestone.state stringclasses 1 value | milestone.created_at int64 1.68k 1.68k ⌀ | milestone.updated_at int64 1.72k 1.72k ⌀ | milestone.due_on null | milestone.closed_at null | is_pull_request bool 1 class | comments_text listlengths 0 30 |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
https://api.github.com/repos/huggingface/datasets/issues/7190 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7190/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7190/comments | https://api.github.com/repos/huggingface/datasets/issues/7190/events | https://github.com/huggingface/datasets/issues/7190 | 2,562,162,725 | I_kwDODunzps6Yt4Al | 7,190 | Datasets conflicts with fsspec 2024.9 | [] | open | false | null | [] | null | 1 | 1,727 | 1,728 | null | NONE | null | null | ### Describe the bug
Installing both at their latest versions is not possible:
`pip install "datasets==3.0.1" "fsspec==2024.9.0"`
But using an older version of datasets works:
`pip install "datasets==1.24.4" "fsspec==2024.9.0"`
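The resolver fails because datasets 3.0.1 pins an fsspec upper bound below 2024.9.0 (the exact bound, `<=2024.6.1`, is my reading of that release's setup.py and should be double-checked). A minimal sketch of the version comparison pip has to reconcile:

```python
# Sketch of the conflict: the requested fsspec release exceeds the upper
# bound assumed to be pinned by datasets 3.0.1 (fsspec<=2024.6.1).
def parse(version):
    return tuple(int(part) for part in version.split("."))

requested = "2024.9.0"
upper_bound = "2024.6.1"
print(parse(requested) > parse(upper_bound))  # True: pip cannot satisfy both
```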
### Steps to reproduce the bug
`pip install "datasets==3.0.1" "fsspec==2024.9.0"`
### Expected behavior
Both packages should install together at their latest versions.
### Environment info
Debian 11
python 3.10.15 | null | https://api.github.com/repos/huggingface/datasets/issues/7190/timeline | null | null | cw-igormorgado | 162,599,174 | U_kgDOCbERBg | https://avatars.githubusercontent.com/u/162599174?v=4 | https://api.github.com/users/cw-igormorgado | https://github.com/cw-igormorgado | https://api.github.com/users/cw-igormorgado/followers | https://api.github.com/users/cw-igormorgado/following{/other_user} | https://api.github.com/users/cw-igormorgado/gists{/gist_id} | https://api.github.com/users/cw-igormorgado/starred{/owner}{/repo} | https://api.github.com/users/cw-igormorgado/subscriptions | https://api.github.com/users/cw-igormorgado/orgs | https://api.github.com/users/cw-igormorgado/repos | https://api.github.com/users/cw-igormorgado/events{/privacy} | https://api.github.com/users/cw-igormorgado/received_events | User | public | false | 0 | 0 | 0 | 0 | 0 | 0 | 0 | https://api.github.com/repos/huggingface/datasets/issues/7190/reactions | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | false | [
"Yes, I need to use the latest version of fsspec and datasets for my usecase. \r\nhttps://github.com/fsspec/s3fs/pull/888#issuecomment-2404204606\r\nhttps://github.com/apache/arrow/issues/34363#issuecomment-2403553473\r\n\r\nlast version where things install without conflict is: 2.14.4\r\n\r\nSo this issue starts f... | |
https://api.github.com/repos/huggingface/datasets/issues/7189 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7189/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7189/comments | https://api.github.com/repos/huggingface/datasets/issues/7189/events | https://github.com/huggingface/datasets/issues/7189 | 2,562,152,845 | I_kwDODunzps6Yt1mN | 7,189 | Audio preview in dataset viewer for audio array data without a path/filename | [
{
"id": 1935892871,
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement",
"name": "enhancement",
"color": "a2eeef",
"default": true,
"description": "New feature or request"
}
] | open | false | null | [] | null | 0 | 1,727 | 1,727 | null | NONE | null | null | ### Feature request
Hugging Face has quite a comprehensive set of guides for [audio datasets](https://huggingface.co/docs/datasets/en/audio_dataset). It seems, however, that all these guides assume the audio array data decoded into an HF dataset always originates from individual files. The [Audio dataclass](https://github.com/huggingface/datasets/blob/3.0.1/src/datasets/features/audio.py#L20) appears designed with this assumption in mind: looking at its source code, it returns a dictionary with the keys `path`, `array` and `sampling_rate`.
However, sometimes users have different pipelines where they decode the audio arrays themselves. This feature request asks for clarification in the guides on whether it is possible, and if so how, to insert already decoded audio array data into datasets (a pandas DataFrame, an HF dataset, or similar) that are later saved as Parquet, while still getting a functioning audio preview in the dataset viewer.
Do I perhaps need to write a tempfile of my audio array slice to wav and capture the bytes object with `io.BytesIO` and pass that to `Audio()`?
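Roughly along those lines, but a tempfile can be avoided by encoding the slice in memory and storing the resulting bytes. A sketch using only the stdlib `wave` module (the `{'bytes': ..., 'path': None}` layout this would feed into is my assumption about what the `Audio` feature serializes, not something confirmed by the docs):

```python
import array
import io
import wave

def pcm16_wav_bytes(samples, sampling_rate):
    """Encode mono float samples in [-1, 1] as 16-bit PCM WAV bytes."""
    pcm = array.array("h", (int(max(-1.0, min(1.0, s)) * 32767) for s in samples))
    buf = io.BytesIO()
    with wave.open(buf, "wb") as writer:
        writer.setnchannels(1)
        writer.setsampwidth(2)  # 16-bit samples
        writer.setframerate(sampling_rate)
        writer.writeframes(pcm.tobytes())
    return buf.getvalue()

wav_bytes = pcm16_wav_bytes([0.0, 0.5, -0.5], 16_000)
print(wav_bytes[:4])  # b'RIFF'
```

The resulting `wav_bytes` could then be stored in the audio column as `{'bytes': wav_bytes, 'path': None}` before casting the column to `Audio()`.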
### Motivation
I'm working with large audio datasets, and my pipeline reads (decodes) audio from larger files, and slices the relevant portions of audio from that larger file based on metadata I have available.
The pipeline is designed this way to avoid having to store multiple copies of data, and to avoid having to store tens of millions of small files.
I tried [test-uploading parquet files](https://huggingface.co/datasets/Lauler/riksdagen_test) where I store the audio array data of decoded slices of audio in an `audio` column with a dictionary with the keys `path`, `array` and `sampling_rate`. But I don't know the secret sauce of what the Huggingface Hub expects and requires to be able to display audio previews correctly.
### Your contribution
I could contribute a tool agnostic guide of creating HF audio datasets directly as parquet to the HF documentation if there is an interest. Provided you help me figure out the secret sauce of what the dataset viewer expects to display the preview correctly. | null | https://api.github.com/repos/huggingface/datasets/issues/7189/timeline | null | null | Lauler | 7,157,234 | MDQ6VXNlcjcxNTcyMzQ= | https://avatars.githubusercontent.com/u/7157234?v=4 | https://api.github.com/users/Lauler | https://github.com/Lauler | https://api.github.com/users/Lauler/followers | https://api.github.com/users/Lauler/following{/other_user} | https://api.github.com/users/Lauler/gists{/gist_id} | https://api.github.com/users/Lauler/starred{/owner}{/repo} | https://api.github.com/users/Lauler/subscriptions | https://api.github.com/users/Lauler/orgs | https://api.github.com/users/Lauler/repos | https://api.github.com/users/Lauler/events{/privacy} | https://api.github.com/users/Lauler/received_events | User | public | false | 0 | 0 | 0 | 0 | 0 | 0 | 0 | https://api.github.com/repos/huggingface/datasets/issues/7189/reactions | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | false | [] | |
https://api.github.com/repos/huggingface/datasets/issues/7187 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7187/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7187/comments | https://api.github.com/repos/huggingface/datasets/issues/7187/events | https://github.com/huggingface/datasets/issues/7187 | 2,560,501,308 | I_kwDODunzps6YniY8 | 7,187 | shard_data_sources() got an unexpected keyword argument 'worker_id' | [] | open | false | null | [] | null | 0 | 1,727 | 1,727 | null | NONE | null | null | ### Describe the bug
```
[rank0]: File "/home/qinghao/miniconda3/envs/doremi/lib/python3.10/site-packages/datasets/iterable_dataset.py", line 238, in __iter__
[rank0]: for key_example in islice(self.generate_examples_fn(**gen_kwags), shard_example_idx_start, None):
[rank0]: File "/home/qinghao/miniconda3/envs/doremi/lib/python3.10/site-packages/datasets/packaged_modules/generator/generator.py", line 32, in _generate_examples
[rank0]: for idx, ex in enumerate(self.config.generator(**gen_kwargs)):
[rank0]: File "/home/qinghao/workdir/doremi/doremi/dataloader.py", line 337, in take_data_generator
[rank0]: for ex in ds:
[rank0]: File "/home/qinghao/miniconda3/envs/doremi/lib/python3.10/site-packages/datasets/iterable_dataset.py", line 1791, in __iter__
[rank0]: yield from self._iter_pytorch()
[rank0]: File "/home/qinghao/miniconda3/envs/doremi/lib/python3.10/site-packages/datasets/iterable_dataset.py", line 1704, in _iter_pytorch
[rank0]: ex_iterable = ex_iterable.shard_data_sources(worker_id=worker_info.id, num_workers=worker_info.num_workers)
[rank0]: TypeError: UpdatableRandomlyCyclingMultiSourcesExamplesIterable.shard_data_sources() got an unexpected keyword argument 'worker_id'
```
### Steps to reproduce the bug
An `IterableDataset` built on a custom examples iterable cannot be iterated through PyTorch (see the traceback above).
### Expected behavior
This works on datasets==2.10, but raises an error on later versions.
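For custom iterables like the one in the traceback, the fix on the user side is to accept the new keyword arguments that `_iter_pytorch` passes. A minimal sketch (the class name and round-robin sharding scheme are illustrative, not the actual doremi code):

```python
# Newer datasets versions call ex_iterable.shard_data_sources(worker_id=..., num_workers=...)
# from _iter_pytorch, so custom iterables must accept those keywords.
class CyclingExamplesIterable:
    def __init__(self, sources):
        self.sources = sources

    def shard_data_sources(self, worker_id, num_workers):
        # Give each DataLoader worker a disjoint round-robin slice of the sources.
        kept = [s for i, s in enumerate(self.sources) if i % num_workers == worker_id]
        return CyclingExamplesIterable(kept)

shard = CyclingExamplesIterable(["a", "b", "c", "d"]).shard_data_sources(worker_id=0, num_workers=2)
print(shard.sources)  # ['a', 'c']
```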
### Environment info
datasets==3.0.1 | null | https://api.github.com/repos/huggingface/datasets/issues/7187/timeline | null | null | Qinghao-Hu | 27,758,466 | MDQ6VXNlcjI3NzU4NDY2 | https://avatars.githubusercontent.com/u/27758466?v=4 | https://api.github.com/users/Qinghao-Hu | https://github.com/Qinghao-Hu | https://api.github.com/users/Qinghao-Hu/followers | https://api.github.com/users/Qinghao-Hu/following{/other_user} | https://api.github.com/users/Qinghao-Hu/gists{/gist_id} | https://api.github.com/users/Qinghao-Hu/starred{/owner}{/repo} | https://api.github.com/users/Qinghao-Hu/subscriptions | https://api.github.com/users/Qinghao-Hu/orgs | https://api.github.com/users/Qinghao-Hu/repos | https://api.github.com/users/Qinghao-Hu/events{/privacy} | https://api.github.com/users/Qinghao-Hu/received_events | User | public | false | 0 | 0 | 0 | 0 | 0 | 0 | 0 | https://api.github.com/repos/huggingface/datasets/issues/7187/reactions | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | false | [] | |
https://api.github.com/repos/huggingface/datasets/issues/7186 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7186/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7186/comments | https://api.github.com/repos/huggingface/datasets/issues/7186/events | https://github.com/huggingface/datasets/issues/7186 | 2,560,323,917 | I_kwDODunzps6Ym3FN | 7,186 | pinning `dill<0.3.9` without pinning `multiprocess` | [] | closed | false | null | [
{
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_... | null | 0 | 1,727 | 1,727 | 1,727 | NONE | null | null | ### Describe the bug
The [latest `multiprocess` release](https://github.com/uqfoundation/multiprocess/releases/tag/0.70.17) requires `dill>=0.3.9`, which causes issues when installing `datasets` without backtracking during package version resolution. Is it possible to add a pin for `multiprocess`, e.g. `multiprocess<=0.70.16`, so that the `dill` version stays compatible?
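Until such a pin exists, one workaround on the installer side (the version bounds here are taken from this report, not verified against every release) is a pip constraints file:

```
# constraints.txt — hold multiprocess to the last release compatible with dill<0.3.9
multiprocess<=0.70.16
dill<0.3.9
```

Installed with `pip install -c constraints.txt datasets`, this keeps the resolver from picking `multiprocess` 0.70.17.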
### Steps to reproduce the bug
NA
### Expected behavior
NA
### Environment info
NA | null | https://api.github.com/repos/huggingface/datasets/issues/7186/timeline | null | completed | shubhbapna | 38,372,682 | MDQ6VXNlcjM4MzcyNjgy | https://avatars.githubusercontent.com/u/38372682?v=4 | https://api.github.com/users/shubhbapna | https://github.com/shubhbapna | https://api.github.com/users/shubhbapna/followers | https://api.github.com/users/shubhbapna/following{/other_user} | https://api.github.com/users/shubhbapna/gists{/gist_id} | https://api.github.com/users/shubhbapna/starred{/owner}{/repo} | https://api.github.com/users/shubhbapna/subscriptions | https://api.github.com/users/shubhbapna/orgs | https://api.github.com/users/shubhbapna/repos | https://api.github.com/users/shubhbapna/events{/privacy} | https://api.github.com/users/shubhbapna/received_events | User | public | false | 0 | 0 | 0 | 0 | 0 | 0 | 0 | https://api.github.com/repos/huggingface/datasets/issues/7186/reactions | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | null | null | null | null | albertvillanova | 8,515,462 | MDQ6VXNlcjg1MTU0NjI= | https://avatars.githubusercontent.com/u/8515462?v=4 | https://api.github.com/users/albertvillanova | https://github.com/albertvillanova | https://api.github.com/users/albertvillanova/followers | https://api.github.com/users/albertvillanova/following{/other_user} | https://api.github.com/users/albertvillanova/gists{/gist_id} | https://api.github.com/users/albertvillanova/starred{/owner}{/repo} | https://api.github.com/users/albertvillanova/subscriptions | https://api.github.com/users/albertvillanova/orgs | https://api.github.com/users/albertvillanova/repos | https://api.github.com/users/albertvillanova/events{/privacy} | https://api.github.com/users/albertvillanova/received_events | User | public | false | albertvillanova | 8,515,462 | MDQ6VXNlcjg1MTU0NjI= | https://avatars.githubusercontent.com/u/8515462?v=4 | https://api.github.com/users/albertvillanova | https://github.com/albertvillanova | 
https://api.github.com/users/albertvillanova/followers | https://api.github.com/users/albertvillanova/following{/other_user} | https://api.github.com/users/albertvillanova/gists{/gist_id} | https://api.github.com/users/albertvillanova/starred{/owner}{/repo} | https://api.github.com/users/albertvillanova/subscriptions | https://api.github.com/users/albertvillanova/orgs | https://api.github.com/users/albertvillanova/repos | https://api.github.com/users/albertvillanova/events{/privacy} | https://api.github.com/users/albertvillanova/received_events | User | public | false | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | false | [] | |||
https://api.github.com/repos/huggingface/datasets/issues/7185 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7185/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7185/comments | https://api.github.com/repos/huggingface/datasets/issues/7185/events | https://github.com/huggingface/datasets/issues/7185 | 2,558,508,748 | I_kwDODunzps6Yf77M | 7,185 | CI benchmarks are broken | [
{
"id": 4296013012,
"node_id": "LA_kwDODunzps8AAAABAA_01A",
"url": "https://api.github.com/repos/huggingface/datasets/labels/maintenance",
"name": "maintenance",
"color": "d4c5f9",
"default": false,
"description": "Maintenance tasks"
}
] | closed | false | null | [] | null | 1 | 1,727 | 1,728 | 1,728 | MEMBER | null | null | Since Aug 30, 2024, CI benchmarks are broken: https://github.com/huggingface/datasets/actions/runs/11108421214/job/30861323975
```
{"level":"error","message":"Resource not accessible by integration","name":"HttpError","request":{"body":"{\"body\":\"<details>\\n<summary>Show benchmarks</summary>\\n\\nPyArrow==8.0.0\\n\\n<details>\\n<summary>Show updated benchmarks!</summary>\\n\\n### Benchmark: benchmark_array_xd.json\\n\\n| metric | read_batch_formatted_as_numpy after write_array2d |
...
"headers":{"accept":"application/vnd.github.v3+json","authorization":"token [REDACTED]","content-type":"application/json; charset=utf-8","user-agent":"octokit-rest.js/18.0.0 octokit-core.js/3.6.0 Node.js/16.20.2 (linux; x64)"},"method":"POST","request":{"agent":{"_events":{},"_eventsCount":2,"cache":
...
"response":{"data":{"documentation_url":"https://docs.github.com/rest/issues/comments#create-an-issue-comment","message":"Resource not accessible by integration","status":"403"},
...
"stack":"HttpError: Resource not accessible by integration\n at /usr/lib/node_modules/@dvcorg/cml/node_modules/@octokit/request/dist-node/index.js:86:21\n at processTicksAndRejections (node:internal/process/task_queues:96:5)\n at async Job.doExecute (/usr/lib/node_modules/@dvcorg/cml/node_modules/bottleneck/light.js:405:18)","status":403}
``` | null | https://api.github.com/repos/huggingface/datasets/issues/7185/timeline | null | completed | albertvillanova | 8,515,462 | MDQ6VXNlcjg1MTU0NjI= | https://avatars.githubusercontent.com/u/8515462?v=4 | https://api.github.com/users/albertvillanova | https://github.com/albertvillanova | https://api.github.com/users/albertvillanova/followers | https://api.github.com/users/albertvillanova/following{/other_user} | https://api.github.com/users/albertvillanova/gists{/gist_id} | https://api.github.com/users/albertvillanova/starred{/owner}{/repo} | https://api.github.com/users/albertvillanova/subscriptions | https://api.github.com/users/albertvillanova/orgs | https://api.github.com/users/albertvillanova/repos | https://api.github.com/users/albertvillanova/events{/privacy} | https://api.github.com/users/albertvillanova/received_events | User | public | false | 0 | 0 | 0 | 0 | 0 | 0 | 0 | https://api.github.com/repos/huggingface/datasets/issues/7185/reactions | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | null | null | null | null | albertvillanova | 8,515,462 | MDQ6VXNlcjg1MTU0NjI= | https://avatars.githubusercontent.com/u/8515462?v=4 | https://api.github.com/users/albertvillanova | https://github.com/albertvillanova | https://api.github.com/users/albertvillanova/followers | https://api.github.com/users/albertvillanova/following{/other_user} | https://api.github.com/users/albertvillanova/gists{/gist_id} | https://api.github.com/users/albertvillanova/starred{/owner}{/repo} | https://api.github.com/users/albertvillanova/subscriptions | https://api.github.com/users/albertvillanova/orgs | https://api.github.com/users/albertvillanova/repos | https://api.github.com/users/albertvillanova/events{/privacy} | https://api.github.com/users/albertvillanova/received_events | User | public | false | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 
null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | false | [
"Fixed by #7205"
] | ||
https://api.github.com/repos/huggingface/datasets/issues/7183 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7183/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7183/comments | https://api.github.com/repos/huggingface/datasets/issues/7183/events | https://github.com/huggingface/datasets/issues/7183 | 2,556,789,055 | I_kwDODunzps6YZYE_ | 7,183 | CI is broken for deps-latest | [] | closed | false | null | [] | null | 0 | 1,727 | 1,727 | 1,727 | MEMBER | null | null | See: https://github.com/huggingface/datasets/actions/runs/11106149906/job/30853879890
```
=========================== short test summary info ============================
FAILED tests/test_arrow_dataset.py::BaseDatasetTest::test_filter_caching_on_disk - AssertionError: Lists differ: [{'fi[44 chars] {'filename': '/tmp/tmp6xcyyjs4/cache-9533fe2601cd3e48.arrow'}] != [{'fi[44 chars] {'filename': '/tmp/tmp6xcyyjs4/cache-e6e0a8b830976289.arrow'}]
First differing element 1:
{'filename': '/tmp/tmp6xcyyjs4/cache-9533fe2601cd3e48.arrow'}
{'filename': '/tmp/tmp6xcyyjs4/cache-e6e0a8b830976289.arrow'}
[{'filename': '/tmp/tmp6xcyyjs4/dataset0.arrow'},
- {'filename': '/tmp/tmp6xcyyjs4/cache-9533fe2601cd3e48.arrow'}]
? ^^^^^ --------
+ {'filename': '/tmp/tmp6xcyyjs4/cache-e6e0a8b830976289.arrow'}]
? ++++++++++ ^^ +
FAILED tests/test_arrow_dataset.py::BaseDatasetTest::test_map_caching_on_disk - AssertionError: Lists differ: [{'filename': '/tmp/tmp5gxrti_n/cache-e58d327daec8626f.arrow'}] != [{'filename': '/tmp/tmp5gxrti_n/cache-d87234c5763e54a3.arrow'}]
First differing element 0:
{'filename': '/tmp/tmp5gxrti_n/cache-e58d327daec8626f.arrow'}
{'filename': '/tmp/tmp5gxrti_n/cache-d87234c5763e54a3.arrow'}
- [{'filename': '/tmp/tmp5gxrti_n/cache-e58d327daec8626f.arrow'}]
? ^^ -----------
+ [{'filename': '/tmp/tmp5gxrti_n/cache-d87234c5763e54a3.arrow'}]
? +++++++++++ ^^
FAILED tests/test_fingerprint.py::TokenizersHashTest::test_hash_regex - NameError: name 'log' is not defined
FAILED tests/test_fingerprint.py::RecurseHashTest::test_hash_ignores_line_definition_of_function - AssertionError: '52e56ee04ad92499' != '0a4f75cec280f634'
- 52e56ee04ad92499
+ 0a4f75cec280f634
FAILED tests/test_fingerprint.py::RecurseHashTest::test_hash_ipython_function - AssertionError: 'a6bd2041ca63d6c0' != '517bf36b7eecdef5'
- a6bd2041ca63d6c0
+ 517bf36b7eecdef5
FAILED tests/test_fingerprint.py::HashingTest::test_hash_tiktoken_encoding - NameError: name 'log' is not defined
FAILED tests/test_fingerprint.py::HashingTest::test_hash_torch_compiled_module - NameError: name 'log' is not defined
FAILED tests/test_fingerprint.py::HashingTest::test_hash_torch_generator - NameError: name 'log' is not defined
FAILED tests/test_fingerprint.py::HashingTest::test_hash_torch_tensor - NameError: name 'log' is not defined
FAILED tests/test_fingerprint.py::HashingTest::test_set_doesnt_depend_on_order - NameError: name 'log' is not defined
FAILED tests/test_fingerprint.py::HashingTest::test_set_stable - NameError: name 'log' is not defined
ERROR tests/test_iterable_dataset.py::test_iterable_dataset_from_file - NameError: name 'log' is not defined
= 11 failed, 2850 passed, 3 skipped, 23 warnings, 1 error in 191.06s (0:03:11) =
``` | null | https://api.github.com/repos/huggingface/datasets/issues/7183/timeline | null | completed | albertvillanova | 8,515,462 | MDQ6VXNlcjg1MTU0NjI= | https://avatars.githubusercontent.com/u/8515462?v=4 | https://api.github.com/users/albertvillanova | https://github.com/albertvillanova | https://api.github.com/users/albertvillanova/followers | https://api.github.com/users/albertvillanova/following{/other_user} | https://api.github.com/users/albertvillanova/gists{/gist_id} | https://api.github.com/users/albertvillanova/starred{/owner}{/repo} | https://api.github.com/users/albertvillanova/subscriptions | https://api.github.com/users/albertvillanova/orgs | https://api.github.com/users/albertvillanova/repos | https://api.github.com/users/albertvillanova/events{/privacy} | https://api.github.com/users/albertvillanova/received_events | User | public | false | 0 | 0 | 0 | 0 | 0 | 0 | 0 | https://api.github.com/repos/huggingface/datasets/issues/7183/reactions | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | null | null | null | null | albertvillanova | 8,515,462 | MDQ6VXNlcjg1MTU0NjI= | https://avatars.githubusercontent.com/u/8515462?v=4 | https://api.github.com/users/albertvillanova | https://github.com/albertvillanova | https://api.github.com/users/albertvillanova/followers | https://api.github.com/users/albertvillanova/following{/other_user} | https://api.github.com/users/albertvillanova/gists{/gist_id} | https://api.github.com/users/albertvillanova/starred{/owner}{/repo} | https://api.github.com/users/albertvillanova/subscriptions | https://api.github.com/users/albertvillanova/orgs | https://api.github.com/users/albertvillanova/repos | https://api.github.com/users/albertvillanova/events{/privacy} | https://api.github.com/users/albertvillanova/received_events | User | public | false | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 
null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | false | [] | ||
https://api.github.com/repos/huggingface/datasets/issues/7180 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7180/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7180/comments | https://api.github.com/repos/huggingface/datasets/issues/7180/events | https://github.com/huggingface/datasets/issues/7180 | 2,554,244,750 | I_kwDODunzps6YPq6O | 7,180 | Memory leak when wrapping datasets into PyTorch Dataset without explicit deletion | [] | closed | false | null | [] | null | 1 | 1,727 | 1,727 | 1,727 | NONE | null | null | ### Describe the bug
I've encountered a memory leak when wrapping the HuggingFace dataset into a PyTorch Dataset. The RAM usage constantly increases during iteration if items are not explicitly deleted after use.
### Steps to reproduce the bug
Steps to reproduce:
Create a PyTorch Dataset wrapper for 'nebula/cc12m':
````
import io

from PIL import Image
from torch.utils.data import Dataset
from tqdm import tqdm
from datasets import load_dataset
from torchvision import transforms

Image.MAX_IMAGE_PIXELS = None

class CC12M(Dataset):
    def __init__(self, path_or_name='nebula/cc12m', split='train', transform=None, single_caption=True):
        self.raw_dataset = load_dataset(path_or_name)[split]
        if transform is None:
            self.transform = transforms.Compose([
                transforms.Resize((224, 224)),
                transforms.CenterCrop(224),
                transforms.ToTensor(),
                transforms.Normalize(
                    mean=[0.48145466, 0.4578275, 0.40821073],
                    std=[0.26862954, 0.26130258, 0.27577711]
                )
            ])
        else:
            self.transform = transforms.Compose(transform)
        self.single_caption = single_caption
        self.length = len(self.raw_dataset)

    def __len__(self):
        return self.length

    def __getitem__(self, index):
        item = self.raw_dataset[index]
        caption = item['txt']
        with io.BytesIO(item['webp']) as buffer:
            image = Image.open(buffer).convert('RGB')
        if self.transform:
            image = self.transform(image)
        # del item  # Uncomment this line to prevent the memory leak
        return image, caption
````
Iterate through the dataset without the `del item` line in `__getitem__`.
Observe RAM usage increasing constantly.
Add `del item` at the end of `__getitem__`:
```
    def __getitem__(self, index):
        item = self.raw_dataset[index]
        caption = item['txt']
        with io.BytesIO(item['webp']) as buffer:
            image = Image.open(buffer).convert('RGB')
        if self.transform:
            image = self.transform(image)
        del item  # This line prevents the memory leak
        return image, caption
```
Iterate through the dataset again and observe that RAM usage remains stable.
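One way to observe the growth is to log peak RSS while iterating. This is a monitoring sketch, not part of the original report: the commented loop assumes the `CC12M` wrapper above, and the step interval is illustrative (`resource` is Unix-only).

```python
import resource
import sys


def peak_rss_mb():
    # ru_maxrss is reported in kilobytes on Linux and in bytes on macOS
    peak = resource.getrusage(resource.RUSAGE_SELF).ru_maxrss
    return peak / (1024 * 1024) if sys.platform == "darwin" else peak / 1024


# Hypothetical monitoring loop over the wrapper defined above:
# ds = CC12M()
# for i in range(len(ds)):
#     image, caption = ds[i]
#     if i % 10_000 == 0:
#         print(f"item {i}: peak RSS {peak_rss_mb():.0f} MB")
```

With the `del item` line commented out, the logged peak should climb steadily; with it enabled, it should plateau.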
### Expected behavior
Expected behavior:
RAM usage should remain stable during iteration without needing to explicitly delete items.
Actual behavior:
RAM usage constantly increases unless items are explicitly deleted after use.
### Environment info
- `datasets` version: 2.21.0
- Platform: Linux-4.18.0-513.5.1.el8_9.x86_64-x86_64-with-glibc2.28
- Python version: 3.12.4
- `huggingface_hub` version: 0.24.6
- PyArrow version: 17.0.0
- Pandas version: 2.2.2
- `fsspec` version: 2024.6.1
| null | https://api.github.com/repos/huggingface/datasets/issues/7180/timeline | null | completed | iamwangyabin | 38,123,329 | MDQ6VXNlcjM4MTIzMzI5 | https://avatars.githubusercontent.com/u/38123329?v=4 | https://api.github.com/users/iamwangyabin | https://github.com/iamwangyabin | https://api.github.com/users/iamwangyabin/followers | https://api.github.com/users/iamwangyabin/following{/other_user} | https://api.github.com/users/iamwangyabin/gists{/gist_id} | https://api.github.com/users/iamwangyabin/starred{/owner}{/repo} | https://api.github.com/users/iamwangyabin/subscriptions | https://api.github.com/users/iamwangyabin/orgs | https://api.github.com/users/iamwangyabin/repos | https://api.github.com/users/iamwangyabin/events{/privacy} | https://api.github.com/users/iamwangyabin/received_events | User | public | false | 0 | 0 | 0 | 0 | 0 | 0 | 0 | https://api.github.com/repos/huggingface/datasets/issues/7180/reactions | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | null | null | null | null | iamwangyabin | 38,123,329 | MDQ6VXNlcjM4MTIzMzI5 | https://avatars.githubusercontent.com/u/38123329?v=4 | https://api.github.com/users/iamwangyabin | https://github.com/iamwangyabin | https://api.github.com/users/iamwangyabin/followers | https://api.github.com/users/iamwangyabin/following{/other_user} | https://api.github.com/users/iamwangyabin/gists{/gist_id} | https://api.github.com/users/iamwangyabin/starred{/owner}{/repo} | https://api.github.com/users/iamwangyabin/subscriptions | https://api.github.com/users/iamwangyabin/orgs | https://api.github.com/users/iamwangyabin/repos | https://api.github.com/users/iamwangyabin/events{/privacy} | https://api.github.com/users/iamwangyabin/received_events | User | public | false | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null 
| null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | false | [
"> I've encountered a memory leak when wrapping the HuggingFace dataset into a PyTorch Dataset. The RAM usage constantly increases during iteration if items are not explicitly deleted after use.\r\n\r\nDatasets are memory mapped so they work like SWAP memory. In particular as long as you have RAM available the data... | ||
https://api.github.com/repos/huggingface/datasets/issues/7178 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7178/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7178/comments | https://api.github.com/repos/huggingface/datasets/issues/7178/events | https://github.com/huggingface/datasets/issues/7178 | 2,552,378,330 | I_kwDODunzps6YIjPa | 7,178 | Support Python 3.11 | [
{
"id": 1935892871,
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement",
"name": "enhancement",
"color": "a2eeef",
"default": true,
"description": "New feature or request"
}
] | closed | false | null | [
{
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_... | null | 0 | 1,727 | 1,728 | 1,728 | MEMBER | null | null | Support Python 3.11: https://peps.python.org/pep-0664/ | null | https://api.github.com/repos/huggingface/datasets/issues/7178/timeline | null | completed | albertvillanova | 8,515,462 | MDQ6VXNlcjg1MTU0NjI= | https://avatars.githubusercontent.com/u/8515462?v=4 | https://api.github.com/users/albertvillanova | https://github.com/albertvillanova | https://api.github.com/users/albertvillanova/followers | https://api.github.com/users/albertvillanova/following{/other_user} | https://api.github.com/users/albertvillanova/gists{/gist_id} | https://api.github.com/users/albertvillanova/starred{/owner}{/repo} | https://api.github.com/users/albertvillanova/subscriptions | https://api.github.com/users/albertvillanova/orgs | https://api.github.com/users/albertvillanova/repos | https://api.github.com/users/albertvillanova/events{/privacy} | https://api.github.com/users/albertvillanova/received_events | User | public | false | 0 | 0 | 0 | 0 | 0 | 0 | 0 | https://api.github.com/repos/huggingface/datasets/issues/7178/reactions | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | null | null | null | null | albertvillanova | 8,515,462 | MDQ6VXNlcjg1MTU0NjI= | https://avatars.githubusercontent.com/u/8515462?v=4 | https://api.github.com/users/albertvillanova | https://github.com/albertvillanova | https://api.github.com/users/albertvillanova/followers | https://api.github.com/users/albertvillanova/following{/other_user} | https://api.github.com/users/albertvillanova/gists{/gist_id} | https://api.github.com/users/albertvillanova/starred{/owner}{/repo} | https://api.github.com/users/albertvillanova/subscriptions | https://api.github.com/users/albertvillanova/orgs | https://api.github.com/users/albertvillanova/repos | https://api.github.com/users/albertvillanova/events{/privacy} | https://api.github.com/users/albertvillanova/received_events | User | public | false | albertvillanova | 8,515,462 | MDQ6VXNlcjg1MTU0NjI= | 
https://avatars.githubusercontent.com/u/8515462?v=4 | https://api.github.com/users/albertvillanova | https://github.com/albertvillanova | https://api.github.com/users/albertvillanova/followers | https://api.github.com/users/albertvillanova/following{/other_user} | https://api.github.com/users/albertvillanova/gists{/gist_id} | https://api.github.com/users/albertvillanova/starred{/owner}{/repo} | https://api.github.com/users/albertvillanova/subscriptions | https://api.github.com/users/albertvillanova/orgs | https://api.github.com/users/albertvillanova/repos | https://api.github.com/users/albertvillanova/events{/privacy} | https://api.github.com/users/albertvillanova/received_events | User | public | false | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | false | [] | |||
https://api.github.com/repos/huggingface/datasets/issues/7175 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7175/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7175/comments | https://api.github.com/repos/huggingface/datasets/issues/7175/events | https://github.com/huggingface/datasets/issues/7175 | 2,550,957,337 | I_kwDODunzps6YDIUZ | 7,175 | [FSTimeoutError] load_dataset | [] | closed | false | null | [] | null | 7 | 1,727 | 1,738 | 1,727 | NONE | null | null | ### Describe the bug
When using `load_dataset` to load [HuggingFaceM4/VQAv2](https://huggingface.co/datasets/HuggingFaceM4/VQAv2), I am getting an `FSTimeoutError`.
### Error
```
TimeoutError:
The above exception was the direct cause of the following exception:
FSTimeoutError Traceback (most recent call last)
[/usr/local/lib/python3.10/dist-packages/fsspec/asyn.py](https://klh9mr78js-496ff2e9c6d22116-0-colab.googleusercontent.com/outputframe.html?vrz=colab_20240924-060116_RC00_678132060#) in sync(loop, func, timeout, *args, **kwargs)
99 if isinstance(return_result, asyncio.TimeoutError):
100 # suppress asyncio.TimeoutError, raise FSTimeoutError
--> 101 raise FSTimeoutError from return_result
102 elif isinstance(return_result, BaseException):
103 raise return_result
FSTimeoutError:
```
It usually fails around 5-6 GB.
<img width="847" alt="Screenshot 2024-09-26 at 9 10 19 PM" src="https://github.com/user-attachments/assets/ff91995a-fb55-4de6-8214-94025d6c8470">
### Steps to reproduce the bug
To reproduce it, run this in colab notebook:
```
!pip install -q -U datasets
from datasets import load_dataset
ds = load_dataset('HuggingFaceM4/VQAv2', split="train[:10%]")
```
### Expected behavior
It should download properly.
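A workaround sometimes suggested for `FSTimeoutError` (an assumption on my part, not something confirmed in this thread) is to raise the HTTP client timeout via `storage_options`, which `load_dataset` forwards to fsspec's HTTP filesystem:

```python
import aiohttp

# Hypothetical: allow up to one hour per HTTP transfer instead of fsspec's default
storage_options = {"client_kwargs": {"timeout": aiohttp.ClientTimeout(total=3600)}}

# from datasets import load_dataset
# ds = load_dataset("HuggingFaceM4/VQAv2", split="train[:10%]",
#                   storage_options=storage_options)
```

If the download still stalls, the timeout value may simply need to be larger than the slowest single-file transfer.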
### Environment info
Using Colab Notebook. | null | https://api.github.com/repos/huggingface/datasets/issues/7175/timeline | null | completed | cosmo3769 | 53,268,607 | MDQ6VXNlcjUzMjY4NjA3 | https://avatars.githubusercontent.com/u/53268607?v=4 | https://api.github.com/users/cosmo3769 | https://github.com/cosmo3769 | https://api.github.com/users/cosmo3769/followers | https://api.github.com/users/cosmo3769/following{/other_user} | https://api.github.com/users/cosmo3769/gists{/gist_id} | https://api.github.com/users/cosmo3769/starred{/owner}{/repo} | https://api.github.com/users/cosmo3769/subscriptions | https://api.github.com/users/cosmo3769/orgs | https://api.github.com/users/cosmo3769/repos | https://api.github.com/users/cosmo3769/events{/privacy} | https://api.github.com/users/cosmo3769/received_events | User | public | false | 0 | 0 | 0 | 0 | 0 | 0 | 0 | https://api.github.com/repos/huggingface/datasets/issues/7175/reactions | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | null | null | null | null | cosmo3769 | 53,268,607 | MDQ6VXNlcjUzMjY4NjA3 | https://avatars.githubusercontent.com/u/53268607?v=4 | https://api.github.com/users/cosmo3769 | https://github.com/cosmo3769 | https://api.github.com/users/cosmo3769/followers | https://api.github.com/users/cosmo3769/following{/other_user} | https://api.github.com/users/cosmo3769/gists{/gist_id} | https://api.github.com/users/cosmo3769/starred{/owner}{/repo} | https://api.github.com/users/cosmo3769/subscriptions | https://api.github.com/users/cosmo3769/orgs | https://api.github.com/users/cosmo3769/repos | https://api.github.com/users/cosmo3769/events{/privacy} | https://api.github.com/users/cosmo3769/received_events | User | public | false | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 
null | null | null | null | null | null | null | null | null | null | false | [
"Is this `FSTimeoutError` due to download network issue from remote resource (from where it is being accessed)?",
"It seems to happen for all datasets, not just a specific one, and especially for versions after 3.0. (3.0.0, 3.0.1 have this problem)\r\n\r\nI had the same error on a different dataset, but after dow... | ||
https://api.github.com/repos/huggingface/datasets/issues/7171 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7171/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7171/comments | https://api.github.com/repos/huggingface/datasets/issues/7171/events | https://github.com/huggingface/datasets/issues/7171 | 2,549,738,919 | I_kwDODunzps6X-e2n | 7,171 | CI is broken: No solution found when resolving dependencies | [
{
"id": 1935892857,
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | null | [
{
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_... | null | 0 | 1,727 | 1,727 | 1,727 | MEMBER | null | null | See: https://github.com/huggingface/datasets/actions/runs/11046967444/job/30687294297
```
Run uv pip install --system -r additional-tests-requirements.txt --no-deps
× No solution found when resolving dependencies:
╰─▶ Because the current Python version (3.8.18) does not satisfy Python>=3.9
and torchdata==0.10.0a0+1a98f21 depends on Python>=3.9, we can conclude
that torchdata==0.10.0a0+1a98f21 cannot be used.
And because only torchdata==0.10.0a0+1a98f21 is available and
you require torchdata, we can conclude that your requirements are
unsatisfiable.
Error: Process completed with exit code 1.
``` | null | https://api.github.com/repos/huggingface/datasets/issues/7171/timeline | null | completed | albertvillanova | 8,515,462 | MDQ6VXNlcjg1MTU0NjI= | https://avatars.githubusercontent.com/u/8515462?v=4 | https://api.github.com/users/albertvillanova | https://github.com/albertvillanova | https://api.github.com/users/albertvillanova/followers | https://api.github.com/users/albertvillanova/following{/other_user} | https://api.github.com/users/albertvillanova/gists{/gist_id} | https://api.github.com/users/albertvillanova/starred{/owner}{/repo} | https://api.github.com/users/albertvillanova/subscriptions | https://api.github.com/users/albertvillanova/orgs | https://api.github.com/users/albertvillanova/repos | https://api.github.com/users/albertvillanova/events{/privacy} | https://api.github.com/users/albertvillanova/received_events | User | public | false | 0 | 0 | 0 | 0 | 0 | 0 | 0 | https://api.github.com/repos/huggingface/datasets/issues/7171/reactions | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | null | null | null | null | albertvillanova | 8,515,462 | MDQ6VXNlcjg1MTU0NjI= | https://avatars.githubusercontent.com/u/8515462?v=4 | https://api.github.com/users/albertvillanova | https://github.com/albertvillanova | https://api.github.com/users/albertvillanova/followers | https://api.github.com/users/albertvillanova/following{/other_user} | https://api.github.com/users/albertvillanova/gists{/gist_id} | https://api.github.com/users/albertvillanova/starred{/owner}{/repo} | https://api.github.com/users/albertvillanova/subscriptions | https://api.github.com/users/albertvillanova/orgs | https://api.github.com/users/albertvillanova/repos | https://api.github.com/users/albertvillanova/events{/privacy} | https://api.github.com/users/albertvillanova/received_events | User | public | false | albertvillanova | 8,515,462 | MDQ6VXNlcjg1MTU0NjI= | https://avatars.githubusercontent.com/u/8515462?v=4 | https://api.github.com/users/albertvillanova | 
https://github.com/albertvillanova | https://api.github.com/users/albertvillanova/followers | https://api.github.com/users/albertvillanova/following{/other_user} | https://api.github.com/users/albertvillanova/gists{/gist_id} | https://api.github.com/users/albertvillanova/starred{/owner}{/repo} | https://api.github.com/users/albertvillanova/subscriptions | https://api.github.com/users/albertvillanova/orgs | https://api.github.com/users/albertvillanova/repos | https://api.github.com/users/albertvillanova/events{/privacy} | https://api.github.com/users/albertvillanova/received_events | User | public | false | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | false | [] | |||
https://api.github.com/repos/huggingface/datasets/issues/7169 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7169/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7169/comments | https://api.github.com/repos/huggingface/datasets/issues/7169/events | https://github.com/huggingface/datasets/issues/7169 | 2,546,894,076 | I_kwDODunzps6XzoT8 | 7,169 | JSON lines with missing columns raise CastError | [
{
"id": 1935892857,
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | null | [
{
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_... | null | 0 | 1,727 | 1,727 | 1,727 | MEMBER | null | null | JSON lines with missing columns raise CastError:
> CastError: Couldn't cast ... to ... because column names don't match
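A minimal repro sketch (the file name and column names are illustrative, not taken from the linked issues): write JSON Lines where one line lacks a column, then load them with the `json` builder.

```python
import json
import os
import tempfile

# The second line is missing the "b" column
rows = [{"a": 1, "b": 2}, {"a": 3}]

path = os.path.join(tempfile.mkdtemp(), "data.jsonl")
with open(path, "w") as f:
    for row in rows:
        f.write(json.dumps(row) + "\n")

# from datasets import load_dataset
# load_dataset("json", data_files=path)  # raised CastError before the fix
```

The expected behavior after the fix is for the missing column to be filled with nulls instead of raising.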
Related to:
- #7159
- #7161 | null | https://api.github.com/repos/huggingface/datasets/issues/7169/timeline | null | completed | albertvillanova | 8,515,462 | MDQ6VXNlcjg1MTU0NjI= | https://avatars.githubusercontent.com/u/8515462?v=4 | https://api.github.com/users/albertvillanova | https://github.com/albertvillanova | https://api.github.com/users/albertvillanova/followers | https://api.github.com/users/albertvillanova/following{/other_user} | https://api.github.com/users/albertvillanova/gists{/gist_id} | https://api.github.com/users/albertvillanova/starred{/owner}{/repo} | https://api.github.com/users/albertvillanova/subscriptions | https://api.github.com/users/albertvillanova/orgs | https://api.github.com/users/albertvillanova/repos | https://api.github.com/users/albertvillanova/events{/privacy} | https://api.github.com/users/albertvillanova/received_events | User | public | false | 0 | 0 | 0 | 0 | 0 | 0 | 0 | https://api.github.com/repos/huggingface/datasets/issues/7169/reactions | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | null | null | null | null | albertvillanova | 8,515,462 | MDQ6VXNlcjg1MTU0NjI= | https://avatars.githubusercontent.com/u/8515462?v=4 | https://api.github.com/users/albertvillanova | https://github.com/albertvillanova | https://api.github.com/users/albertvillanova/followers | https://api.github.com/users/albertvillanova/following{/other_user} | https://api.github.com/users/albertvillanova/gists{/gist_id} | https://api.github.com/users/albertvillanova/starred{/owner}{/repo} | https://api.github.com/users/albertvillanova/subscriptions | https://api.github.com/users/albertvillanova/orgs | https://api.github.com/users/albertvillanova/repos | https://api.github.com/users/albertvillanova/events{/privacy} | https://api.github.com/users/albertvillanova/received_events | User | public | false | albertvillanova | 8,515,462 | MDQ6VXNlcjg1MTU0NjI= | https://avatars.githubusercontent.com/u/8515462?v=4 | https://api.github.com/users/albertvillanova | 
https://github.com/albertvillanova | https://api.github.com/users/albertvillanova/followers | https://api.github.com/users/albertvillanova/following{/other_user} | https://api.github.com/users/albertvillanova/gists{/gist_id} | https://api.github.com/users/albertvillanova/starred{/owner}{/repo} | https://api.github.com/users/albertvillanova/subscriptions | https://api.github.com/users/albertvillanova/orgs | https://api.github.com/users/albertvillanova/repos | https://api.github.com/users/albertvillanova/events{/privacy} | https://api.github.com/users/albertvillanova/received_events | User | public | false | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | false | [] | |||
https://api.github.com/repos/huggingface/datasets/issues/7168 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7168/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7168/comments | https://api.github.com/repos/huggingface/datasets/issues/7168/events | https://github.com/huggingface/datasets/issues/7168 | 2,546,710,631 | I_kwDODunzps6Xy7hn | 7,168 | sd1.5 diffusers controlnet training script gives new error | [] | closed | false | null | [] | null | 5 | 1,727 | 1,758 | 1,727 | NONE | null | null | ### Describe the bug
This error now pops up randomly during training:
```
Traceback (most recent call last):
File "/workspace/diffusers/examples/controlnet/train_controlnet.py", line 1192, in <module>
main(args)
File "/workspace/diffusers/examples/controlnet/train_controlnet.py", line 1041, in main
for step, batch in enumerate(train_dataloader):
File "/usr/local/lib/python3.11/dist-packages/accelerate/data_loader.py", line 561, in __iter__
next_batch = next(dataloader_iter)
^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.11/dist-packages/torch/utils/data/dataloader.py", line 630, in __next__
data = self._next_data()
^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.11/dist-packages/torch/utils/data/dataloader.py", line 673, in _next_data
data = self._dataset_fetcher.fetch(index) # may raise StopIteration
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.11/dist-packages/torch/utils/data/_utils/fetch.py", line 50, in fetch
data = self.dataset.__getitems__(possibly_batched_index)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.11/dist-packages/datasets/arrow_dataset.py", line 2746, in __getitems__
batch = self.__getitem__(keys)
^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.11/dist-packages/datasets/arrow_dataset.py", line 2742, in __getitem__
return self._getitem(key)
^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.11/dist-packages/datasets/arrow_dataset.py", line 2727, in _getitem
formatted_output = format_table(
^^^^^^^^^^^^^
File "/usr/local/lib/python3.11/dist-packages/datasets/formatting/formatting.py", line 639, in format_table
return formatter(pa_table, query_type=query_type)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.11/dist-packages/datasets/formatting/formatting.py", line 407, in __call__
return self.format_batch(pa_table)
^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.11/dist-packages/datasets/formatting/formatting.py", line 521, in format_batch
batch = self.python_features_decoder.decode_batch(batch)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.11/dist-packages/datasets/formatting/formatting.py", line 228, in decode_batch
return self.features.decode_batch(batch) if self.features else batch
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.11/dist-packages/datasets/features/features.py", line 2084, in decode_batch
[
File "/usr/local/lib/python3.11/dist-packages/datasets/features/features.py", line 2085, in <listcomp>
decode_nested_example(self[column_name], value, token_per_repo_id=token_per_repo_id)
File "/usr/local/lib/python3.11/dist-packages/datasets/features/features.py", line 1403, in decode_nested_example
return schema.decode_example(obj, token_per_repo_id=token_per_repo_id)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.11/dist-packages/datasets/features/image.py", line 188, in decode_example
image.load() # to avoid "Too many open files" errors
```
### Steps to reproduce the bug
Train with the diffusers sd1.5 controlnet example script.
The error pops up randomly; in the wandb chart below, you can see where I manually resumed the run each time it appeared.

### Expected behavior
Training should continue without the above error.
### Environment info
- datasets version: 3.0.0
- Platform: Linux-6.5.0-44-generic-x86_64-with-glibc2.35
- Python version: 3.11.9
- huggingface_hub version: 0.25.1
- PyArrow version: 17.0.0
- Pandas version: 2.2.3
- fsspec version: 2024.6.1
Training on 4090 | null | https://api.github.com/repos/huggingface/datasets/issues/7168/timeline | null | completed | Night1099 | 90,132,896 | MDQ6VXNlcjkwMTMyODk2 | https://avatars.githubusercontent.com/u/90132896?v=4 | https://api.github.com/users/Night1099 | https://github.com/Night1099 | https://api.github.com/users/Night1099/followers | https://api.github.com/users/Night1099/following{/other_user} | https://api.github.com/users/Night1099/gists{/gist_id} | https://api.github.com/users/Night1099/starred{/owner}{/repo} | https://api.github.com/users/Night1099/subscriptions | https://api.github.com/users/Night1099/orgs | https://api.github.com/users/Night1099/repos | https://api.github.com/users/Night1099/events{/privacy} | https://api.github.com/users/Night1099/received_events | User | public | false | 0 | 0 | 0 | 0 | 0 | 0 | 0 | https://api.github.com/repos/huggingface/datasets/issues/7168/reactions | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | null | null | null | null | Night1099 | 90,132,896 | MDQ6VXNlcjkwMTMyODk2 | https://avatars.githubusercontent.com/u/90132896?v=4 | https://api.github.com/users/Night1099 | https://github.com/Night1099 | https://api.github.com/users/Night1099/followers | https://api.github.com/users/Night1099/following{/other_user} | https://api.github.com/users/Night1099/gists{/gist_id} | https://api.github.com/users/Night1099/starred{/owner}{/repo} | https://api.github.com/users/Night1099/subscriptions | https://api.github.com/users/Night1099/orgs | https://api.github.com/users/Night1099/repos | https://api.github.com/users/Night1099/events{/privacy} | https://api.github.com/users/Night1099/received_events | User | public | false | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null 
| null | null | null | null | null | null | null | null | null | false | [
"not sure why the issue is formatting oddly",
"I guess this is a dupe of\r\n\r\nhttps://github.com/huggingface/datasets/issues/7071",
"this turned out to be because of a bad image in dataset",
"@Night1099 could you spiecify what exactly was wrong with your image in the dataset? I think im facing the same issu... | ||
https://api.github.com/repos/huggingface/datasets/issues/7167 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7167/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7167/comments | https://api.github.com/repos/huggingface/datasets/issues/7167/events | https://github.com/huggingface/datasets/issues/7167 | 2,546,708,014 | I_kwDODunzps6Xy64u | 7,167 | Error Mapping on sd3, sdxl and upcoming flux controlnet training scripts in diffusers | [] | closed | false | null | [] | null | 1 | 1,727 | 1,727 | 1,727 | NONE | null | null | ### Describe the bug
```
Map: 6%|██████ | 8000/138120 [19:27<5:16:36, 6.85 examples/s]
Traceback (most recent call last):
File "/workspace/diffusers/examples/controlnet/train_controlnet_sd3.py", line 1416, in <module>
main(args)
File "/workspace/diffusers/examples/controlnet/train_controlnet_sd3.py", line 1132, in main
train_dataset = train_dataset.map(compute_embeddings_fn, batched=True, new_fingerprint=new_fingerprint)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.11/dist-packages/datasets/arrow_dataset.py", line 560, in wrapper
out: Union["Dataset", "DatasetDict"] = func(self, *args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.11/dist-packages/datasets/arrow_dataset.py", line 3035, in map
for rank, done, content in Dataset._map_single(**dataset_kwargs):
File "/usr/local/lib/python3.11/dist-packages/datasets/arrow_dataset.py", line 3461, in _map_single
writer.write_batch(batch)
File "/usr/local/lib/python3.11/dist-packages/datasets/arrow_writer.py", line 567, in write_batch
self.write_table(pa_table, writer_batch_size)
File "/usr/local/lib/python3.11/dist-packages/datasets/arrow_writer.py", line 579, in write_table
pa_table = pa_table.combine_chunks()
^^^^^^^^^^^^^^^^^^^^^^^^^
File "pyarrow/table.pxi", line 4387, in pyarrow.lib.Table.combine_chunks
File "pyarrow/error.pxi", line 155, in pyarrow.lib.pyarrow_internal_check_status
File "pyarrow/error.pxi", line 92, in pyarrow.lib.check_status
pyarrow.lib.ArrowInvalid: offset overflow while concatenating arrays
Traceback (most recent call last):
File "/usr/local/bin/accelerate", line 8, in <module>
sys.exit(main())
^^^^^^
File "/usr/local/lib/python3.11/dist-packages/accelerate/commands/accelerate_cli.py", line 48, in main
args.func(args)
File "/usr/local/lib/python3.11/dist-packages/accelerate/commands/launch.py", line 1174, in launch_command
simple_launcher(args)
File "/usr/local/lib/python3.11/dist-packages/accelerate/commands/launch.py", line 769, in simple_launcher
```
### Steps to reproduce the bug
The same dataset trains without problems with the sd1.5 controlnet training script.
### Expected behavior
The script should not randomly fail with the error above.
### Environment info
- `datasets` version: 3.0.0
- Platform: Linux-6.5.0-44-generic-x86_64-with-glibc2.35
- Python version: 3.11.9
- `huggingface_hub` version: 0.25.1
- PyArrow version: 17.0.0
- Pandas version: 2.2.3
- `fsspec` version: 2024.6.1
training on A100 | null | https://api.github.com/repos/huggingface/datasets/issues/7167/timeline | null | completed | Night1099 | 90,132,896 | MDQ6VXNlcjkwMTMyODk2 | https://avatars.githubusercontent.com/u/90132896?v=4 | https://api.github.com/users/Night1099 | https://github.com/Night1099 | https://api.github.com/users/Night1099/followers | https://api.github.com/users/Night1099/following{/other_user} | https://api.github.com/users/Night1099/gists{/gist_id} | https://api.github.com/users/Night1099/starred{/owner}{/repo} | https://api.github.com/users/Night1099/subscriptions | https://api.github.com/users/Night1099/orgs | https://api.github.com/users/Night1099/repos | https://api.github.com/users/Night1099/events{/privacy} | https://api.github.com/users/Night1099/received_events | User | public | false | 0 | 0 | 0 | 0 | 0 | 0 | 0 | https://api.github.com/repos/huggingface/datasets/issues/7167/reactions | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | null | null | null | null | Night1099 | 90,132,896 | MDQ6VXNlcjkwMTMyODk2 | https://avatars.githubusercontent.com/u/90132896?v=4 | https://api.github.com/users/Night1099 | https://github.com/Night1099 | https://api.github.com/users/Night1099/followers | https://api.github.com/users/Night1099/following{/other_user} | https://api.github.com/users/Night1099/gists{/gist_id} | https://api.github.com/users/Night1099/starred{/owner}{/repo} | https://api.github.com/users/Night1099/subscriptions | https://api.github.com/users/Night1099/orgs | https://api.github.com/users/Night1099/repos | https://api.github.com/users/Night1099/events{/privacy} | https://api.github.com/users/Night1099/received_events | User | public | false | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null 
| null | null | null | null | null | null | null | null | null | false | [
"this is happening on large datasets, if anyone happens upon this i was able to fix by changing\r\n\r\n```\r\ntrain_dataset = train_dataset.map(compute_embeddings_fn, batched=True, new_fingerprint=new_fingerprint)\r\n```\r\n\r\nto\r\n\r\n```\r\ntrain_dataset = train_dataset.map(compute_embeddings_fn, batched=True, ... | ||
https://api.github.com/repos/huggingface/datasets/issues/7164 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7164/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7164/comments | https://api.github.com/repos/huggingface/datasets/issues/7164/events | https://github.com/huggingface/datasets/issues/7164 | 2,544,757,297 | I_kwDODunzps6Xreox | 7,164 | fsspec.exceptions.FSTimeoutError when downloading dataset | [] | closed | false | null | [] | null | 7 | 1,727 | 1,753 | 1,753 | NONE | null | null | ### Describe the bug
I am trying to download the `librispeech_asr` `clean` dataset, which results in an `FSTimeoutError` exception after downloading around 61% of the data.
### Steps to reproduce the bug
```python
import datasets
datasets.load_dataset("librispeech_asr", "clean")
```
The output is as follows:
> Downloading data: 61%|██████████████▋ | 3.92G/6.39G [05:00<03:06, 13.2MB/s]Traceback (most recent call last):
> File "/Users/Timon/Documents/iEEG_deeplearning/wav2vec_pretrain/.venv/lib/python3.12/site-packages/fsspec/asyn.py", line 56, in _runner
> result[0] = await coro
> ^^^^^^^^^^
> File "/Users/Timon/Documents/iEEG_deeplearning/wav2vec_pretrain/.venv/lib/python3.12/site-packages/fsspec/implementations/http.py", line 262, in _get_file
> chunk = await r.content.read(chunk_size)
> ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
> File "/Users/Timon/Documents/iEEG_deeplearning/wav2vec_pretrain/.venv/lib/python3.12/site-packages/aiohttp/streams.py", line 393, in read
> await self._wait("read")
> File "/Users/Timon/Documents/iEEG_deeplearning/wav2vec_pretrain/.venv/lib/python3.12/site-packages/aiohttp/streams.py", line 311, in _wait
> with self._timer:
> ^^^^^^^^^^^
> File "/Users/Timon/Documents/iEEG_deeplearning/wav2vec_pretrain/.venv/lib/python3.12/site-packages/aiohttp/helpers.py", line 713, in __exit__
> raise asyncio.TimeoutError from None
> TimeoutError
>
> The above exception was the direct cause of the following exception:
>
> Traceback (most recent call last):
> File "/Users/Timon/Documents/iEEG_deeplearning/wav2vec_pretrain/load_dataset.py", line 3, in <module>
> datasets.load_dataset("librispeech_asr", "clean")
> File "/Users/Timon/Documents/iEEG_deeplearning/wav2vec_pretrain/.venv/lib/python3.12/site-packages/datasets/load.py", line 2096, in load_dataset
> builder_instance.download_and_prepare(
> File "/Users/Timon/Documents/iEEG_deeplearning/wav2vec_pretrain/.venv/lib/python3.12/site-packages/datasets/builder.py", line 924, in download_and_prepare
> self._download_and_prepare(
> File "/Users/Timon/Documents/iEEG_deeplearning/wav2vec_pretrain/.venv/lib/python3.12/site-packages/datasets/builder.py", line 1647, in _download_and_prepare
> super()._download_and_prepare(
> File "/Users/Timon/Documents/iEEG_deeplearning/wav2vec_pretrain/.venv/lib/python3.12/site-packages/datasets/builder.py", line 977, in _download_and_prepare
> split_generators = self._split_generators(dl_manager, **split_generators_kwargs)
> ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
> File "/Users/Timon/.cache/huggingface/modules/datasets_modules/datasets/librispeech_asr/2712a8f82f0d20807a56faadcd08734f9bdd24c850bb118ba21ff33ebff0432f/librispeech_asr.py", line 115, in _split_generators
> archive_path = dl_manager.download(_DL_URLS[self.config.name])
> ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
> File "/Users/Timon/Documents/iEEG_deeplearning/wav2vec_pretrain/.venv/lib/python3.12/site-packages/datasets/download/download_manager.py", line 159, in download
> downloaded_path_or_paths = map_nested(
> ^^^^^^^^^^^
> File "/Users/Timon/Documents/iEEG_deeplearning/wav2vec_pretrain/.venv/lib/python3.12/site-packages/datasets/utils/py_utils.py", line 512, in map_nested
> _single_map_nested((function, obj, batched, batch_size, types, None, True, None))
> File "/Users/Timon/Documents/iEEG_deeplearning/wav2vec_pretrain/.venv/lib/python3.12/site-packages/datasets/utils/py_utils.py", line 380, in _single_map_nested
> return [mapped_item for batch in iter_batched(data_struct, batch_size) for mapped_item in function(batch)]
> ^^^^^^^^^^^^^^^
> File "/Users/Timon/Documents/iEEG_deeplearning/wav2vec_pretrain/.venv/lib/python3.12/site-packages/datasets/download/download_manager.py", line 216, in _download_batched
> self._download_single(url_or_filename, download_config=download_config)
> File "/Users/Timon/Documents/iEEG_deeplearning/wav2vec_pretrain/.venv/lib/python3.12/site-packages/datasets/download/download_manager.py", line 225, in _download_single
> out = cached_path(url_or_filename, download_config=download_config)
> ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
> File "/Users/Timon/Documents/iEEG_deeplearning/wav2vec_pretrain/.venv/lib/python3.12/site-packages/datasets/utils/file_utils.py", line 205, in cached_path
> output_path = get_from_cache(
> ^^^^^^^^^^^^^^^
> File "/Users/Timon/Documents/iEEG_deeplearning/wav2vec_pretrain/.venv/lib/python3.12/site-packages/datasets/utils/file_utils.py", line 415, in get_from_cache
> fsspec_get(url, temp_file, storage_options=storage_options, desc=download_desc, disable_tqdm=disable_tqdm)
> File "/Users/Timon/Documents/iEEG_deeplearning/wav2vec_pretrain/.venv/lib/python3.12/site-packages/datasets/utils/file_utils.py", line 334, in fsspec_get
> fs.get_file(path, temp_file.name, callback=callback)
> File "/Users/Timon/Documents/iEEG_deeplearning/wav2vec_pretrain/.venv/lib/python3.12/site-packages/fsspec/asyn.py", line 118, in wrapper
> return sync(self.loop, func, *args, **kwargs)
> ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
> File "/Users/Timon/Documents/iEEG_deeplearning/wav2vec_pretrain/.venv/lib/python3.12/site-packages/fsspec/asyn.py", line 101, in sync
> raise FSTimeoutError from return_result
> fsspec.exceptions.FSTimeoutError
> Downloading data: 61%|██████████████▋ | 3.92G/6.39G [05:00<03:09, 13.0MB/s]
### Expected behavior
Complete the download
### Environment info
Python version 3.12.6
Dependencies:
> dependencies = [
> "accelerate>=0.34.2",
> "datasets[audio]>=3.0.0",
> "ipython>=8.18.1",
> "librosa>=0.10.2.post1",
> "torch>=2.4.1",
> "torchaudio>=2.4.1",
> "transformers>=4.44.2",
> ]
MacOS 14.6.1 (23G93) | null | https://api.github.com/repos/huggingface/datasets/issues/7164/timeline | null | completed | timonmerk | 38,216,460 | MDQ6VXNlcjM4MjE2NDYw | https://avatars.githubusercontent.com/u/38216460?v=4 | https://api.github.com/users/timonmerk | https://github.com/timonmerk | https://api.github.com/users/timonmerk/followers | https://api.github.com/users/timonmerk/following{/other_user} | https://api.github.com/users/timonmerk/gists{/gist_id} | https://api.github.com/users/timonmerk/starred{/owner}{/repo} | https://api.github.com/users/timonmerk/subscriptions | https://api.github.com/users/timonmerk/orgs | https://api.github.com/users/timonmerk/repos | https://api.github.com/users/timonmerk/events{/privacy} | https://api.github.com/users/timonmerk/received_events | User | public | false | 0 | 0 | 0 | 0 | 0 | 0 | 0 | https://api.github.com/repos/huggingface/datasets/issues/7164/reactions | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | null | null | null | null | lhoestq | 42,851,186 | MDQ6VXNlcjQyODUxMTg2 | https://avatars.githubusercontent.com/u/42851186?v=4 | https://api.github.com/users/lhoestq | https://github.com/lhoestq | https://api.github.com/users/lhoestq/followers | https://api.github.com/users/lhoestq/following{/other_user} | https://api.github.com/users/lhoestq/gists{/gist_id} | https://api.github.com/users/lhoestq/starred{/owner}{/repo} | https://api.github.com/users/lhoestq/subscriptions | https://api.github.com/users/lhoestq/orgs | https://api.github.com/users/lhoestq/repos | https://api.github.com/users/lhoestq/events{/privacy} | https://api.github.com/users/lhoestq/received_events | User | public | false | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 
null | null | null | null | null | null | null | false | [
"Hi ! If you check the dataset loading script [here](https://huggingface.co/datasets/openslr/librispeech_asr/blob/main/librispeech_asr.py) you'll see that it downloads the data from OpenSLR, and apparently their storage has timeout issues. It would be great to ultimately host the dataset on Hugging Face instead.\r\... | ||
https://api.github.com/repos/huggingface/datasets/issues/7163 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7163/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7163/comments | https://api.github.com/repos/huggingface/datasets/issues/7163/events | https://github.com/huggingface/datasets/issues/7163 | 2,542,361,234 | I_kwDODunzps6XiVqS | 7,163 | Set explicit seed in iterable dataset ddp shuffling example | [] | closed | false | null | [] | null | 1 | 1,727 | 1,727 | 1,727 | CONTRIBUTOR | null | null | ### Describe the bug
In the examples section of the iterable dataset docs https://huggingface.co/docs/datasets/en/package_reference/main_classes#datasets.IterableDataset
the DDP example shuffles without seeding:
```python
from datasets.distributed import split_dataset_by_node
ids = ds.to_iterable_dataset(num_shards=512)
ids = ids.shuffle(buffer_size=10_000) # will shuffle the shards order and use a shuffle buffer when you start iterating
ids = split_dataset_by_node(ds, world_size=8, rank=0) # will keep only 512 / 8 = 64 shards from the shuffled lists of shards when you start iterating
dataloader = torch.utils.data.DataLoader(ids, num_workers=4) # will assign 64 / 4 = 16 shards from this node's list of shards to each worker when you start iterating
for example in ids:
    pass
```
This code would, I think, raise an error due to the lack of an explicit seed:
https://github.com/huggingface/datasets/blob/2eb4edb97e1a6af2ea62738ec58afbd3812fc66e/src/datasets/iterable_dataset.py#L1707-L1711
### Steps to reproduce the bug
Run example code
### Expected behavior
Add explicit seeding to example code
### Environment info
latest datasets | null | https://api.github.com/repos/huggingface/datasets/issues/7163/timeline | null | completed | alex-hh | 5,719,745 | MDQ6VXNlcjU3MTk3NDU= | https://avatars.githubusercontent.com/u/5719745?v=4 | https://api.github.com/users/alex-hh | https://github.com/alex-hh | https://api.github.com/users/alex-hh/followers | https://api.github.com/users/alex-hh/following{/other_user} | https://api.github.com/users/alex-hh/gists{/gist_id} | https://api.github.com/users/alex-hh/starred{/owner}{/repo} | https://api.github.com/users/alex-hh/subscriptions | https://api.github.com/users/alex-hh/orgs | https://api.github.com/users/alex-hh/repos | https://api.github.com/users/alex-hh/events{/privacy} | https://api.github.com/users/alex-hh/received_events | User | public | false | 0 | 0 | 0 | 0 | 0 | 0 | 0 | https://api.github.com/repos/huggingface/datasets/issues/7163/reactions | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | null | null | null | null | lhoestq | 42,851,186 | MDQ6VXNlcjQyODUxMTg2 | https://avatars.githubusercontent.com/u/42851186?v=4 | https://api.github.com/users/lhoestq | https://github.com/lhoestq | https://api.github.com/users/lhoestq/followers | https://api.github.com/users/lhoestq/following{/other_user} | https://api.github.com/users/lhoestq/gists{/gist_id} | https://api.github.com/users/lhoestq/starred{/owner}{/repo} | https://api.github.com/users/lhoestq/subscriptions | https://api.github.com/users/lhoestq/orgs | https://api.github.com/users/lhoestq/repos | https://api.github.com/users/lhoestq/events{/privacy} | https://api.github.com/users/lhoestq/received_events | User | public | false | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 
null | null | false | [
"thanks for reporting !"
] | ||
https://api.github.com/repos/huggingface/datasets/issues/7161 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7161/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7161/comments | https://api.github.com/repos/huggingface/datasets/issues/7161/events | https://github.com/huggingface/datasets/issues/7161 | 2,541,971,931 | I_kwDODunzps6Xg2nb | 7,161 | JSON lines with empty struct raise ArrowTypeError | [
{
"id": 1935892857,
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | null | [
{
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_... | null | 0 | 1,727 | 1,727 | 1,727 | MEMBER | null | null | JSON lines with empty struct raise ArrowTypeError: struct fields don't match or are in the wrong order
See example: https://huggingface.co/datasets/wikimedia/structured-wikipedia/discussions/5
> ArrowTypeError: struct fields don't match or are in the wrong order: Input fields: struct<> output fields: struct<pov_count: int64, update_count: int64, citation_needed_count: int64>
Related to:
- #7159 | null | https://api.github.com/repos/huggingface/datasets/issues/7161/timeline | null | completed | albertvillanova | 8,515,462 | MDQ6VXNlcjg1MTU0NjI= | https://avatars.githubusercontent.com/u/8515462?v=4 | https://api.github.com/users/albertvillanova | https://github.com/albertvillanova | https://api.github.com/users/albertvillanova/followers | https://api.github.com/users/albertvillanova/following{/other_user} | https://api.github.com/users/albertvillanova/gists{/gist_id} | https://api.github.com/users/albertvillanova/starred{/owner}{/repo} | https://api.github.com/users/albertvillanova/subscriptions | https://api.github.com/users/albertvillanova/orgs | https://api.github.com/users/albertvillanova/repos | https://api.github.com/users/albertvillanova/events{/privacy} | https://api.github.com/users/albertvillanova/received_events | User | public | false | 0 | 0 | 0 | 0 | 0 | 0 | 0 | https://api.github.com/repos/huggingface/datasets/issues/7161/reactions | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | null | null | null | null | albertvillanova | 8,515,462 | MDQ6VXNlcjg1MTU0NjI= | https://avatars.githubusercontent.com/u/8515462?v=4 | https://api.github.com/users/albertvillanova | https://github.com/albertvillanova | https://api.github.com/users/albertvillanova/followers | https://api.github.com/users/albertvillanova/following{/other_user} | https://api.github.com/users/albertvillanova/gists{/gist_id} | https://api.github.com/users/albertvillanova/starred{/owner}{/repo} | https://api.github.com/users/albertvillanova/subscriptions | https://api.github.com/users/albertvillanova/orgs | https://api.github.com/users/albertvillanova/repos | https://api.github.com/users/albertvillanova/events{/privacy} | https://api.github.com/users/albertvillanova/received_events | User | public | false | albertvillanova | 8,515,462 | MDQ6VXNlcjg1MTU0NjI= | https://avatars.githubusercontent.com/u/8515462?v=4 | https://api.github.com/users/albertvillanova | 
https://github.com/albertvillanova | https://api.github.com/users/albertvillanova/followers | https://api.github.com/users/albertvillanova/following{/other_user} | https://api.github.com/users/albertvillanova/gists{/gist_id} | https://api.github.com/users/albertvillanova/starred{/owner}{/repo} | https://api.github.com/users/albertvillanova/subscriptions | https://api.github.com/users/albertvillanova/orgs | https://api.github.com/users/albertvillanova/repos | https://api.github.com/users/albertvillanova/events{/privacy} | https://api.github.com/users/albertvillanova/received_events | User | public | false | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | false | [] | |||
https://api.github.com/repos/huggingface/datasets/issues/7159 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7159/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7159/comments | https://api.github.com/repos/huggingface/datasets/issues/7159/events | https://github.com/huggingface/datasets/issues/7159 | 2,541,865,613 | I_kwDODunzps6XgcqN | 7,159 | JSON lines with missing struct fields raise TypeError: Couldn't cast array | [
{
"id": 1935892857,
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | null | [
{
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_... | null | 1 | 1,727 | 1,729 | 1,727 | MEMBER | null | null | JSON lines with missing struct fields raise TypeError: Couldn't cast array of type.
See example: https://huggingface.co/datasets/wikimedia/structured-wikipedia/discussions/5
One would expect that the struct missing fields are added with null values. | null | https://api.github.com/repos/huggingface/datasets/issues/7159/timeline | null | completed | albertvillanova | 8,515,462 | MDQ6VXNlcjg1MTU0NjI= | https://avatars.githubusercontent.com/u/8515462?v=4 | https://api.github.com/users/albertvillanova | https://github.com/albertvillanova | https://api.github.com/users/albertvillanova/followers | https://api.github.com/users/albertvillanova/following{/other_user} | https://api.github.com/users/albertvillanova/gists{/gist_id} | https://api.github.com/users/albertvillanova/starred{/owner}{/repo} | https://api.github.com/users/albertvillanova/subscriptions | https://api.github.com/users/albertvillanova/orgs | https://api.github.com/users/albertvillanova/repos | https://api.github.com/users/albertvillanova/events{/privacy} | https://api.github.com/users/albertvillanova/received_events | User | public | false | 0 | 0 | 0 | 0 | 0 | 0 | 0 | https://api.github.com/repos/huggingface/datasets/issues/7159/reactions | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | null | null | null | null | albertvillanova | 8,515,462 | MDQ6VXNlcjg1MTU0NjI= | https://avatars.githubusercontent.com/u/8515462?v=4 | https://api.github.com/users/albertvillanova | https://github.com/albertvillanova | https://api.github.com/users/albertvillanova/followers | https://api.github.com/users/albertvillanova/following{/other_user} | https://api.github.com/users/albertvillanova/gists{/gist_id} | https://api.github.com/users/albertvillanova/starred{/owner}{/repo} | https://api.github.com/users/albertvillanova/subscriptions | https://api.github.com/users/albertvillanova/orgs | https://api.github.com/users/albertvillanova/repos | https://api.github.com/users/albertvillanova/events{/privacy} | https://api.github.com/users/albertvillanova/received_events | User | public | false | albertvillanova | 8,515,462 | MDQ6VXNlcjg1MTU0NjI= | https://avatars.githubusercontent.com/u/8515462?v=4 | 
https://api.github.com/users/albertvillanova | https://github.com/albertvillanova | https://api.github.com/users/albertvillanova/followers | https://api.github.com/users/albertvillanova/following{/other_user} | https://api.github.com/users/albertvillanova/gists{/gist_id} | https://api.github.com/users/albertvillanova/starred{/owner}{/repo} | https://api.github.com/users/albertvillanova/subscriptions | https://api.github.com/users/albertvillanova/orgs | https://api.github.com/users/albertvillanova/repos | https://api.github.com/users/albertvillanova/events{/privacy} | https://api.github.com/users/albertvillanova/received_events | User | public | false | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | false | [
"Hello,\r\n\r\nI have still the same issue when loading the dataset with the new version:\r\n[https://huggingface.co/datasets/wikimedia/structured-wikipedia/discussions/5](https://huggingface.co/datasets/wikimedia/structured-wikipedia/discussions/5)\r\n\r\nI have downloaded and unzipped the wikimedia/structured-wik... | |||
https://api.github.com/repos/huggingface/datasets/issues/7156 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7156/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7156/comments | https://api.github.com/repos/huggingface/datasets/issues/7156/events | https://github.com/huggingface/datasets/issues/7156 | 2,539,360,617 | I_kwDODunzps6XW5Fp | 7,156 | interleave_datasets resets shuffle state | [] | open | false | null | [] | null | 1 | 1,726 | 1,742 | null | NONE | null | null | ### Describe the bug
```python
import datasets
import torch.utils.data


def gen(shards):
    yield {"shards": shards}


def main():
    dataset = datasets.IterableDataset.from_generator(
        gen,
        gen_kwargs={'shards': list(range(25))}
    )
    dataset = dataset.shuffle(buffer_size=1)
    dataset = datasets.interleave_datasets(
        [dataset, dataset], probabilities=[1, 0], stopping_strategy="all_exhausted"
    )
    dataloader = torch.utils.data.DataLoader(
        dataset,
        batch_size=8,
        num_workers=8,
    )
    for i, batch in enumerate(dataloader):
        print(batch)
        if i >= 10:
            break


if __name__ == "__main__":
    main()
```
### Steps to reproduce the bug
Run the script, it will output
```
{'shards': [tensor([ 0, 8, 16, 24, 0, 8, 16, 24])]}
{'shards': [tensor([ 1, 9, 17, 1, 9, 17, 1, 9])]}
{'shards': [tensor([ 2, 10, 18, 2, 10, 18, 2, 10])]}
{'shards': [tensor([ 3, 11, 19, 3, 11, 19, 3, 11])]}
{'shards': [tensor([ 4, 12, 20, 4, 12, 20, 4, 12])]}
{'shards': [tensor([ 5, 13, 21, 5, 13, 21, 5, 13])]}
{'shards': [tensor([ 6, 14, 22, 6, 14, 22, 6, 14])]}
{'shards': [tensor([ 7, 15, 23, 7, 15, 23, 7, 15])]}
{'shards': [tensor([ 0, 8, 16, 24, 0, 8, 16, 24])]}
{'shards': [tensor([17, 1, 9, 17, 1, 9, 17, 1])]}
{'shards': [tensor([18, 2, 10, 18, 2, 10, 18, 2])]}
```
### Expected behavior
The shards should be shuffled.
### Environment info
- `datasets` version: 3.0.0
- Platform: Linux-5.15.153.1-microsoft-standard-WSL2-x86_64-with-glibc2.35
- Python version: 3.10.12
- `huggingface_hub` version: 0.25.0
- PyArrow version: 17.0.0
- Pandas version: 2.0.3
- `fsspec` version: 2023.6.0 | null | https://api.github.com/repos/huggingface/datasets/issues/7156/timeline | null | null | jonathanasdf | 511,073 | MDQ6VXNlcjUxMTA3Mw== | https://avatars.githubusercontent.com/u/511073?v=4 | https://api.github.com/users/jonathanasdf | https://github.com/jonathanasdf | https://api.github.com/users/jonathanasdf/followers | https://api.github.com/users/jonathanasdf/following{/other_user} | https://api.github.com/users/jonathanasdf/gists{/gist_id} | https://api.github.com/users/jonathanasdf/starred{/owner}{/repo} | https://api.github.com/users/jonathanasdf/subscriptions | https://api.github.com/users/jonathanasdf/orgs | https://api.github.com/users/jonathanasdf/repos | https://api.github.com/users/jonathanasdf/events{/privacy} | https://api.github.com/users/jonathanasdf/received_events | User | public | false | 0 | 0 | 0 | 0 | 0 | 0 | 0 | https://api.github.com/repos/huggingface/datasets/issues/7156/reactions | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | false | [
"It also doesn't preserve `split_by_node`, so in the meantime you should call `shuffle` or `split_by_node` AFTER `interleave_datasets` or `concatenate_datasets`"
] | |
https://api.github.com/repos/huggingface/datasets/issues/7155 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7155/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7155/comments | https://api.github.com/repos/huggingface/datasets/issues/7155/events | https://github.com/huggingface/datasets/issues/7155 | 2,533,641,870 | I_kwDODunzps6XBE6O | 7,155 | Dataset viewer not working! Failure due to more than 32 splits. | [] | closed | false | null | [] | null | 1 | 1,726 | 1,726 | 1,726 | NONE | null | null | Hello guys,
I have a dataset and I didn't know I couldn't upload more than 32 splits. Now my dataset viewer is not working. I don't have the dataset locally on my node anymore, and recreating it would take a week, but I have to publish the dataset this coming Monday. I have read about the practice, how I can resolve this, and how to avoid the issue in the future, but at the moment I need a hard fix for two of my datasets.
I don't want to mess with or change anything; the dataset should stay public so everyone can see and interact with it. Can you please help me?
https://huggingface.co/datasets/laion/Wikipedia-X
https://huggingface.co/datasets/laion/Wikipedia-X-Full | null | https://api.github.com/repos/huggingface/datasets/issues/7155/timeline | null | completed | sleepingcat4 | 81,933,585 | MDQ6VXNlcjgxOTMzNTg1 | https://avatars.githubusercontent.com/u/81933585?v=4 | https://api.github.com/users/sleepingcat4 | https://github.com/sleepingcat4 | https://api.github.com/users/sleepingcat4/followers | https://api.github.com/users/sleepingcat4/following{/other_user} | https://api.github.com/users/sleepingcat4/gists{/gist_id} | https://api.github.com/users/sleepingcat4/starred{/owner}{/repo} | https://api.github.com/users/sleepingcat4/subscriptions | https://api.github.com/users/sleepingcat4/orgs | https://api.github.com/users/sleepingcat4/repos | https://api.github.com/users/sleepingcat4/events{/privacy} | https://api.github.com/users/sleepingcat4/received_events | User | public | false | 0 | 0 | 0 | 0 | 0 | 0 | 0 | https://api.github.com/repos/huggingface/datasets/issues/7155/reactions | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | null | null | null | null | sleepingcat4 | 81,933,585 | MDQ6VXNlcjgxOTMzNTg1 | https://avatars.githubusercontent.com/u/81933585?v=4 | https://api.github.com/users/sleepingcat4 | https://github.com/sleepingcat4 | https://api.github.com/users/sleepingcat4/followers | https://api.github.com/users/sleepingcat4/following{/other_user} | https://api.github.com/users/sleepingcat4/gists{/gist_id} | https://api.github.com/users/sleepingcat4/starred{/owner}{/repo} | https://api.github.com/users/sleepingcat4/subscriptions | https://api.github.com/users/sleepingcat4/orgs | https://api.github.com/users/sleepingcat4/repos | https://api.github.com/users/sleepingcat4/events{/privacy} | https://api.github.com/users/sleepingcat4/received_events | User | public | false | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 
null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | false | [
"I have fixed it! But I would appreciate a new feature where I could iterate over the files and see what each one looks like."
] | ||
https://api.github.com/repos/huggingface/datasets/issues/7153 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7153/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7153/comments | https://api.github.com/repos/huggingface/datasets/issues/7153/events | https://github.com/huggingface/datasets/issues/7153 | 2,532,788,555 | I_kwDODunzps6W90lL | 7,153 | Support data files with .ndjson extension | [
{
"id": 1935892871,
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement",
"name": "enhancement",
"color": "a2eeef",
"default": true,
"description": "New feature or request"
}
] | closed | false | null | [
{
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_... | null | 0 | 1,726 | 1,726 | 1,726 | MEMBER | null | null | ### Feature request
Support data files with `.ndjson` extension.
### Motivation
We already support data files with `.jsonl` extension.
### Your contribution
I am opening a PR. | null | https://api.github.com/repos/huggingface/datasets/issues/7153/timeline | null | completed | albertvillanova | 8,515,462 | MDQ6VXNlcjg1MTU0NjI= | https://avatars.githubusercontent.com/u/8515462?v=4 | https://api.github.com/users/albertvillanova | https://github.com/albertvillanova | https://api.github.com/users/albertvillanova/followers | https://api.github.com/users/albertvillanova/following{/other_user} | https://api.github.com/users/albertvillanova/gists{/gist_id} | https://api.github.com/users/albertvillanova/starred{/owner}{/repo} | https://api.github.com/users/albertvillanova/subscriptions | https://api.github.com/users/albertvillanova/orgs | https://api.github.com/users/albertvillanova/repos | https://api.github.com/users/albertvillanova/events{/privacy} | https://api.github.com/users/albertvillanova/received_events | User | public | false | 0 | 0 | 0 | 0 | 0 | 0 | 0 | https://api.github.com/repos/huggingface/datasets/issues/7153/reactions | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | null | null | null | null | albertvillanova | 8,515,462 | MDQ6VXNlcjg1MTU0NjI= | https://avatars.githubusercontent.com/u/8515462?v=4 | https://api.github.com/users/albertvillanova | https://github.com/albertvillanova | https://api.github.com/users/albertvillanova/followers | https://api.github.com/users/albertvillanova/following{/other_user} | https://api.github.com/users/albertvillanova/gists{/gist_id} | https://api.github.com/users/albertvillanova/starred{/owner}{/repo} | https://api.github.com/users/albertvillanova/subscriptions | https://api.github.com/users/albertvillanova/orgs | https://api.github.com/users/albertvillanova/repos | https://api.github.com/users/albertvillanova/events{/privacy} | https://api.github.com/users/albertvillanova/received_events | User | public | false | albertvillanova | 8,515,462 | MDQ6VXNlcjg1MTU0NjI= | https://avatars.githubusercontent.com/u/8515462?v=4 | https://api.github.com/users/albertvillanova | 
https://github.com/albertvillanova | https://api.github.com/users/albertvillanova/followers | https://api.github.com/users/albertvillanova/following{/other_user} | https://api.github.com/users/albertvillanova/gists{/gist_id} | https://api.github.com/users/albertvillanova/starred{/owner}{/repo} | https://api.github.com/users/albertvillanova/subscriptions | https://api.github.com/users/albertvillanova/orgs | https://api.github.com/users/albertvillanova/repos | https://api.github.com/users/albertvillanova/events{/privacy} | https://api.github.com/users/albertvillanova/received_events | User | public | false | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | false | [] | |||
https://api.github.com/repos/huggingface/datasets/issues/7150 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7150/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7150/comments | https://api.github.com/repos/huggingface/datasets/issues/7150/events | https://github.com/huggingface/datasets/issues/7150 | 2,527,571,175 | I_kwDODunzps6Wp6zn | 7,150 | WebDataset loader splits keys differently than WebDataset library | [
{
"id": 1935892857,
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | null | [
{
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_... | null | 0 | 1,726 | 1,726 | 1,726 | MEMBER | null | null | As reported by @ragavsachdeva (see discussion here: https://github.com/huggingface/datasets/pull/7144#issuecomment-2348307792), our webdataset loader is not aligned with the `webdataset` library when splitting keys from filenames.
For example, we get a different key splitting for filename `/some/path/22.0/1.1.png`:
- datasets library: `/some/path/22` and `0/1.1.png`
- webdataset library: `/some/path/22.0/1` and `1.png`
```python
import webdataset as wds
wds.tariterators.base_plus_ext("/some/path/22.0/1.1.png")
# ('/some/path/22.0/1', '1.png')
```
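For illustration, the two behaviors can be reproduced with plain `re` and string operations (a sketch of the semantics described above, not either library's actual code): `base_plus_ext` splits the basename at its first dot, while the `datasets` loader was splitting the full path at its first dot.

```python
import re

# Assumed regex with base_plus_ext semantics: keep everything up to the
# first dot *after* the last slash, treat the rest as the extension.
WDS_PATTERN = re.compile(r"^((?:.*/|)[^.]+)[.]([^/]*)$")

def wds_style_split(path):
    m = WDS_PATTERN.match(path)
    return (m.group(1), m.group(2)) if m else (None, None)

def datasets_style_split(path):
    # The reported datasets behavior: split the whole path at its first dot.
    base, _, ext = path.partition(".")
    return base, ext

print(wds_style_split("/some/path/22.0/1.1.png"))       # ('/some/path/22.0/1', '1.png')
print(datasets_style_split("/some/path/22.0/1.1.png"))  # ('/some/path/22', '0/1.1.png')
```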
| null | https://api.github.com/repos/huggingface/datasets/issues/7150/timeline | null | completed | albertvillanova | 8,515,462 | MDQ6VXNlcjg1MTU0NjI= | https://avatars.githubusercontent.com/u/8515462?v=4 | https://api.github.com/users/albertvillanova | https://github.com/albertvillanova | https://api.github.com/users/albertvillanova/followers | https://api.github.com/users/albertvillanova/following{/other_user} | https://api.github.com/users/albertvillanova/gists{/gist_id} | https://api.github.com/users/albertvillanova/starred{/owner}{/repo} | https://api.github.com/users/albertvillanova/subscriptions | https://api.github.com/users/albertvillanova/orgs | https://api.github.com/users/albertvillanova/repos | https://api.github.com/users/albertvillanova/events{/privacy} | https://api.github.com/users/albertvillanova/received_events | User | public | false | 0 | 0 | 0 | 0 | 0 | 0 | 0 | https://api.github.com/repos/huggingface/datasets/issues/7150/reactions | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | null | null | null | null | albertvillanova | 8,515,462 | MDQ6VXNlcjg1MTU0NjI= | https://avatars.githubusercontent.com/u/8515462?v=4 | https://api.github.com/users/albertvillanova | https://github.com/albertvillanova | https://api.github.com/users/albertvillanova/followers | https://api.github.com/users/albertvillanova/following{/other_user} | https://api.github.com/users/albertvillanova/gists{/gist_id} | https://api.github.com/users/albertvillanova/starred{/owner}{/repo} | https://api.github.com/users/albertvillanova/subscriptions | https://api.github.com/users/albertvillanova/orgs | https://api.github.com/users/albertvillanova/repos | https://api.github.com/users/albertvillanova/events{/privacy} | https://api.github.com/users/albertvillanova/received_events | User | public | false | albertvillanova | 8,515,462 | MDQ6VXNlcjg1MTU0NjI= | https://avatars.githubusercontent.com/u/8515462?v=4 | https://api.github.com/users/albertvillanova | 
https://github.com/albertvillanova | https://api.github.com/users/albertvillanova/followers | https://api.github.com/users/albertvillanova/following{/other_user} | https://api.github.com/users/albertvillanova/gists{/gist_id} | https://api.github.com/users/albertvillanova/starred{/owner}{/repo} | https://api.github.com/users/albertvillanova/subscriptions | https://api.github.com/users/albertvillanova/orgs | https://api.github.com/users/albertvillanova/repos | https://api.github.com/users/albertvillanova/events{/privacy} | https://api.github.com/users/albertvillanova/received_events | User | public | false | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | false | [] | |||
https://api.github.com/repos/huggingface/datasets/issues/7149 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7149/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7149/comments | https://api.github.com/repos/huggingface/datasets/issues/7149/events | https://github.com/huggingface/datasets/issues/7149 | 2,524,497,448 | I_kwDODunzps6WeMYo | 7,149 | Datasets Unknown Keyword Argument Error - task_templates | [] | closed | false | null | [
{
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_... | null | 3 | 1,726 | 1,741 | 1,726 | NONE | null | null | ### Describe the bug
Issue
```python
from datasets import load_dataset
examples = load_dataset('facebook/winoground', use_auth_token=<YOUR USER ACCESS TOKEN>)
```
Gives error
```
TypeError: DatasetInfo.__init__() got an unexpected keyword argument 'task_templates'
```
A simple downgrade to `datasets` v2.21.0 solves it.
### Steps to reproduce the bug
1. `pip install datasets`
2.
```python
from datasets import load_dataset
examples = load_dataset('facebook/winoground', use_auth_token=<YOUR USER ACCESS TOKEN>)
```
### Expected behavior
Should load the dataset correctly.
### Environment info
- Datasets version `3.0.0`
- `transformers` version: 4.45.0.dev0
- Platform: Linux-6.8.0-40-generic-x86_64-with-glibc2.35
- Python version: 3.12.4
- Huggingface_hub version: 0.24.6
- Safetensors version: 0.4.5
- Accelerate version: 0.35.0.dev0
- Accelerate config: not found
- PyTorch version (GPU?): 2.4.1+cu121 (True)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: Yes
| null | https://api.github.com/repos/huggingface/datasets/issues/7149/timeline | null | completed | varungupta31 | 51,288,316 | MDQ6VXNlcjUxMjg4MzE2 | https://avatars.githubusercontent.com/u/51288316?v=4 | https://api.github.com/users/varungupta31 | https://github.com/varungupta31 | https://api.github.com/users/varungupta31/followers | https://api.github.com/users/varungupta31/following{/other_user} | https://api.github.com/users/varungupta31/gists{/gist_id} | https://api.github.com/users/varungupta31/starred{/owner}{/repo} | https://api.github.com/users/varungupta31/subscriptions | https://api.github.com/users/varungupta31/orgs | https://api.github.com/users/varungupta31/repos | https://api.github.com/users/varungupta31/events{/privacy} | https://api.github.com/users/varungupta31/received_events | User | public | false | 0 | 0 | 0 | 0 | 0 | 0 | 0 | https://api.github.com/repos/huggingface/datasets/issues/7149/reactions | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | null | null | null | null | albertvillanova | 8,515,462 | MDQ6VXNlcjg1MTU0NjI= | https://avatars.githubusercontent.com/u/8515462?v=4 | https://api.github.com/users/albertvillanova | https://github.com/albertvillanova | https://api.github.com/users/albertvillanova/followers | https://api.github.com/users/albertvillanova/following{/other_user} | https://api.github.com/users/albertvillanova/gists{/gist_id} | https://api.github.com/users/albertvillanova/starred{/owner}{/repo} | https://api.github.com/users/albertvillanova/subscriptions | https://api.github.com/users/albertvillanova/orgs | https://api.github.com/users/albertvillanova/repos | https://api.github.com/users/albertvillanova/events{/privacy} | https://api.github.com/users/albertvillanova/received_events | User | public | false | albertvillanova | 8,515,462 | MDQ6VXNlcjg1MTU0NjI= | https://avatars.githubusercontent.com/u/8515462?v=4 | https://api.github.com/users/albertvillanova | https://github.com/albertvillanova | 
https://api.github.com/users/albertvillanova/followers | https://api.github.com/users/albertvillanova/following{/other_user} | https://api.github.com/users/albertvillanova/gists{/gist_id} | https://api.github.com/users/albertvillanova/starred{/owner}{/repo} | https://api.github.com/users/albertvillanova/subscriptions | https://api.github.com/users/albertvillanova/orgs | https://api.github.com/users/albertvillanova/repos | https://api.github.com/users/albertvillanova/events{/privacy} | https://api.github.com/users/albertvillanova/received_events | User | public | false | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | false | [
"Thanks, for reporting.\r\n\r\nWe have been fixing most Hub datasets to remove the deprecated (and now non-supported) task templates, but we missed the \"facebook/winoground\".\r\n\r\nIt is fixed now: https://huggingface.co/datasets/facebook/winoground/discussions/8\r\n\r\n",
"Hello @albertvillanova \r\n\r\nI got... | |||
https://api.github.com/repos/huggingface/datasets/issues/7148 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7148/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7148/comments | https://api.github.com/repos/huggingface/datasets/issues/7148/events | https://github.com/huggingface/datasets/issues/7148 | 2,523,833,413 | I_kwDODunzps6WbqRF | 7,148 | Bug: Error when downloading mteb/mtop_domain | [] | closed | false | null | [] | null | 4 | 1,726 | 1,726 | 1,726 | NONE | null | null | ### Describe the bug
When downloading the dataset "mteb/mtop_domain", I ran into the following error:
```
Traceback (most recent call last):
File "/share/project/xzy/test/test_download.py", line 3, in <module>
data = load_dataset("mteb/mtop_domain", "en", trust_remote_code=True)
File "/opt/conda/lib/python3.10/site-packages/datasets/load.py", line 2606, in load_dataset
builder_instance = load_dataset_builder(
File "/opt/conda/lib/python3.10/site-packages/datasets/load.py", line 2277, in load_dataset_builder
dataset_module = dataset_module_factory(
File "/opt/conda/lib/python3.10/site-packages/datasets/load.py", line 1923, in dataset_module_factory
raise e1 from None
File "/opt/conda/lib/python3.10/site-packages/datasets/load.py", line 1896, in dataset_module_factory
).get_module()
File "/opt/conda/lib/python3.10/site-packages/datasets/load.py", line 1507, in get_module
local_path = self.download_loading_script()
File "/opt/conda/lib/python3.10/site-packages/datasets/load.py", line 1467, in download_loading_script
return cached_path(file_path, download_config=download_config)
File "/opt/conda/lib/python3.10/site-packages/datasets/utils/file_utils.py", line 211, in cached_path
output_path = get_from_cache(
File "/opt/conda/lib/python3.10/site-packages/datasets/utils/file_utils.py", line 689, in get_from_cache
fsspec_get(
File "/opt/conda/lib/python3.10/site-packages/datasets/utils/file_utils.py", line 395, in fsspec_get
fs.get_file(path, temp_file.name, callback=callback)
File "/opt/conda/lib/python3.10/site-packages/huggingface_hub/hf_file_system.py", line 648, in get_file
http_get(
File "/opt/conda/lib/python3.10/site-packages/huggingface_hub/file_download.py", line 578, in http_get
raise EnvironmentError(
OSError: Consistency check failed: file should be of size 2191 but has size 2190 ((…)ets/mteb/mtop_domain@main/mtop_domain.py).
We are sorry for the inconvenience. Please retry with `force_download=True`.
If the issue persists, please let us know by opening an issue on https://github.com/huggingface/huggingface_hub.
```
Trying to download through HF `datasets` directly gives the same error as above.
```python
from datasets import load_dataset
data = load_dataset("mteb/mtop_domain", "en")
```
### Steps to reproduce the bug
```python
from datasets import load_dataset
data = load_dataset("mteb/mtop_domain", "en", force_download=True)
```
Both with and without `force_download=True`, I ran into the same error.
### Expected behavior
Should download the dataset successfully.
### Environment info
- datasets version: 2.21.0
- huggingface-hub version: 0.24.6 | null | https://api.github.com/repos/huggingface/datasets/issues/7148/timeline | null | completed | ZiyiXia | 77,958,037 | MDQ6VXNlcjc3OTU4MDM3 | https://avatars.githubusercontent.com/u/77958037?v=4 | https://api.github.com/users/ZiyiXia | https://github.com/ZiyiXia | https://api.github.com/users/ZiyiXia/followers | https://api.github.com/users/ZiyiXia/following{/other_user} | https://api.github.com/users/ZiyiXia/gists{/gist_id} | https://api.github.com/users/ZiyiXia/starred{/owner}{/repo} | https://api.github.com/users/ZiyiXia/subscriptions | https://api.github.com/users/ZiyiXia/orgs | https://api.github.com/users/ZiyiXia/repos | https://api.github.com/users/ZiyiXia/events{/privacy} | https://api.github.com/users/ZiyiXia/received_events | User | public | false | 0 | 0 | 0 | 0 | 0 | 0 | 0 | https://api.github.com/repos/huggingface/datasets/issues/7148/reactions | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | null | null | null | null | albertvillanova | 8,515,462 | MDQ6VXNlcjg1MTU0NjI= | https://avatars.githubusercontent.com/u/8515462?v=4 | https://api.github.com/users/albertvillanova | https://github.com/albertvillanova | https://api.github.com/users/albertvillanova/followers | https://api.github.com/users/albertvillanova/following{/other_user} | https://api.github.com/users/albertvillanova/gists{/gist_id} | https://api.github.com/users/albertvillanova/starred{/owner}{/repo} | https://api.github.com/users/albertvillanova/subscriptions | https://api.github.com/users/albertvillanova/orgs | https://api.github.com/users/albertvillanova/repos | https://api.github.com/users/albertvillanova/events{/privacy} | https://api.github.com/users/albertvillanova/received_events | User | public | false | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null 
| null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | false | [
"Could you please try with `force_redownload` instead?\r\nEDIT:\r\n```python\r\ndata = load_dataset(\"mteb/mtop_domain\", \"en\", download_mode=\"force_redownload\")\r\n```",
"Seems the error is still there",
"I am not able to reproduce the issue:\r\n```python\r\nIn [1]: from datasets import load_dataset\r\n\r\... | ||
https://api.github.com/repos/huggingface/datasets/issues/7147 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7147/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7147/comments | https://api.github.com/repos/huggingface/datasets/issues/7147/events | https://github.com/huggingface/datasets/issues/7147 | 2,523,129,465 | I_kwDODunzps6WY-Z5 | 7,147 | IterableDataset strange deadlock | [] | closed | false | null | [] | null | 6 | 1,726 | 1,727 | 1,726 | NONE | null | null | ### Describe the bug
```python
import datasets
import torch.utils.data
num_shards = 1024
def gen(shards):
for shard in shards:
if shard < 25:
yield {"shard": shard}
def main():
dataset = datasets.IterableDataset.from_generator(
gen,
gen_kwargs={"shards": list(range(num_shards))},
)
dataset = dataset.shuffle(buffer_size=1)
dataset = datasets.interleave_datasets(
[dataset, dataset], probabilities=[1, 0], stopping_strategy="all_exhausted"
)
dataset = dataset.shuffle(buffer_size=1)
dataloader = torch.utils.data.DataLoader(
dataset,
batch_size=8,
num_workers=8,
)
for i, batch in enumerate(dataloader):
print(batch)
if i >= 10:
break
print()
if __name__ == "__main__":
for _ in range(100):
main()
```
### Steps to reproduce the bug
Running the script above, at some point it will freeze.
- Changing `num_shards` from 1024 to 25 avoids the issue
- Commenting out the final shuffle avoids the issue
- Commenting out the interleave_datasets call avoids the issue
As an aside, if you comment out just the final shuffle, the output from `interleave_datasets` is not shuffled at all even though there's a shuffle before it. So something about that shuffle config is not being propagated to `interleave_datasets`.
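The freeze can be illustrated with a pure-Python sketch (an assumption about the mechanism, not the library's actual code): under the `all_exhausted` strategy an exhausted source is restarted until every source has been exhausted at least once, so with `probabilities=[1, 0]` a worker whose first (probability-1) source yields nothing keeps drawing and restarting it forever, while the second source can never be marked exhausted.

```python
import random

def interleave_all_exhausted(sources, weights, max_draws=100_000):
    """Toy model of interleaving with the "all_exhausted" strategy."""
    its = [iter(s) for s in sources]
    seen_exhausted = [False] * len(sources)
    rng = random.Random(0)
    draws = 0
    while not all(seen_exhausted):
        draws += 1
        if draws > max_draws:  # guard standing in for the real hang
            raise RuntimeError("livelock: a source is never exhausted")
        i = rng.choices(range(len(sources)), weights=weights)[0]
        try:
            yield next(its[i])
        except StopIteration:
            seen_exhausted[i] = True
            its[i] = iter(sources[i])  # all_exhausted: restart the source

# Terminates: both sources get sampled and exhausted eventually.
ok = list(interleave_all_exhausted([[1, 2], [3]], [0.5, 0.5]))

# Spins forever (here: trips the guard): the empty probability-1 source
# is restarted on every draw, and source 1 is never drawn.
# list(interleave_all_exhausted([[], [1]], [1, 0]))
```

This only models the reported failure mode under the assumption stated above; the real implementation in `datasets` is more involved.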
### Expected behavior
The script should not freeze.
### Environment info
- `datasets` version: 3.0.0
- Platform: macOS-14.6.1-arm64-arm-64bit
- Python version: 3.12.5
- `huggingface_hub` version: 0.24.7
- PyArrow version: 17.0.0
- Pandas version: 2.2.2
- `fsspec` version: 2024.6.1
I observed this with 2.21.0 initially, then tried upgrading to 3.0.0 and could still repro. | null | https://api.github.com/repos/huggingface/datasets/issues/7147/timeline | null | completed | jonathanasdf | 511,073 | MDQ6VXNlcjUxMTA3Mw== | https://avatars.githubusercontent.com/u/511073?v=4 | https://api.github.com/users/jonathanasdf | https://github.com/jonathanasdf | https://api.github.com/users/jonathanasdf/followers | https://api.github.com/users/jonathanasdf/following{/other_user} | https://api.github.com/users/jonathanasdf/gists{/gist_id} | https://api.github.com/users/jonathanasdf/starred{/owner}{/repo} | https://api.github.com/users/jonathanasdf/subscriptions | https://api.github.com/users/jonathanasdf/orgs | https://api.github.com/users/jonathanasdf/repos | https://api.github.com/users/jonathanasdf/events{/privacy} | https://api.github.com/users/jonathanasdf/received_events | User | public | false | 0 | 0 | 0 | 0 | 0 | 0 | 0 | https://api.github.com/repos/huggingface/datasets/issues/7147/reactions | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | null | null | null | null | jonathanasdf | 511,073 | MDQ6VXNlcjUxMTA3Mw== | https://avatars.githubusercontent.com/u/511073?v=4 | https://api.github.com/users/jonathanasdf | https://github.com/jonathanasdf | https://api.github.com/users/jonathanasdf/followers | https://api.github.com/users/jonathanasdf/following{/other_user} | https://api.github.com/users/jonathanasdf/gists{/gist_id} | https://api.github.com/users/jonathanasdf/starred{/owner}{/repo} | https://api.github.com/users/jonathanasdf/subscriptions | https://api.github.com/users/jonathanasdf/orgs | https://api.github.com/users/jonathanasdf/repos | https://api.github.com/users/jonathanasdf/events{/privacy} | https://api.github.com/users/jonathanasdf/received_events | User | public | false | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 
null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | false | [
"Yes `interleave_datasets` seems to have an issue with shuffling, could you open a new issue on this ?\r\n\r\nThen regarding the deadlock, it has to do with interleave_dataset with probabilities=[1, 0] with workers that may contain an empty dataset in first position (it can be empty since you distribute 1024 shard ... | ||
https://api.github.com/repos/huggingface/datasets/issues/7142 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7142/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7142/comments | https://api.github.com/repos/huggingface/datasets/issues/7142/events | https://github.com/huggingface/datasets/issues/7142 | 2,512,244,938 | I_kwDODunzps6VvdDK | 7,142 | Specifying datatype when adding a column to a dataset. | [
{
"id": 1935892871,
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement",
"name": "enhancement",
"color": "a2eeef",
"default": true,
"description": "New feature or request"
}
] | closed | false | null | [] | null | 1 | 1,725 | 1,726 | 1,726 | CONTRIBUTOR | null | null | ### Feature request
There should be a way to specify the datatype of a column in `datasets.add_column()`.
### Motivation
To specify a custom datatype, we have to use `datasets.add_column()` followed by `datasets.cast_column()`, which is slow for large datasets. Another workaround is to pass a `numpy.array()` of the desired type to the `datasets.add_column()` function.
IMO this functionality should be natively supported.
https://discuss.huggingface.co/t/add-column-with-a-particular-type-in-datasets/95674
### Your contribution
I can submit a PR for this. | null | https://api.github.com/repos/huggingface/datasets/issues/7142/timeline | null | completed | varadhbhatnagar | 20,443,618 | MDQ6VXNlcjIwNDQzNjE4 | https://avatars.githubusercontent.com/u/20443618?v=4 | https://api.github.com/users/varadhbhatnagar | https://github.com/varadhbhatnagar | https://api.github.com/users/varadhbhatnagar/followers | https://api.github.com/users/varadhbhatnagar/following{/other_user} | https://api.github.com/users/varadhbhatnagar/gists{/gist_id} | https://api.github.com/users/varadhbhatnagar/starred{/owner}{/repo} | https://api.github.com/users/varadhbhatnagar/subscriptions | https://api.github.com/users/varadhbhatnagar/orgs | https://api.github.com/users/varadhbhatnagar/repos | https://api.github.com/users/varadhbhatnagar/events{/privacy} | https://api.github.com/users/varadhbhatnagar/received_events | User | public | false | 0 | 0 | 0 | 0 | 0 | 0 | 0 | https://api.github.com/repos/huggingface/datasets/issues/7142/reactions | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | null | null | null | null | varadhbhatnagar | 20,443,618 | MDQ6VXNlcjIwNDQzNjE4 | https://avatars.githubusercontent.com/u/20443618?v=4 | https://api.github.com/users/varadhbhatnagar | https://github.com/varadhbhatnagar | https://api.github.com/users/varadhbhatnagar/followers | https://api.github.com/users/varadhbhatnagar/following{/other_user} | https://api.github.com/users/varadhbhatnagar/gists{/gist_id} | https://api.github.com/users/varadhbhatnagar/starred{/owner}{/repo} | https://api.github.com/users/varadhbhatnagar/subscriptions | https://api.github.com/users/varadhbhatnagar/orgs | https://api.github.com/users/varadhbhatnagar/repos | https://api.github.com/users/varadhbhatnagar/events{/privacy} | https://api.github.com/users/varadhbhatnagar/received_events | User | public | false | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 
null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | false | [
"#self-assign"
] | ||
https://api.github.com/repos/huggingface/datasets/issues/7141 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7141/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7141/comments | https://api.github.com/repos/huggingface/datasets/issues/7141/events | https://github.com/huggingface/datasets/issues/7141 | 2,510,797,653 | I_kwDODunzps6Vp7tV | 7,141 | Older datasets throwing safety errors with 2.21.0 | [] | closed | false | null | [] | null | 17 | 1,725 | 1,725 | 1,725 | NONE | null | null | ### Describe the bug
Dataset loading throws safety-check errors for the popular `wmt14` dataset.
[in]:
```
import datasets
# train_data = datasets.load_dataset("wmt14", "de-en", split="train")
train_data = datasets.load_dataset("wmt14", "de-en", split="train")
val_data = datasets.load_dataset("wmt14", "de-en", split="validation[:10%]")
```
[out]:
```
---------------------------------------------------------------------------
KeyError Traceback (most recent call last)
[<ipython-input-9-445f0ecc4817>](https://localhost:8080/#) in <cell line: 4>()
2
3 # train_data = datasets.load_dataset("wmt14", "de-en", split="train")
----> 4 train_data = datasets.load_dataset("wmt14", "de-en", split="train")
5 val_data = datasets.load_dataset("wmt14", "de-en", split="validation[:10%]")
12 frames
[/usr/local/lib/python3.10/dist-packages/huggingface_hub/hf_api.py](https://localhost:8080/#) in __init__(self, **kwargs)
636 if security is not None:
637 security = BlobSecurityInfo(
--> 638 safe=security["safe"], av_scan=security["avScan"], pickle_import_scan=security["pickleImportScan"]
639 )
640 self.security = security
KeyError: 'safe'
```
### Steps to reproduce the bug
See above.
### Expected behavior
Dataset properly loaded.
### Environment info
version: 2.21.0 | null | https://api.github.com/repos/huggingface/datasets/issues/7141/timeline | null | completed | alvations | 1,050,316 | MDQ6VXNlcjEwNTAzMTY= | https://avatars.githubusercontent.com/u/1050316?v=4 | https://api.github.com/users/alvations | https://github.com/alvations | https://api.github.com/users/alvations/followers | https://api.github.com/users/alvations/following{/other_user} | https://api.github.com/users/alvations/gists{/gist_id} | https://api.github.com/users/alvations/starred{/owner}{/repo} | https://api.github.com/users/alvations/subscriptions | https://api.github.com/users/alvations/orgs | https://api.github.com/users/alvations/repos | https://api.github.com/users/alvations/events{/privacy} | https://api.github.com/users/alvations/received_events | User | public | false | 0 | 0 | 0 | 0 | 0 | 0 | 0 | https://api.github.com/repos/huggingface/datasets/issues/7141/reactions | 29 | 26 | 0 | 0 | 0 | 3 | 0 | 0 | 0 | null | null | null | null | null | null | muellerzr | 7,831,895 | MDQ6VXNlcjc4MzE4OTU= | https://avatars.githubusercontent.com/u/7831895?v=4 | https://api.github.com/users/muellerzr | https://github.com/muellerzr | https://api.github.com/users/muellerzr/followers | https://api.github.com/users/muellerzr/following{/other_user} | https://api.github.com/users/muellerzr/gists{/gist_id} | https://api.github.com/users/muellerzr/starred{/owner}{/repo} | https://api.github.com/users/muellerzr/subscriptions | https://api.github.com/users/muellerzr/orgs | https://api.github.com/users/muellerzr/repos | https://api.github.com/users/muellerzr/events{/privacy} | https://api.github.com/users/muellerzr/received_events | User | public | false | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 
null | null | null | null | null | null | null | null | null | false | [
"I am also getting this error with this dataset: https://huggingface.co/datasets/google/IFEval",
"Me too, didn't have this issue few hours ago.",
"same observation. I even downgraded `datasets==2.20.0` and `huggingface_hub==0.23.5` leading me to believe it's an issue on the server.\r\n\r\nany known workarounds?... | ||
https://api.github.com/repos/huggingface/datasets/issues/7139 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7139/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7139/comments | https://api.github.com/repos/huggingface/datasets/issues/7139/events | https://github.com/huggingface/datasets/issues/7139 | 2,508,078,858 | I_kwDODunzps6Vfj8K | 7,139 | Use load_dataset to load imagenet-1K But find a empty dataset | [] | open | false | null | [] | null | 2 | 1,725 | 1,728 | null | NONE | null | null | ### Describe the bug
```python
def get_dataset(data_path, train_folder="train", val_folder="val"):
traindir = os.path.join(data_path, train_folder)
valdir = os.path.join(data_path, val_folder)
def transform_val_examples(examples):
transform = Compose([
Resize(256),
CenterCrop(224),
ToTensor(),
])
examples["image"] = [transform(image.convert("RGB")) for image in examples["image"]]
return examples
def transform_train_examples(examples):
transform = Compose([
RandomResizedCrop(224),
RandomHorizontalFlip(),
ToTensor(),
])
examples["image"] = [transform(image.convert("RGB")) for image in examples["image"]]
return examples
# @fengsicheng: This way is very slow for big dataset like ImageNet-1K (but can pass the network problem using local dataset)
# train_set = load_dataset("imagefolder", data_dir=traindir, num_proc=4)
# test_set = load_dataset("imagefolder", data_dir=valdir, num_proc=4)
train_set = load_dataset("imagenet-1K", split="train", trust_remote_code=True)
test_set = load_dataset("imagenet-1K", split="test", trust_remote_code=True)
print(train_set["label"])
train_set.set_transform(transform_train_examples)
test_set.set_transform(transform_val_examples)
return train_set, test_set
```
Above is the code; the output of the `print` is a list of `None`:
<img width="952" alt="image" src="https://github.com/user-attachments/assets/c4e2fdd8-3b8f-481e-8f86-9bbeb49d79fb">
### Steps to reproduce the bug
1. just ran the code
2. see the print
### Expected behavior
I do not know how to fix this. Can anyone provide help? It is urgent for me.
### Environment info
- `datasets` version: 2.21.0
- Platform: Linux-5.4.0-190-generic-x86_64-with-glibc2.31
- Python version: 3.10.14
- `huggingface_hub` version: 0.24.6
- PyArrow version: 17.0.0
- Pandas version: 2.2.2
- `fsspec` version: 2024.6.1 | null | https://api.github.com/repos/huggingface/datasets/issues/7139/timeline | null | null | fscdc | 105,094,708 | U_kgDOBkOeNA | https://avatars.githubusercontent.com/u/105094708?v=4 | https://api.github.com/users/fscdc | https://github.com/fscdc | https://api.github.com/users/fscdc/followers | https://api.github.com/users/fscdc/following{/other_user} | https://api.github.com/users/fscdc/gists{/gist_id} | https://api.github.com/users/fscdc/starred{/owner}{/repo} | https://api.github.com/users/fscdc/subscriptions | https://api.github.com/users/fscdc/orgs | https://api.github.com/users/fscdc/repos | https://api.github.com/users/fscdc/events{/privacy} | https://api.github.com/users/fscdc/received_events | User | public | false | 0 | 0 | 0 | 0 | 0 | 0 | 0 | https://api.github.com/repos/huggingface/datasets/issues/7139/reactions | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | false | [
"Imagenet-1k is a gated dataset which means you’ll have to agree to share your contact info to access it. Have you tried this yet? Once you have, you can sign in with your user token (you can find this in your Hugging Face account settings) when prompted by running.\r\n\r\n```\r\nhuggingface-cli login\r\ntrain_set... | |
https://api.github.com/repos/huggingface/datasets/issues/7138 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7138/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7138/comments | https://api.github.com/repos/huggingface/datasets/issues/7138/events | https://github.com/huggingface/datasets/issues/7138 | 2,507,738,308 | I_kwDODunzps6VeQzE | 7,138 | Cache only changed columns? | [
{
"id": 1935892871,
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement",
"name": "enhancement",
"color": "a2eeef",
"default": true,
"description": "New feature or request"
}
] | open | false | null | [] | null | 2 | 1,725 | 1,726 | null | CONTRIBUTOR | null | null | ### Feature request
Cache only the actual changes to the dataset, i.e. the changed columns.
### Motivation
I realized that caching actually saves the complete dataset again.
This is especially problematic for image datasets if one wants to change only a single column, e.g. some metadata, and then has to save 5 TB again.
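In plain column terms, the requested behavior would recompute and persist only the touched column while reusing everything else; a minimal pure-Python sketch (a toy stand-in, not the `datasets` API):

```python
def map_changed_columns(dataset, fn, columns):
    # dataset: dict of column name -> list of values (a toy stand-in for a table).
    # Only the listed columns are recomputed; heavy columns such as images
    # are reused by reference and would never need to be re-serialized.
    changed = {col: [fn(value) for value in dataset[col]] for col in columns}
    return {**dataset, **changed}
```

The untouched columns come back as the very same objects, which is the property that would let a cache skip rewriting them.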
### Your contribution
Is this even viable in the current architecture of the package?
I quickly looked into it and it seems it would require significant changes.
I would spend some time looking into this but maybe somebody could help with the feasibility and some plan to implement before spending too much time on it? | null | https://api.github.com/repos/huggingface/datasets/issues/7138/timeline | null | null | Modexus | 37,351,874 | MDQ6VXNlcjM3MzUxODc0 | https://avatars.githubusercontent.com/u/37351874?v=4 | https://api.github.com/users/Modexus | https://github.com/Modexus | https://api.github.com/users/Modexus/followers | https://api.github.com/users/Modexus/following{/other_user} | https://api.github.com/users/Modexus/gists{/gist_id} | https://api.github.com/users/Modexus/starred{/owner}{/repo} | https://api.github.com/users/Modexus/subscriptions | https://api.github.com/users/Modexus/orgs | https://api.github.com/users/Modexus/repos | https://api.github.com/users/Modexus/events{/privacy} | https://api.github.com/users/Modexus/received_events | User | public | false | 0 | 0 | 0 | 0 | 0 | 0 | 0 | https://api.github.com/repos/huggingface/datasets/issues/7138/reactions | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | false | [
"so I guess a workaround to this is to simply remove all columns except the ones to cache and then add them back with `concatenate_datasets(..., axis=1)`.",
"yes this is the right workaround. We're keeping the cache like this to make it easier for people to delete intermediate cache files"
] | |
https://api.github.com/repos/huggingface/datasets/issues/7137 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7137/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7137/comments | https://api.github.com/repos/huggingface/datasets/issues/7137/events | https://github.com/huggingface/datasets/issues/7137 | 2,506,851,048 | I_kwDODunzps6Va4Lo | 7,137 | [BUG] dataset_info sequence unexpected behavior in README.md YAML | [] | closed | false | null | [] | null | 3 | 1,725 | 1,751 | 1,751 | NONE | null | null | ### Describe the bug
When working on the `dataset_info` YAML, I found that my data column with format `list[dict[str, str]]` cannot be encoded correctly.
My data looks like
```
{"answers":[{"text": "ADDRESS", "label": "abc"}]}
```
My `dataset_info` in README.md is:
```
dataset_info:
- config_name: default
features:
- name: answers
sequence:
- name: text
dtype: string
- name: label
dtype: string
```
**Error log**:
```
pyarrow.lib.ArrowNotImplementedError: Unsupported cast from list<item: struct<text: string, label: string>> to struct using function cast_struct
```
## Potential Reason
After some analysis, it turns out that my YAML config actually requires `dict[str, list[str]]` instead of `list[dict[str, str]]`. It would work if I changed my data to
```
{"answers":{"text": ["ADDRESS"], "label": ["abc", "def"]}}
```
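In plain Python, the reshaping that the `sequence` type implies, from `list[dict[str, str]]` to `dict[str, list[str]]`, looks like this (a hypothetical helper, not part of `datasets`):

```python
def rows_to_columns(rows):
    # [{"text": "ADDRESS", "label": "abc"}] -> {"text": ["ADDRESS"], "label": ["abc"]}
    keys = rows[0].keys() if rows else []
    return {key: [row[key] for row in rows] for key in keys}
```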
The following two different `dataset_info` configs are actually equivalent:
```
dataset_info:
- config_name: default
features:
- name: answers
dtype:
- name: text
sequence: string
- name: label
sequence: string
dataset_info:
- config_name: default
features:
- name: answers
sequence:
- name: text
dtype: string
- name: label
dtype: string
```
### Steps to reproduce the bug
```
# README.md
---
dataset_info:
- config_name: default
features:
- name: answers
sequence:
- name: text
dtype: string
- name: label
dtype: string
configs:
- config_name: default
default: true
data_files:
- split: train
path:
- "test.jsonl"
---
# test.jsonl
# expected but not working
{"answers":[{"text": "ADDRESS", "label": "abc"}]}
# unexpected but working
{"answers":{"text": ["ADDRESS"], "label": ["abc", "def"]}}
```
### Expected behavior
```
dataset_info:
- config_name: default
features:
- name: answers
sequence:
- name: text
dtype: string
- name: label
dtype: string
```
Should work with the following data format:
```
{"answers":[{"text":"ADDRESS", "label": "abc"}]}
```
### Environment info
- `datasets` version: 2.21.0
- Platform: macOS-14.6.1-arm64-arm-64bit
- Python version: 3.12.4
- `huggingface_hub` version: 0.24.5
- PyArrow version: 17.0.0
- Pandas version: 2.2.2
- `fsspec` version: 2024.6.1 | null | https://api.github.com/repos/huggingface/datasets/issues/7137/timeline | null | completed | ain-soph | 13,214,530 | MDQ6VXNlcjEzMjE0NTMw | https://avatars.githubusercontent.com/u/13214530?v=4 | https://api.github.com/users/ain-soph | https://github.com/ain-soph | https://api.github.com/users/ain-soph/followers | https://api.github.com/users/ain-soph/following{/other_user} | https://api.github.com/users/ain-soph/gists{/gist_id} | https://api.github.com/users/ain-soph/starred{/owner}{/repo} | https://api.github.com/users/ain-soph/subscriptions | https://api.github.com/users/ain-soph/orgs | https://api.github.com/users/ain-soph/repos | https://api.github.com/users/ain-soph/events{/privacy} | https://api.github.com/users/ain-soph/received_events | User | public | false | 0 | 0 | 0 | 0 | 0 | 0 | 0 | https://api.github.com/repos/huggingface/datasets/issues/7137/reactions | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | null | null | null | null | ain-soph | 13,214,530 | MDQ6VXNlcjEzMjE0NTMw | https://avatars.githubusercontent.com/u/13214530?v=4 | https://api.github.com/users/ain-soph | https://github.com/ain-soph | https://api.github.com/users/ain-soph/followers | https://api.github.com/users/ain-soph/following{/other_user} | https://api.github.com/users/ain-soph/gists{/gist_id} | https://api.github.com/users/ain-soph/starred{/owner}{/repo} | https://api.github.com/users/ain-soph/subscriptions | https://api.github.com/users/ain-soph/orgs | https://api.github.com/users/ain-soph/repos | https://api.github.com/users/ain-soph/events{/privacy} | https://api.github.com/users/ain-soph/received_events | User | public | false | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 
null | null | null | null | null | null | null | null | false | [
"The non-sequence case works well (`dict[str, str]` instead of `list[dict[str, str]]`), which makes me believe it shall be a bug for `sequence` and my proposed behavior shall be expected.\r\n```\r\ndataset_info:\r\n- config_name: default\r\n features:\r\n - name: answers\r\n dtype:\r\n - name: text\r\n ... | ||
https://api.github.com/repos/huggingface/datasets/issues/7135 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7135/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7135/comments | https://api.github.com/repos/huggingface/datasets/issues/7135/events | https://github.com/huggingface/datasets/issues/7135 | 2,503,318,328 | I_kwDODunzps6VNZs4 | 7,135 | Bug: Type Mismatch in Dataset Mapping | [] | open | false | null | [] | null | 3 | 1,725 | 1,725 | null | NONE | null | null | # Issue: Type Mismatch in Dataset Mapping
## Description
There is an issue with the `map` function in the `datasets` library where the mapped output does not reflect the expected type change. After applying a mapping function to convert an integer label to a string, the resulting type remains an integer instead of a string.
## Reproduction Code
Below is a Python script that demonstrates the problem:
```python
from datasets import Dataset
# Original data
data = {
'text': ['Hello', 'world', 'this', 'is', 'a', 'test'],
'label': [0, 1, 0, 1, 1, 0]
}
# Creating a Dataset object
dataset = Dataset.from_dict(data)
# Mapping function to convert label to string
def add_one(example):
example['label'] = str(example['label'])
return example
# Applying the mapping function
dataset = dataset.map(add_one)
# Iterating over the dataset to show results
for item in dataset:
print(item)
print(type(item['label']))
```
## Expected Output
After applying the mapping function, the expected output should have the `label` field as strings:
```plaintext
{'text': 'Hello', 'label': '0'}
<class 'str'>
{'text': 'world', 'label': '1'}
<class 'str'>
{'text': 'this', 'label': '0'}
<class 'str'>
{'text': 'is', 'label': '1'}
<class 'str'>
{'text': 'a', 'label': '1'}
<class 'str'>
{'text': 'test', 'label': '0'}
<class 'str'>
```
## Actual Output
The actual output still shows the `label` field values as integers:
```plaintext
{'text': 'Hello', 'label': 0}
<class 'int'>
{'text': 'world', 'label': 1}
<class 'int'>
{'text': 'this', 'label': 0}
<class 'int'>
{'text': 'is', 'label': 1}
<class 'int'>
{'text': 'a', 'label': 1}
<class 'int'>
{'text': 'test', 'label': 0}
<class 'int'>
```
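One plausible reading (an assumption, not a confirmed explanation) is that `map` coerces returned values back to the dataset's existing schema unless new features are declared; a pure-Python sketch of that cast-back behavior:

```python
def map_with_cast_back(rows, fn, schema):
    # schema: column name -> type constructor, standing in for the dataset's
    # declared features; every output value is coerced back to the old type.
    out = []
    for row in rows:
        new = fn(dict(row))
        out.append({col: schema[col](val) for col, val in new.items()})
    return out
```

With `schema={'text': str, 'label': int}`, the mapped `str(0)` comes back as `int('0') == 0`, reproducing the integer labels above.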
## Why this is necessary
In the case of image processing, we often need to convert a PIL image to a tensor under the same column name.
Thank for every dev who review this issue. 🤗 | null | https://api.github.com/repos/huggingface/datasets/issues/7135/timeline | null | null | marko1616 | 45,327,989 | MDQ6VXNlcjQ1MzI3OTg5 | https://avatars.githubusercontent.com/u/45327989?v=4 | https://api.github.com/users/marko1616 | https://github.com/marko1616 | https://api.github.com/users/marko1616/followers | https://api.github.com/users/marko1616/following{/other_user} | https://api.github.com/users/marko1616/gists{/gist_id} | https://api.github.com/users/marko1616/starred{/owner}{/repo} | https://api.github.com/users/marko1616/subscriptions | https://api.github.com/users/marko1616/orgs | https://api.github.com/users/marko1616/repos | https://api.github.com/users/marko1616/events{/privacy} | https://api.github.com/users/marko1616/received_events | User | public | false | 0 | 0 | 0 | 0 | 0 | 0 | 0 | https://api.github.com/repos/huggingface/datasets/issues/7135/reactions | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | false | [
"By the way, following code is working. This show the inconsistentcy.\r\n```python\r\nfrom datasets import Dataset\r\n\r\n# Original data\r\ndata = {\r\n 'text': ['Hello', 'world', 'this', 'is', 'a', 'test'],\r\n 'label': [0, 1, 0, 1, 1, 0]\r\n}\r\n\r\n# Creating a Dataset object\r\ndataset = Dataset.from_dic... | |
https://api.github.com/repos/huggingface/datasets/issues/7134 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7134/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7134/comments | https://api.github.com/repos/huggingface/datasets/issues/7134/events | https://github.com/huggingface/datasets/issues/7134 | 2,499,484,041 | I_kwDODunzps6U-xmJ | 7,134 | Attempting to return a rank 3 grayscale image from dataset.map results in extreme slowdown | [] | open | false | null | [] | null | 0 | 1,725 | 1,725 | null | NONE | null | null | ### Describe the bug
Background: Digital images are often represented as a (Height, Width, Channel) tensor. This is the same for huggingface datasets that contain images. These images are loaded in Pillow containers which offer, for example, the `.convert` method.
I can convert an image from a (H,W,3) shape to a grayscale (H,W) image and I have no problems with this. But when attempting to return a (H,W,1) shaped matrix from a map function, it never completes and sometimes even results in an OOM from the OS.
I've used various methods to expand a (H,W) shaped array to a (H,W,1) array. But they all resulted in extremely long map operations consuming a lot of CPU and RAM.
### Steps to reproduce the bug
Below is a minimal example using two methods to get the desired output, neither of which works:
```py
import tensorflow as tf
import datasets
import numpy as np
ds = datasets.load_dataset("project-sloth/captcha-images")
to_gray_pillow = lambda sample: {'image': np.expand_dims(sample['image'].convert("L"), axis=-1)}
ds_gray = ds.map(to_gray_pillow)
# Alternatively (note: tf.image.rgb_to_grayscale already appends a channel
# axis, so wrapping it in another expand_dims would yield a (H, W, 1, 1) shape)
ds = datasets.load_dataset("project-sloth/captcha-images").with_format("tensorflow")
to_gray_tf = lambda sample: {'image': tf.image.rgb_to_grayscale(sample['image'])}
ds_gray = ds.map(to_gray_tf)
```
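For comparison, the conversion itself is cheap outside of `map`; a standalone NumPy sketch (ITU-R BT.601 luma weights, assuming a `uint8` RGB array, not using `datasets` at all):

```python
import numpy as np

def rgb_to_gray_hw1(img):
    # img: (H, W, 3) uint8 -> (H, W, 1) uint8 via BT.601 weights
    weights = np.array([0.299, 0.587, 0.114])
    gray = (img.astype(np.float64) @ weights).round().astype(np.uint8)
    return gray[..., np.newaxis]
```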
### Expected behavior
I expect the map operation to complete and return a new dataset containing grayscale images in a (H,W,1) shape.
### Environment info
datasets 2.21.0
python tested with both 3.11 and 3.12
host os : linux | null | https://api.github.com/repos/huggingface/datasets/issues/7134/timeline | null | null | navidmafi | 46,371,349 | MDQ6VXNlcjQ2MzcxMzQ5 | https://avatars.githubusercontent.com/u/46371349?v=4 | https://api.github.com/users/navidmafi | https://github.com/navidmafi | https://api.github.com/users/navidmafi/followers | https://api.github.com/users/navidmafi/following{/other_user} | https://api.github.com/users/navidmafi/gists{/gist_id} | https://api.github.com/users/navidmafi/starred{/owner}{/repo} | https://api.github.com/users/navidmafi/subscriptions | https://api.github.com/users/navidmafi/orgs | https://api.github.com/users/navidmafi/repos | https://api.github.com/users/navidmafi/events{/privacy} | https://api.github.com/users/navidmafi/received_events | User | public | false | 0 | 0 | 0 | 0 | 0 | 0 | 0 | https://api.github.com/repos/huggingface/datasets/issues/7134/reactions | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | false | [] | |
https://api.github.com/repos/huggingface/datasets/issues/7129 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7129/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7129/comments | https://api.github.com/repos/huggingface/datasets/issues/7129/events | https://github.com/huggingface/datasets/issues/7129 | 2,491,942,650 | I_kwDODunzps6UiAb6 | 7,129 | Inconsistent output in documentation example: `num_classes` not displayed in `ClassLabel` output | [] | closed | false | null | [] | null | 0 | 1,724 | 1,733 | 1,733 | MEMBER | null | null | In the documentation for [ClassLabel](https://huggingface.co/docs/datasets/v2.21.0/en/package_reference/main_classes#datasets.ClassLabel), there is an example of usage with the following code:
````
from datasets import Features
features = Features({'label': ClassLabel(num_classes=3, names=['bad', 'ok', 'good'])})
features
````
which is expected to output (as stated in the documentation):
````
{'label': ClassLabel(num_classes=3, names=['bad', 'ok', 'good'], id=None)}
````
but it actually generates the following:
````
{'label': ClassLabel(names=['bad', 'ok', 'good'], id=None)}
````
If my understanding is correct, this happens because although `num_classes` is used during the init of the object, it is afterwards ignored:
https://github.com/huggingface/datasets/blob/be5cff059a2a5b89d7a97bc04739c4919ab8089f/src/datasets/features/features.py#L975
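The omission is consistent with `num_classes` being a pseudo-field consumed at init time; a small dataclass sketch (a toy reproduction, not the actual `ClassLabel` code) shows how an `InitVar` disappears from the generated `__repr__`:

```python
from dataclasses import dataclass, InitVar

@dataclass
class MiniClassLabel:
    # num_classes is an InitVar: it is passed to __post_init__ but is not a
    # real field, so it never appears in the auto-generated __repr__
    num_classes: InitVar[int] = None
    names: list = None
    id: str = None

    def __post_init__(self, num_classes):
        if self.names is None and num_classes is not None:
            self.names = [str(i) for i in range(num_classes)]
```

`repr(MiniClassLabel(3, ['bad', 'ok', 'good']))` then reads `MiniClassLabel(names=['bad', 'ok', 'good'], id=None)`, matching the observed output.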
I would like to work on this issue if this is something needed 😄
| null | https://api.github.com/repos/huggingface/datasets/issues/7129/timeline | null | completed | sergiopaniego | 17,179,696 | MDQ6VXNlcjE3MTc5Njk2 | https://avatars.githubusercontent.com/u/17179696?v=4 | https://api.github.com/users/sergiopaniego | https://github.com/sergiopaniego | https://api.github.com/users/sergiopaniego/followers | https://api.github.com/users/sergiopaniego/following{/other_user} | https://api.github.com/users/sergiopaniego/gists{/gist_id} | https://api.github.com/users/sergiopaniego/starred{/owner}{/repo} | https://api.github.com/users/sergiopaniego/subscriptions | https://api.github.com/users/sergiopaniego/orgs | https://api.github.com/users/sergiopaniego/repos | https://api.github.com/users/sergiopaniego/events{/privacy} | https://api.github.com/users/sergiopaniego/received_events | User | public | false | 0 | 0 | 0 | 0 | 0 | 0 | 0 | https://api.github.com/repos/huggingface/datasets/issues/7129/reactions | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | null | null | null | null | lhoestq | 42,851,186 | MDQ6VXNlcjQyODUxMTg2 | https://avatars.githubusercontent.com/u/42851186?v=4 | https://api.github.com/users/lhoestq | https://github.com/lhoestq | https://api.github.com/users/lhoestq/followers | https://api.github.com/users/lhoestq/following{/other_user} | https://api.github.com/users/lhoestq/gists{/gist_id} | https://api.github.com/users/lhoestq/starred{/owner}{/repo} | https://api.github.com/users/lhoestq/subscriptions | https://api.github.com/users/lhoestq/orgs | https://api.github.com/users/lhoestq/repos | https://api.github.com/users/lhoestq/events{/privacy} | https://api.github.com/users/lhoestq/received_events | User | public | false | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null 
| null | null | null | null | null | null | null | null | null | null | false | [] | ||
https://api.github.com/repos/huggingface/datasets/issues/7128 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7128/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7128/comments | https://api.github.com/repos/huggingface/datasets/issues/7128/events | https://github.com/huggingface/datasets/issues/7128 | 2,490,274,775 | I_kwDODunzps6UbpPX | 7,128 | Filter Large Dataset Entry by Entry | [
{
"id": 1935892871,
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement",
"name": "enhancement",
"color": "a2eeef",
"default": true,
"description": "New feature or request"
}
] | open | false | null | [] | null | 4 | 1,724 | 1,728 | null | NONE | null | null | ### Feature request
I am not sure if this is a new feature, but I wanted to post this problem here, and hear if others have ways of optimizing and speeding up this process.
Let's say I have a really large dataset that I cannot load into memory. At this point, I am only aware of `streaming=True` to load the dataset. Now, the dataset consists of many tables. Ideally, I would want to have some simple filtering criterion, such that I only see the "good" tables. Here is an example of what the code might look like:
```
from itertools import islice
from datasets import load_dataset

dataset = load_dataset(
"really-large-dataset",
streaming=True
)
# And let's say we process the dataset bit by bit because we want intermediate results
dataset = islice(dataset, 10000)
# Define a function to filter the data
def filter_function(table):
if some_condition:
return True
else:
return False
# Use the filter function on your dataset
filtered_dataset = (ex for ex in dataset if filter_function(ex))
```
And then I work on the processed dataset, which would be orders of magnitude faster than working on the original. I would love to hear whether the problem setup and solution make sense to people, and if anyone has suggestions!
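As a self-contained illustration of the pattern (with a stand-in generator instead of a real streamed dataset, and a made-up quality criterion):

```python
from itertools import islice

def fake_stream():
    # stand-in for a streaming dataset: any iterable of example dicts
    for i in range(100_000):
        yield {"table_id": i, "quality": i % 7}

def filter_function(example):
    # hypothetical criterion for a "good" table
    return example["quality"] == 0

subset = islice(fake_stream(), 10_000)
filtered = [ex for ex in subset if filter_function(ex)]
```

Nothing beyond the first 10,000 examples is ever pulled from the stream, so only that slice gets downloaded and processed.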
### Motivation
See description above
### Your contribution
Happy to make PR if this is a new feature | null | https://api.github.com/repos/huggingface/datasets/issues/7128/timeline | null | null | QiyaoWei | 36,057,290 | MDQ6VXNlcjM2MDU3Mjkw | https://avatars.githubusercontent.com/u/36057290?v=4 | https://api.github.com/users/QiyaoWei | https://github.com/QiyaoWei | https://api.github.com/users/QiyaoWei/followers | https://api.github.com/users/QiyaoWei/following{/other_user} | https://api.github.com/users/QiyaoWei/gists{/gist_id} | https://api.github.com/users/QiyaoWei/starred{/owner}{/repo} | https://api.github.com/users/QiyaoWei/subscriptions | https://api.github.com/users/QiyaoWei/orgs | https://api.github.com/users/QiyaoWei/repos | https://api.github.com/users/QiyaoWei/events{/privacy} | https://api.github.com/users/QiyaoWei/received_events | User | public | false | 0 | 0 | 0 | 0 | 0 | 0 | 0 | https://api.github.com/repos/huggingface/datasets/issues/7128/reactions | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | false | [
"Hi ! you can do\r\n\r\n```python\r\nfiltered_dataset = dataset.filter(filter_function)\r\n```\r\n\r\non a subset:\r\n\r\n```python\r\nfiltered_subset = dataset.select(range(10_000)).filter(filter_function)\r\n```\r\n",
"Jumping on this as it seems relevant - when I use the `filter` method, it often results in an... | |
https://api.github.com/repos/huggingface/datasets/issues/7127 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7127/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7127/comments | https://api.github.com/repos/huggingface/datasets/issues/7127/events | https://github.com/huggingface/datasets/issues/7127 | 2,486,524,966 | I_kwDODunzps6UNVwm | 7,127 | Caching shuffles by np.random.Generator results in unintiutive behavior | [] | open | false | null | [] | null | 2 | 1,724 | 1,753 | null | NONE | null | null | ### Describe the bug
Create a dataset. Save it to disk. Load it from disk. Shuffle, using a `np.random.Generator`. Iterate. Shuffle again. Iterate. The two iterations differ, since the supplied `np.random.Generator` has progressed between the shuffles.
Load the dataset from disk again. Shuffle and iterate; see the same result as before. Shuffle and iterate once more, and this time it does not give the same shuffling as in the previous run.
The motivation is I have a deep learning loop with
```
for epoch in range(10):
for batch in dataset.shuffle(generator=generator).iter(batch_size=32):
.... # do stuff
```
where I want a new shuffling at every epoch. Instead I get the same shuffling.
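For illustration, the symptom matches a shuffle cache keyed only by the dataset's fingerprint, which ignores how far the supplied generator has advanced; a pure-Python sketch of that caching pattern (hypothetical, not the actual `datasets` implementation):

```python
import random

_shuffle_cache = {}

def cached_shuffle(values, fingerprint, rng):
    # Sketch: the cache key depends only on the dataset fingerprint, not on
    # the rng's advancing state, so every call after the first returns the
    # first shuffle again instead of a fresh one.
    if fingerprint not in _shuffle_cache:
        shuffled = list(values)
        rng.shuffle(shuffled)
        _shuffle_cache[fingerprint] = shuffled
    return _shuffle_cache[fingerprint]
```

If that is what is happening, `ds.shuffle(generator=generator, load_from_cache_file=False)` or `datasets.disable_caching()` should give a fresh shuffle per epoch.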
### Steps to reproduce the bug
Run the code below two times.
```python
import datasets
import numpy as np
generator = np.random.default_rng(0)
ds = datasets.Dataset.from_dict(mapping={"X":range(1000)})
ds.save_to_disk("tmp")
print("First loop: ", end="")
for _ in range(10):
print(next(ds.shuffle(generator=generator).iter(batch_size=1))['X'], end=", ")
print("")
print("Second loop: ", end="")
ds = datasets.Dataset.load_from_disk("tmp")
for _ in range(10):
print(next(ds.shuffle(generator=generator).iter(batch_size=1))['X'], end=", ")
print("")
```
The output is:
```
$ python main.py
Saving the dataset (1/1 shards): 100%|███████████████████████████████████████████████████████████████████████| 1000/1000 [00:00<00:00, 495019.95 examples/s]
First loop: 459, 739, 72, 943, 241, 181, 845, 830, 896, 334,
Second loop: 741, 847, 944, 795, 483, 842, 717, 865, 231, 840,
$ python main.py
Saving the dataset (1/1 shards): 100%|████████████████████████████████████████████████████████████████████████| 1000/1000 [00:00<00:00, 22243.40 examples/s]
First loop: 459, 739, 72, 943, 241, 181, 845, 830, 896, 334,
Second loop: 741, 741, 741, 741, 741, 741, 741, 741, 741, 741,
```
The second loop, on the second run, only spits out "741, 741, 741...." which is *not* the desired output
### Expected behavior
I want the dataset to shuffle at every epoch since I provide it with a generator for shuffling.
### Environment info
Datasets version 2.21.0
Ubuntu linux. | null | https://api.github.com/repos/huggingface/datasets/issues/7127/timeline | null | null | el-hult | 11,832,922 | MDQ6VXNlcjExODMyOTIy | https://avatars.githubusercontent.com/u/11832922?v=4 | https://api.github.com/users/el-hult | https://github.com/el-hult | https://api.github.com/users/el-hult/followers | https://api.github.com/users/el-hult/following{/other_user} | https://api.github.com/users/el-hult/gists{/gist_id} | https://api.github.com/users/el-hult/starred{/owner}{/repo} | https://api.github.com/users/el-hult/subscriptions | https://api.github.com/users/el-hult/orgs | https://api.github.com/users/el-hult/repos | https://api.github.com/users/el-hult/events{/privacy} | https://api.github.com/users/el-hult/received_events | User | public | false | 0 | 0 | 0 | 0 | 0 | 0 | 0 | https://api.github.com/repos/huggingface/datasets/issues/7127/reactions | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | false | [
"I first thought this was a mistake of mine, and also posted on stack overflow. https://stackoverflow.com/questions/78913797/iterating-a-huggingface-dataset-from-disk-using-generator-seems-broken-how-to-d \r\n\r\nIt seems to me the issue is the caching step in \r\n\r\nhttps://github.com/huggingface/datasets/blob/be... | |
https://api.github.com/repos/huggingface/datasets/issues/7123 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7123/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7123/comments | https://api.github.com/repos/huggingface/datasets/issues/7123/events | https://github.com/huggingface/datasets/issues/7123 | 2,484,003,937 | I_kwDODunzps6UDuRh | 7,123 | Make dataset viewer more flexible in displaying metadata alongside images | [
{
"id": 1935892871,
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement",
"name": "enhancement",
"color": "a2eeef",
"default": true,
"description": "New feature or request"
}
] | open | false | null | [] | null | 3 | 1,724 | 1,729 | null | NONE | null | null | ### Feature request
To display images with their associated metadata in the dataset viewer, a `metadata.csv` file is required. In the case of a dataset with multiple subsets, this would require the CSVs to be contained in the same folder as the images since they all need to be named `metadata.csv`. The request is that this be made more flexible for datasets with multiple subsets to avoid the need to put a `metadata.csv` into each image directory where they are not as easily accessed.
### Motivation
When creating datasets with multiple subsets I can't get the images to display alongside their associated metadata (it's usually one or the other that will show up). Since this requires a file specifically named `metadata.csv`, I then have to place that file within the image directory, which makes it much more difficult to access. Additionally, it still doesn't necessarily display the images alongside their metadata correctly (see, for instance, [this discussion](https://huggingface.co/datasets/imageomics/2018-NEON-beetles/discussions/8)).
It was suggested I bring this discussion to GitHub on another dataset struggling with a similar issue ([discussion](https://huggingface.co/datasets/imageomics/fish-vista/discussions/4)). In that case, it's a mix of data subsets, where some just reference the image URLs, while others actually have the images uploaded. The ones with images uploaded are not displaying images, but renaming that file to just `metadata.csv` would diminish the clarity of the construction of the dataset itself (and I'm not entirely convinced it would solve the issue).
### Your contribution
I can make a suggestion for one approach to address the issue:
For instance, even if it could just end in `_metadata.csv` or `-metadata.csv`, that would be very helpful to allow for more flexibility of dataset structure without impacting clarity. I would think that the functionality on the backend looking for `metadata.csv` could reasonably be adapted to look for such an ending on a filename (maybe also check that it has a `file_name` column?).
Presumably, requiring the `configs` in a setup like on [this dataset](https://huggingface.co/datasets/imageomics/rare-species/blob/main/README.md) could also help in figuring out how it should work?
```
configs:
- config_name: <image subset>
data_files:
- <image-metadata>.csv
- <path/to/images>/*.jpg
```
I'd also be happy to look at whatever solution is decided upon and contribute to the ideation.
Thanks for your time and consideration! The dataset viewer really is fabulous when it works :) | null | https://api.github.com/repos/huggingface/datasets/issues/7123/timeline | null | null | egrace479 | 38,985,481 | MDQ6VXNlcjM4OTg1NDgx | https://avatars.githubusercontent.com/u/38985481?v=4 | https://api.github.com/users/egrace479 | https://github.com/egrace479 | https://api.github.com/users/egrace479/followers | https://api.github.com/users/egrace479/following{/other_user} | https://api.github.com/users/egrace479/gists{/gist_id} | https://api.github.com/users/egrace479/starred{/owner}{/repo} | https://api.github.com/users/egrace479/subscriptions | https://api.github.com/users/egrace479/orgs | https://api.github.com/users/egrace479/repos | https://api.github.com/users/egrace479/events{/privacy} | https://api.github.com/users/egrace479/received_events | User | public | false | 0 | 0 | 0 | 0 | 0 | 0 | 0 | https://api.github.com/repos/huggingface/datasets/issues/7123/reactions | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | false | [
"Note that you can already have one directory per subset just for the metadata, e.g.\r\n\r\n```\r\nconfigs:\r\n - config_name: subset0\r\n data_files:\r\n - subset0/metadata.csv\r\n - images/*.jpg\r\n - config_name: subset1\r\n data_files:\r\n - subset1/metadata.csv\r\n - images/*.jpg\r\... | |
https://api.github.com/repos/huggingface/datasets/issues/7122 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7122/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7122/comments | https://api.github.com/repos/huggingface/datasets/issues/7122/events | https://github.com/huggingface/datasets/issues/7122 | 2,482,491,258 | I_kwDODunzps6T9896 | 7,122 | [interleave_dataset] sample batches from a single source at a time | [
{
"id": 1935892871,
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement",
"name": "enhancement",
"color": "a2eeef",
"default": true,
"description": "New feature or request"
}
] | open | false | null | [] | null | 0 | 1,724 | 1,724 | null | NONE | null | null | ### Feature request
`interleave_dataset` and [RandomlyCyclingMultiSourcesExamplesIterable](https://github.com/huggingface/datasets/blob/3813ce846e52824b38e53895810682f0a496a2e3/src/datasets/iterable_dataset.py#L816) enable us to sample data examples from different sources. But can we also sample batches in a similar manner, so that each batch contains data from only a single source?
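A plain-Python sketch of the requested behavior (the `interleave_batches` helper here is hypothetical, not the proposed `RandomlyCyclingMultiSourcesBatchesIterable`):

```python
import random

def interleave_batches(sources, batch_size, seed=0):
    # Hypothetical sketch: pick one source at random per batch, so every
    # batch is homogeneous (all of its examples come from a single source).
    rng = random.Random(seed)
    iterators = {name: iter(src) for name, src in sources.items()}
    while iterators:
        name = rng.choice(sorted(iterators))
        batch = []
        for _ in range(batch_size):
            try:
                batch.append(next(iterators[name]))
            except StopIteration:
                del iterators[name]  # this source is exhausted
                break
        if batch:
            yield name, batch

batches = list(interleave_batches({"a": range(6), "b": range(100, 106)}, batch_size=3))
```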
### Motivation
Some recent research [[1](https://blog.salesforceairesearch.com/sfr-embedded-mistral/), [2](https://arxiv.org/pdf/2310.07554)] shows that source homogenous batching can be helpful for contrastive learning. Can we add a function called `RandomlyCyclingMultiSourcesBatchesIterable` to support this functionality?
### Your contribution
I can contribute a PR. But I wonder what the best way is to test its correctness and robustness. | null | https://api.github.com/repos/huggingface/datasets/issues/7122/timeline | null | null | memray | 4,197,249 | MDQ6VXNlcjQxOTcyNDk= | https://avatars.githubusercontent.com/u/4197249?v=4 | https://api.github.com/users/memray | https://github.com/memray | https://api.github.com/users/memray/followers | https://api.github.com/users/memray/following{/other_user} | https://api.github.com/users/memray/gists{/gist_id} | https://api.github.com/users/memray/starred{/owner}{/repo} | https://api.github.com/users/memray/subscriptions | https://api.github.com/users/memray/orgs | https://api.github.com/users/memray/repos | https://api.github.com/users/memray/events{/privacy} | https://api.github.com/users/memray/received_events | User | public | false | 0 | 0 | 0 | 0 | 0 | 0 | 0 | https://api.github.com/repos/huggingface/datasets/issues/7122/reactions | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | false | [] | |
https://api.github.com/repos/huggingface/datasets/issues/7117 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7117/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7117/comments | https://api.github.com/repos/huggingface/datasets/issues/7117/events | https://github.com/huggingface/datasets/issues/7117 | 2,476,555,659 | I_kwDODunzps6TnT2L | 7,117 | Audio dataset load everything in RAM and is very slow | [] | open | false | null | [] | null | 3 | 1,724 | 1,724 | null | NONE | null | null | Hello, I'm working with an audio dataset. I want to transcribe the audio that the dataset contains, and for that I use Whisper. My issue is that the dataset loads everything into RAM when I map it; obviously, when RAM usage is too high, the program crashes.
To fix this issue, I'm using `writer_batch_size`, which I set to 10, but in this case the mapping of the dataset is extremely slow.
To illustrate this on 50 examples: with `writer_batch_size` set to 10, it takes 123.24 seconds to process the dataset; without it, processing takes about ten seconds, but then the process remains blocked (I assume it is writing the dataset and therefore suffers from the same problem as with `writer_batch_size`)
### Steps to reproduce the bug
High RAM usage but fast (but actually slow when saving the dataset):
```py
from datasets import load_dataset
import time
ds = load_dataset("WaveGenAI/audios2", split="train[:50]")
# map the dataset
def transcribe_audio(row):
audio = row["audio"] # get the audio but do nothing with it
row["transcribed"] = True
return row
time1 = time.time()
ds = ds.map(
transcribe_audio
)
for row in ds:
pass # do nothing, just iterate to trigger the map function
print(f"Time taken: {time.time() - time1:.2f} seconds")
```
Low RAM usage but very, very slow:
```py
from datasets import load_dataset
import time
ds = load_dataset("WaveGenAI/audios2", split="train[:50]")
# map the dataset
def transcribe_audio(row):
audio = row["audio"] # get the audio but do nothing with it
row["transcribed"] = True
return row
time1 = time.time()
ds = ds.map(
transcribe_audio, writer_batch_size=10
) # set low writer_batch_size to avoid memory issues
for row in ds:
pass # do nothing, just iterate to trigger the map function
print(f"Time taken: {time.time() - time1:.2f} seconds")
```
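A possible lower-memory variant (an assumption based on the reply further down in this thread: the slowness comes from `map` re-encoding the decoded audio when the whole `row` is returned) is to return only the new column:

```python
def transcribe_audio(row):
    audio = row["audio"]  # read the audio without putting it back in the output
    # Returning only the new column lets `map` merge it into the row, so the
    # decoded audio is (presumably) not re-encoded by the writer.
    return {"transcribed": True}

transcribe_audio({"audio": b"\x00\x01"})
```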
### Expected behavior
I think the processing should be much faster; on only 50 audio examples, the mapping takes several minutes while nothing is done (just loading the audio).
### Environment info
- `datasets` version: 2.21.0
- Platform: Linux-6.10.5-arch1-1-x86_64-with-glibc2.40
- Python version: 3.10.4
- `huggingface_hub` version: 0.24.5
- PyArrow version: 17.0.0
- Pandas version: 1.5.3
- `fsspec` version: 2024.6.1
# Extra
The dataset has been generated using the `audiofolder` loader, so I don't think anything specific in my code is causing this problem.
```py
import argparse
from datasets import load_dataset
parser = argparse.ArgumentParser()
parser.add_argument("--folder", help="folder path", default="/media/works/test/")
args = parser.parse_args()
dataset = load_dataset("audiofolder", data_dir=args.folder)
# push the dataset to hub
dataset.push_to_hub("WaveGenAI/audios")
```
Also, it's the combination of `audio = row["audio"]` and `row["transcribed"] = True` which causes problems, `row["transcribed"] = True `alone does nothing and `audio = row["audio"]` alone sometimes causes problems, sometimes not. | null | https://api.github.com/repos/huggingface/datasets/issues/7117/timeline | null | null | Jourdelune | 64,205,064 | MDQ6VXNlcjY0MjA1MDY0 | https://avatars.githubusercontent.com/u/64205064?v=4 | https://api.github.com/users/Jourdelune | https://github.com/Jourdelune | https://api.github.com/users/Jourdelune/followers | https://api.github.com/users/Jourdelune/following{/other_user} | https://api.github.com/users/Jourdelune/gists{/gist_id} | https://api.github.com/users/Jourdelune/starred{/owner}{/repo} | https://api.github.com/users/Jourdelune/subscriptions | https://api.github.com/users/Jourdelune/orgs | https://api.github.com/users/Jourdelune/repos | https://api.github.com/users/Jourdelune/events{/privacy} | https://api.github.com/users/Jourdelune/received_events | User | public | false | 0 | 0 | 0 | 0 | 0 | 0 | 0 | https://api.github.com/repos/huggingface/datasets/issues/7117/reactions | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | false | [
"Hi ! I think the issue comes from the fact that you return `row` entirely, and therefore the dataset has to re-encode the audio data in `row`.\r\n\r\nCan you try this instead ?\r\n\r\n```python\r\n# map the dataset\r\ndef transcribe_audio(row):\r\n audio = row[\"audio\"] # get the audio but do nothing with it\... | |
https://api.github.com/repos/huggingface/datasets/issues/7116 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7116/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7116/comments | https://api.github.com/repos/huggingface/datasets/issues/7116/events | https://github.com/huggingface/datasets/issues/7116 | 2,475,522,721 | I_kwDODunzps6TjXqh | 7,116 | datasets cannot handle nested json if features is given. | [] | closed | false | null | [] | null | 3 | 1,724 | 1,725 | 1,725 | NONE | null | null | ### Describe the bug
I have a JSON file named temp.json.
```json
{"ref1": "ABC", "ref2": "DEF", "cuts":[{"cut1": 3, "cut2": 5}]}
```
I want to load it.
```python
ds = datasets.load_dataset('json', data_files="./temp.json", features=datasets.Features({
'ref1': datasets.Value('string'),
'ref2': datasets.Value('string'),
'cuts': datasets.Sequence({
"cut1": datasets.Value("uint16"),
"cut2": datasets.Value("uint16")
})
}))
```
The above code does not work. However, I can load it without specifying features.
```python
ds = datasets.load_dataset('json', data_files="./temp.json")
```
Is it possible to load integers as uint16 to save some memory?
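For the memory motivation, a stdlib-only illustration (this is not the `datasets` API; it only shows why uint16 helps):

```python
from array import array

values = list(range(1000))
as_uint16 = array("H", values)  # "H": unsigned 16-bit integers
as_int64 = array("q", values)   # "q": signed 64-bit integers
# On CPython each uint16 element typically takes 2 bytes vs 8 for int64,
# so a uint16 column needs roughly a quarter of the memory.
ratio = as_int64.itemsize // as_uint16.itemsize
```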
### Steps to reproduce the bug
As in the bug description.
### Expected behavior
The data are loaded and integers are uint16.
### Environment info
- `datasets` version: 2.21.0
- Platform: Linux-5.15.0-118-generic-x86_64-with-glibc2.35
- Python version: 3.11.9
- `huggingface_hub` version: 0.24.5
- PyArrow version: 17.0.0
- Pandas version: 2.2.2
- `fsspec` version: 2024.5.0 | null | https://api.github.com/repos/huggingface/datasets/issues/7116/timeline | null | completed | ljw20180420 | 38,550,511 | MDQ6VXNlcjM4NTUwNTEx | https://avatars.githubusercontent.com/u/38550511?v=4 | https://api.github.com/users/ljw20180420 | https://github.com/ljw20180420 | https://api.github.com/users/ljw20180420/followers | https://api.github.com/users/ljw20180420/following{/other_user} | https://api.github.com/users/ljw20180420/gists{/gist_id} | https://api.github.com/users/ljw20180420/starred{/owner}{/repo} | https://api.github.com/users/ljw20180420/subscriptions | https://api.github.com/users/ljw20180420/orgs | https://api.github.com/users/ljw20180420/repos | https://api.github.com/users/ljw20180420/events{/privacy} | https://api.github.com/users/ljw20180420/received_events | User | public | false | 0 | 0 | 0 | 0 | 0 | 0 | 0 | https://api.github.com/repos/huggingface/datasets/issues/7116/reactions | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | null | null | null | null | ljw20180420 | 38,550,511 | MDQ6VXNlcjM4NTUwNTEx | https://avatars.githubusercontent.com/u/38550511?v=4 | https://api.github.com/users/ljw20180420 | https://github.com/ljw20180420 | https://api.github.com/users/ljw20180420/followers | https://api.github.com/users/ljw20180420/following{/other_user} | https://api.github.com/users/ljw20180420/gists{/gist_id} | https://api.github.com/users/ljw20180420/starred{/owner}{/repo} | https://api.github.com/users/ljw20180420/subscriptions | https://api.github.com/users/ljw20180420/orgs | https://api.github.com/users/ljw20180420/repos | https://api.github.com/users/ljw20180420/events{/privacy} | https://api.github.com/users/ljw20180420/received_events | User | public | false | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 
null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | false | [
"Hi ! `Sequence` has a weird behavior for dictionaries (from tensorflow-datasets), use a regular list instead:\r\n\r\n```python\r\nds = datasets.load_dataset('json', data_files=\"./temp.json\", features=datasets.Features({\r\n 'ref1': datasets.Value('string'),\r\n 'ref2': datasets.Value('string'),\r\n 'cut... | ||
https://api.github.com/repos/huggingface/datasets/issues/7115 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7115/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7115/comments | https://api.github.com/repos/huggingface/datasets/issues/7115/events | https://github.com/huggingface/datasets/issues/7115 | 2,475,363,142 | I_kwDODunzps6TiwtG | 7,115 | module 'pyarrow.lib' has no attribute 'ListViewType' | [] | closed | false | null | [] | null | 1 | 1,724 | 1,725 | 1,725 | NONE | null | null | ### Describe the bug
Code:
`!pip uninstall -y pyarrow
!pip install --no-cache-dir pyarrow
!pip uninstall -y pyarrow
!pip install pyarrow --no-cache-dir
!pip install --upgrade datasets transformers pyarrow
!pip install pyarrow.parquet
! pip install pyarrow-core libparquet
!pip install pyarrow --no-cache-dir
!pip install pyarrow
!pip install transformers
!pip install --upgrade datasets
!pip install datasets
! pip install pyarrow
! pip install pyarrow.lib
! pip install pyarrow.parquet
!pip install transformers
import pyarrow as pa
print(pa.__version__)
from datasets import load_dataset
import pyarrow.parquet as pq
import pyarrow.lib as lib
import pandas as pd
from transformers import AutoTokenizer, AutoModelForSequenceClassification, Trainer, TrainingArguments
from datasets import load_dataset
from transformers import AutoTokenizer
! pip install pyarrow-core libparquet
# Load the dataset for content moderation
dataset = load_dataset("PolyAI/banking77") # Example dataset for customer support
# Initialize the tokenizer
tokenizer = AutoTokenizer.from_pretrained("facebook/opt-350m")
# Tokenize the dataset
def tokenize_function(examples):
return tokenizer(examples['text'], padding="max_length", truncation=True)
# Apply tokenization to the entire dataset
tokenized_datasets = dataset.map(tokenize_function, batched=True)
# Check the first few tokenized samples
print(tokenized_datasets['train'][0])
from transformers import AutoModelForSequenceClassification, Trainer, TrainingArguments
# Load the model
model = AutoModelForSequenceClassification.from_pretrained("facebook/opt-350m", num_labels=77)
# Define training arguments
training_args = TrainingArguments(
output_dir="./results",
per_device_train_batch_size=16,
per_device_eval_batch_size=16,
num_train_epochs=3,
eval_strategy="epoch", #
save_strategy="epoch",
logging_dir="./logs",
learning_rate=2e-5,
)
# Initialize the Trainer
trainer = Trainer(
model=model,
args=training_args,
train_dataset=tokenized_datasets["train"],
eval_dataset=tokenized_datasets["test"],
)
# Train the model
trainer.train()
# Evaluate the model
trainer.evaluate()
`
AttributeError Traceback (most recent call last)
[<ipython-input-23-60bed3143a93>](https://localhost:8080/#) in <cell line: 22>()
20
21
---> 22 from datasets import load_dataset
23 import pyarrow.parquet as pq
24 import pyarrow.lib as lib
5 frames
[/usr/local/lib/python3.10/dist-packages/datasets/__init__.py](https://localhost:8080/#) in <module>
15 __version__ = "2.21.0"
16
---> 17 from .arrow_dataset import Dataset
18 from .arrow_reader import ReadInstruction
19 from .builder import ArrowBasedBuilder, BeamBasedBuilder, BuilderConfig, DatasetBuilder, GeneratorBasedBuilder
[/usr/local/lib/python3.10/dist-packages/datasets/arrow_dataset.py](https://localhost:8080/#) in <module>
74
75 from . import config
---> 76 from .arrow_reader import ArrowReader
77 from .arrow_writer import ArrowWriter, OptimizedTypedSequence
78 from .data_files import sanitize_patterns
[/usr/local/lib/python3.10/dist-packages/datasets/arrow_reader.py](https://localhost:8080/#) in <module>
27
28 import pyarrow as pa
---> 29 import pyarrow.parquet as pq
30 from tqdm.contrib.concurrent import thread_map
31
[/usr/local/lib/python3.10/dist-packages/pyarrow/parquet/__init__.py](https://localhost:8080/#) in <module>
18 # flake8: noqa
19
---> 20 from .core import *
[/usr/local/lib/python3.10/dist-packages/pyarrow/parquet/core.py](https://localhost:8080/#) in <module>
31
32 try:
---> 33 import pyarrow._parquet as _parquet
34 except ImportError as exc:
35 raise ImportError(
/usr/local/lib/python3.10/dist-packages/pyarrow/_parquet.pyx in init pyarrow._parquet()
AttributeError: module 'pyarrow.lib' has no attribute 'ListViewType'
### Steps to reproduce the bug
https://colab.research.google.com/drive/1HNbsg3tHxUJOHVtYIaRnNGY4T2PnLn4a?usp=sharing
### Expected behavior
Looks like there is an issue with datasets and pyarrow
### Environment info
google colab
python
huggingface
Found existing installation: pyarrow 17.0.0
Uninstalling pyarrow-17.0.0:
Successfully uninstalled pyarrow-17.0.0
Collecting pyarrow
Downloading pyarrow-17.0.0-cp310-cp310-manylinux_2_28_x86_64.whl.metadata (3.3 kB)
Requirement already satisfied: numpy>=1.16.6 in /usr/local/lib/python3.10/dist-packages (from pyarrow) (1.26.4)
Downloading pyarrow-17.0.0-cp310-cp310-manylinux_2_28_x86_64.whl (39.9 MB)
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 39.9/39.9 MB 188.9 MB/s eta 0:00:00
Installing collected packages: pyarrow
ERROR: pip's dependency resolver does not currently take into account all the packages that are installed. This behaviour is the source of the following dependency conflicts.
cudf-cu12 24.4.1 requires pyarrow<15.0.0a0,>=14.0.1, but you have pyarrow 17.0.0 which is incompatible.
ibis-framework 8.0.0 requires pyarrow<16,>=2, but you have pyarrow 17.0.0 which is incompatible.
Successfully installed pyarrow-17.0.0
WARNING: The following packages were previously imported in this runtime:
[pyarrow]
You must restart the runtime in order to use newly installed versions. | null | https://api.github.com/repos/huggingface/datasets/issues/7115/timeline | null | completed | neurafusionai | 175,128,880 | U_kgDOCnBBMA | https://avatars.githubusercontent.com/u/175128880?v=4 | https://api.github.com/users/neurafusionai | https://github.com/neurafusionai | https://api.github.com/users/neurafusionai/followers | https://api.github.com/users/neurafusionai/following{/other_user} | https://api.github.com/users/neurafusionai/gists{/gist_id} | https://api.github.com/users/neurafusionai/starred{/owner}{/repo} | https://api.github.com/users/neurafusionai/subscriptions | https://api.github.com/users/neurafusionai/orgs | https://api.github.com/users/neurafusionai/repos | https://api.github.com/users/neurafusionai/events{/privacy} | https://api.github.com/users/neurafusionai/received_events | User | public | false | 0 | 0 | 0 | 0 | 0 | 0 | 0 | https://api.github.com/repos/huggingface/datasets/issues/7115/reactions | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | null | null | null | null | albertvillanova | 8,515,462 | MDQ6VXNlcjg1MTU0NjI= | https://avatars.githubusercontent.com/u/8515462?v=4 | https://api.github.com/users/albertvillanova | https://github.com/albertvillanova | https://api.github.com/users/albertvillanova/followers | https://api.github.com/users/albertvillanova/following{/other_user} | https://api.github.com/users/albertvillanova/gists{/gist_id} | https://api.github.com/users/albertvillanova/starred{/owner}{/repo} | https://api.github.com/users/albertvillanova/subscriptions | https://api.github.com/users/albertvillanova/orgs | https://api.github.com/users/albertvillanova/repos | https://api.github.com/users/albertvillanova/events{/privacy} | https://api.github.com/users/albertvillanova/received_events | User | public | false | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 
null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | false | [
"https://github.com/neurafusionai/Hugging_Face/blob/main/meta_opt_350m_customer_support_lora_v1.ipynb\r\n\r\ncouldnt train because of GPU\r\nI didnt pip install datasets -U\r\nbut looks like restarting worked"
] | ||
https://api.github.com/repos/huggingface/datasets/issues/7113 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7113/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7113/comments | https://api.github.com/repos/huggingface/datasets/issues/7113/events | https://github.com/huggingface/datasets/issues/7113 | 2,475,029,640 | I_kwDODunzps6ThfSI | 7,113 | Stream dataset does not iterate if the batch size is larger than the dataset size (related to drop_last_batch) | [] | closed | false | null | [] | null | 1 | 1,724 | 1,724 | 1,724 | NONE | null | null | ### Describe the bug
Hi there,
I use streaming and interleaving to combine multiple datasets saved in JSONL files. The size of a dataset can vary (from ~100 to ~100k examples). I use `dataset.map()` with a big batch size to reduce the IO cost. It was working fine with datasets-2.16.1, but this problem shows up after I upgraded to datasets-2.19.2. With 2.21.0 the problem remains.
Please see the code below to reproduce the problem.
The dataset can iterate correctly if we set either `streaming=False` or `drop_last_batch=False`.
I have to use `drop_last_batch=True` since it's for distributed training.
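The drop-last semantics can be reproduced with a plain-Python sketch (illustration only, not the `datasets` implementation): when the whole dataset is smaller than one batch, dropping incomplete batches leaves nothing to yield.

```python
def batched(items, batch_size, drop_last=False):
    # Group `items` into fixed-size batches; with drop_last=True, a final
    # incomplete batch is discarded, so a 100-example dataset with
    # batch_size=101 produces no batches at all.
    batch = []
    for item in items:
        batch.append(item)
        if len(batch) == batch_size:
            yield batch
            batch = []
    if batch and not drop_last:
        yield batch

list(batched(range(100), 101, drop_last=True))  # -> []
```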
### Steps to reproduce the bug
```python
# datasets==2.21.0
import datasets
def data_prepare(examples):
print(examples["sentence1"][0])
return examples
batch_size = 101
# the size of the dataset is 100
# the dataset iterates correctly if we set either streaming=False or drop_last_batch=False
dataset = datasets.load_dataset("mteb/biosses-sts", split="test", streaming=True)
dataset = dataset.map(lambda x: data_prepare(x),
drop_last_batch=True,
batched=True, batch_size=batch_size)
for ex in dataset:
print(ex)
pass
```
### Expected behavior
The dataset iterates regardless of the batch size.
### Environment info
- `datasets` version: 2.21.0
- Platform: Linux-6.1.58+-x86_64-with-glibc2.35
- Python version: 3.10.14
- `huggingface_hub` version: 0.24.5
- PyArrow version: 17.0.0
- Pandas version: 2.2.2
- `fsspec` version: 2024.2.0
| null | https://api.github.com/repos/huggingface/datasets/issues/7113/timeline | null | completed | memray | 4,197,249 | MDQ6VXNlcjQxOTcyNDk= | https://avatars.githubusercontent.com/u/4197249?v=4 | https://api.github.com/users/memray | https://github.com/memray | https://api.github.com/users/memray/followers | https://api.github.com/users/memray/following{/other_user} | https://api.github.com/users/memray/gists{/gist_id} | https://api.github.com/users/memray/starred{/owner}{/repo} | https://api.github.com/users/memray/subscriptions | https://api.github.com/users/memray/orgs | https://api.github.com/users/memray/repos | https://api.github.com/users/memray/events{/privacy} | https://api.github.com/users/memray/received_events | User | public | false | 0 | 0 | 0 | 0 | 0 | 0 | 0 | https://api.github.com/repos/huggingface/datasets/issues/7113/reactions | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | null | null | null | null | albertvillanova | 8,515,462 | MDQ6VXNlcjg1MTU0NjI= | https://avatars.githubusercontent.com/u/8515462?v=4 | https://api.github.com/users/albertvillanova | https://github.com/albertvillanova | https://api.github.com/users/albertvillanova/followers | https://api.github.com/users/albertvillanova/following{/other_user} | https://api.github.com/users/albertvillanova/gists{/gist_id} | https://api.github.com/users/albertvillanova/starred{/owner}{/repo} | https://api.github.com/users/albertvillanova/subscriptions | https://api.github.com/users/albertvillanova/orgs | https://api.github.com/users/albertvillanova/repos | https://api.github.com/users/albertvillanova/events{/privacy} | https://api.github.com/users/albertvillanova/received_events | User | public | false | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 
null | null | null | null | null | null | null | null | null | null | null | null | false | [
"That's expected behavior, it's also the same in `torch`:\r\n\r\n```python\r\n>>> list(DataLoader(list(range(5)), batch_size=10, drop_last=True))\r\n[]\r\n```"
] | ||
https://api.github.com/repos/huggingface/datasets/issues/7112 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7112/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7112/comments | https://api.github.com/repos/huggingface/datasets/issues/7112/events | https://github.com/huggingface/datasets/issues/7112 | 2,475,004,644 | I_kwDODunzps6ThZLk | 7,112 | cudf-cu12 24.4.1, ibis-framework 8.0.0 requires pyarrow<15.0.0a0,>=14.0.1,pyarrow<16,>=2 and datasets 2.21.0 requires pyarrow>=15.0.0 | [] | open | false | null | [] | null | 2 | 1,724 | 1,726 | null | NONE | null | null | ### Describe the bug
!pip install accelerate>=0.16.0 torchvision transformers>=4.25.1 datasets>=2.19.1 ftfy tensorboard Jinja2 peft==0.7.0
ERROR: pip's dependency resolver does not currently take into account all the packages that are installed. This behaviour is the source of the following dependency conflicts.
cudf-cu12 24.4.1 requires pyarrow<15.0.0a0,>=14.0.1, but you have pyarrow 17.0.0 which is incompatible.
ibis-framework 8.0.0 requires pyarrow<16,>=2, but you have pyarrow 17.0.0 which is incompatible.
to solve above error
!pip install pyarrow==14.0.1
ERROR: pip's dependency resolver does not currently take into account all the packages that are installed. This behaviour is the source of the following dependency conflicts.
datasets 2.21.0 requires pyarrow>=15.0.0, but you have pyarrow 14.0.1 which is incompatible.
### Steps to reproduce the bug
!pip install datasets>=2.19.1
### Expected behavior
Run without dependency errors.
### Environment info
Diffusers version: 0.31.0.dev0
Platform: Linux-6.1.85+-x86_64-with-glibc2.35
Running on Google Colab?: Yes
Python version: 3.10.12
PyTorch version (GPU?): 2.3.1+cu121 (True)
Flax version (CPU?/GPU?/TPU?): 0.8.4 (gpu)
Jax version: 0.4.26
JaxLib version: 0.4.26
Huggingface_hub version: 0.23.5
Transformers version: 4.42.4
Accelerate version: 0.32.1
PEFT version: 0.7.0
Bitsandbytes version: not installed
Safetensors version: 0.4.4
xFormers version: not installed
Accelerator: Tesla T4, 15360 MiB
Using GPU in script?:
Using distributed or parallel set-up in script?: | null | https://api.github.com/repos/huggingface/datasets/issues/7112/timeline | null | null | SoumyaMB10 | 174,590,283 | U_kgDOCmgJSw | https://avatars.githubusercontent.com/u/174590283?v=4 | https://api.github.com/users/SoumyaMB10 | https://github.com/SoumyaMB10 | https://api.github.com/users/SoumyaMB10/followers | https://api.github.com/users/SoumyaMB10/following{/other_user} | https://api.github.com/users/SoumyaMB10/gists{/gist_id} | https://api.github.com/users/SoumyaMB10/starred{/owner}{/repo} | https://api.github.com/users/SoumyaMB10/subscriptions | https://api.github.com/users/SoumyaMB10/orgs | https://api.github.com/users/SoumyaMB10/repos | https://api.github.com/users/SoumyaMB10/events{/privacy} | https://api.github.com/users/SoumyaMB10/received_events | User | public | false | 0 | 0 | 0 | 0 | 0 | 0 | 0 | https://api.github.com/repos/huggingface/datasets/issues/7112/reactions | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | false | [
"@sayakpaul please advice ",
"Hits the same dependency conflict"
] | |
https://api.github.com/repos/huggingface/datasets/issues/7111 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7111/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7111/comments | https://api.github.com/repos/huggingface/datasets/issues/7111/events | https://github.com/huggingface/datasets/issues/7111 | 2,474,915,845 | I_kwDODunzps6ThDgF | 7,111 | CI is broken for numpy-2: Failed to fetch wheel: llvmlite==0.34.0 | [] | closed | false | null | [
{
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_... | null | 2 | 1,724 | 1,724 | 1,724 | MEMBER | null | null | Ci is broken with error `Failed to fetch wheel: llvmlite==0.34.0`: https://github.com/huggingface/datasets/actions/runs/10466825281/job/28984414269
```
Run uv pip install --system "datasets[tests_numpy2] @ ."
Resolved 150 packages in 4.42s
error: Failed to prepare distributions
Caused by: Failed to fetch wheel: llvmlite==0.34.0
Caused by: Build backend failed to build wheel through `build_wheel()` with exit status: 1
--- stdout:
running bdist_wheel
/home/runner/.cache/uv/builds-v0/.tmpcyKh8S/bin/python /home/runner/.cache/uv/built-wheels-v3/pypi/llvmlite/0.34.0/wrk1bNwq1gleSiznvrSEZ/llvmlite-0.34.0.tar.gz/ffi/build.py
LLVM version...
--- stderr:
Traceback (most recent call last):
File "/home/runner/.cache/uv/built-wheels-v3/pypi/llvmlite/0.34.0/wrk1bNwq1gleSiznvrSEZ/llvmlite-0.34.0.tar.gz/ffi/build.py", line 105, in main_posix
out = subprocess.check_output([llvm_config, '--version'])
File "/opt/hostedtoolcache/Python/3.10.14/x64/lib/python3.10/subprocess.py", line 421, in check_output
return run(*popenargs, stdout=PIPE, timeout=timeout, check=True,
File "/opt/hostedtoolcache/Python/3.10.14/x64/lib/python3.10/subprocess.py", line 503, in run
with Popen(*popenargs, **kwargs) as process:
File "/opt/hostedtoolcache/Python/3.10.14/x64/lib/python3.10/subprocess.py", line 971, in __init__
self._execute_child(args, executable, preexec_fn, close_fds,
File "/opt/hostedtoolcache/Python/3.10.14/x64/lib/python3.10/subprocess.py", line 1863, in _execute_child
raise child_exception_type(errno_num, err_msg, err_filename)
FileNotFoundError: [Errno 2] No such file or directory: 'llvm-config'
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/home/runner/.cache/uv/built-wheels-v3/pypi/llvmlite/0.34.0/wrk1bNwq1gleSiznvrSEZ/llvmlite-0.34.0.tar.gz/ffi/build.py", line 191, in <module>
main()
File "/home/runner/.cache/uv/built-wheels-v3/pypi/llvmlite/0.34.0/wrk1bNwq1gleSiznvrSEZ/llvmlite-0.34.0.tar.gz/ffi/build.py", line 181, in main
main_posix('linux', '.so')
File "/home/runner/.cache/uv/built-wheels-v3/pypi/llvmlite/0.34.0/wrk1bNwq1gleSiznvrSEZ/llvmlite-0.34.0.tar.gz/ffi/build.py", line 107, in main_posix
raise RuntimeError("%s failed executing, please point LLVM_CONFIG "
RuntimeError: llvm-config failed executing, please point LLVM_CONFIG to the path for llvm-config
error: command '/home/runner/.cache/uv/builds-v0/.tmpcyKh8S/bin/python' failed with exit code 1
```
Issue closed as completed. Author: albertvillanova (MEMBER). Comments:
- "Note that the CI before was using: llvmlite 0.43.0, numba 0.60.0. Now it tries to use: llvmlite 0.34.0, numba 0.51.2"
- "The issue is because numba-0.60.0 pins numpy<2.1 and `uv` tries to install latest numpy-2.1.0 with an old numba-0.51.0 version (and llvmlite-0.34.0). See discussion ..."
https://github.com/huggingface/datasets/issues/7109 | 7,109 | ConnectionError for gated datasets and unauthenticated users | closed | assignee: albertvillanova | 0 comments | MEMBER

Since the Hub returns dataset info for gated datasets and unauthenticated users, there is dead code: https://github.com/huggingface/datasets/blob/98fdc9e78e6d057ca66e58a37f49d6618aab8130/src/datasets/load.py#L1846-L1852
We should remove the dead code and properly handle this case: currently we are raising a `ConnectionError` instead of a `DatasetNotFoundError` (as before).
See:
- https://github.com/huggingface/dataset-viewer/issues/3025
- https://github.com/huggingface/huggingface_hub/issues/2457
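A hedged sketch of the handling the report asks for: translate the unauthenticated-gated case into a `DatasetNotFoundError` instead of letting it fall through to a generic `ConnectionError`. The class and logic below are stand-ins for illustration, not the real `datasets` internals:

```python
class DatasetNotFoundError(Exception):
    """Stand-in for the error load_dataset used to raise (name assumed)."""

def raise_for_gated_dataset(status_code, gated, token):
    # Intended behavior per the report: an unauthenticated request for a
    # gated dataset should surface as "dataset not found / not accessible",
    # not as a connection problem.
    if gated and token is None:
        raise DatasetNotFoundError(
            "This is a gated dataset; pass a token with access to it."
        )
    if status_code == 401:
        raise DatasetNotFoundError("Authentication failed for this dataset.")
```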
https://github.com/huggingface/datasets/issues/7108 | 7,108 | website broken: Create a new dataset repository, doesn't create a new repo in Firefox | closed | 4 comments | NONE

### Describe the bug
This issue is also reported here:
https://discuss.huggingface.co/t/create-a-new-dataset-repository-broken-page/102644
This page is broken.
https://huggingface.co/new-dataset
I fill in the form with my text, and click `Create Dataset`.

Then the form gets wiped. And no repo got created. No error message visible in the developer console.

# Idea for improvement
For better UX, if the repo cannot be created, then show an error message, that something went wrong.
# Work around, that works for me
```python
from huggingface_hub import HfApi

repo_id = 'simon-arc-solve-fractal-v3'
api = HfApi()
username = api.whoami()['name']  # fails early if the stored token is invalid
repo_url = api.create_repo(repo_id=repo_id, exist_ok=True, private=True, repo_type="dataset")
print(repo_url)
```
### Steps to reproduce the bug
Go https://huggingface.co/new-dataset
Fill in the form.
Click `Create dataset`.
Now the form is cleared. And the page doesn't jump anywhere.
### Expected behavior
The moment the user clicks `Create dataset`, the repo gets created and the page jumps to the created repo.
### Environment info
Firefox 128.0.3 (64-bit)
macOS Sonoma 14.5
Closed as completed. Author: neoneye. Comments:
- "I don't reproduce, I was able to create a new repo: https://huggingface.co/datasets/severo/reproduce-datasets-issues-7108. Can you confirm it's still broken?"
- "I have just tried again. Firefox: The `Create dataset` doesn't work. It has worked in the past. It's my preferred browser. Chrome: The `Cr..."
https://github.com/huggingface/datasets/issues/7107 | 7,107 | load_dataset broken in 2.21.0 | closed | 4 comments | NONE

### Describe the bug
`eval_set = datasets.load_dataset("tatsu-lab/alpaca_eval", "alpaca_eval_gpt4_baseline", trust_remote_code=True)`
used to work till 2.20.0 but doesn't work in 2.21.0
In 2.20.0:

in 2.21.0:

### Steps to reproduce the bug
1. Spin up a new google collab
2. `pip install datasets==2.21.0`
3. `import datasets`
4. `eval_set = datasets.load_dataset("tatsu-lab/alpaca_eval", "alpaca_eval_gpt4_baseline", trust_remote_code=True)`
5. Will throw an error.
### Expected behavior
Try steps 1-5 again but replace datasets version with 2.20.0, it will work
### Environment info
- `datasets` version: 2.21.0
- Platform: Linux-6.1.85+-x86_64-with-glibc2.35
- Python version: 3.10.12
- `huggingface_hub` version: 0.23.5
- PyArrow version: 17.0.0
- Pandas version: 2.1.4
- `fsspec` version: 2024.5.0
Closed as completed. Author: anjor. Comments:
- "There seems to be a PR related to the load_dataset path that went into 2.21.0 -- https://github.com/huggingface/datasets/pull/6862/files. Taking a look at it now"
- "+1. Downgrading to 2.20.0 fixed my issue, hopefully helpful for others."
- "I tried adding a simple test to `test_load.py` with the alp..."
https://github.com/huggingface/datasets/issues/7102 | 7,102 | Slow iteration speeds when using IterableDataset.shuffle with load_dataset(data_files=..., streaming=True) | open | 2 comments | NONE

### Describe the bug
When I load a dataset from a number of arrow files, as in:
```
random_dataset = load_dataset(
"arrow",
data_files={split: shard_filepaths},
streaming=True,
split=split,
)
```
I'm able to get fast iteration speeds when iterating over the dataset without shuffling.
When I shuffle the dataset, the iteration speed is reduced by ~1000x.
It's very possible the way I'm loading dataset shards is not appropriate; if so please advise!
Thanks for the help
### Steps to reproduce the bug
Here's full code to reproduce the issue:
- Generate a random dataset
- Create shards of data independently using Dataset.save_to_disk()
- The below will generate 16 shards (arrow files), of 512 examples each
```
import time
from pathlib import Path
from multiprocessing import Pool, cpu_count
import torch
from datasets import Dataset, load_dataset
split = "train"
split_save_dir = "/tmp/random_split"
def generate_random_example():
return {
'inputs': torch.randn(128).tolist(),
'indices': torch.randint(0, 10000, (2, 20000)).tolist(),
'values': torch.randn(20000).tolist(),
}
def generate_shard_dataset(examples_per_shard: int = 512):
dataset_dict = {
'inputs': [],
'indices': [],
'values': []
}
for _ in range(examples_per_shard):
example = generate_random_example()
dataset_dict['inputs'].append(example['inputs'])
dataset_dict['indices'].append(example['indices'])
dataset_dict['values'].append(example['values'])
return Dataset.from_dict(dataset_dict)
def save_shard(shard_idx, save_dir, examples_per_shard):
shard_dataset = generate_shard_dataset(examples_per_shard)
shard_write_path = Path(save_dir) / f"shard_{shard_idx}"
shard_dataset.save_to_disk(shard_write_path)
return str(Path(shard_write_path) / "data-00000-of-00001.arrow")
def generate_split_shards(save_dir, num_shards: int = 16, examples_per_shard: int = 512):
with Pool(cpu_count()) as pool:
args = [(m, save_dir, examples_per_shard) for m in range(num_shards)]
shard_filepaths = pool.starmap(save_shard, args)
return shard_filepaths
shard_filepaths = generate_split_shards(split_save_dir)
```
Load the dataset as IterableDataset:
```
random_dataset = load_dataset(
"arrow",
data_files={split: shard_filepaths},
streaming=True,
split=split,
)
random_dataset = random_dataset.with_format("numpy")
```
Observe the iterations/second when iterating over the dataset directly, and applying shuffling before iterating:
Without shuffling, this gives ~1500 iterations/second
```
start_time = time.time()
for count, item in enumerate(random_dataset):
if count > 0 and count % 100 == 0:
elapsed_time = time.time() - start_time
iterations_per_second = count / elapsed_time
print(f"Processed {count} items at an average of {iterations_per_second:.2f} iterations/second")
```
```
Processed 100 items at an average of 705.74 iterations/second
Processed 200 items at an average of 1169.68 iterations/second
Processed 300 items at an average of 1497.97 iterations/second
Processed 400 items at an average of 1739.62 iterations/second
Processed 500 items at an average of 1931.11 iterations/second
```
When shuffling, this gives ~3 iterations/second:
```
random_dataset = random_dataset.shuffle(buffer_size=100,seed=42)
start_time = time.time()
for count, item in enumerate(random_dataset):
if count > 0 and count % 100 == 0:
elapsed_time = time.time() - start_time
iterations_per_second = count / elapsed_time
print(f"Processed {count} items at an average of {iterations_per_second:.2f} iterations/second")
```
```
Processed 100 items at an average of 3.75 iterations/second
Processed 200 items at an average of 3.93 iterations/second
```
### Expected behavior
Iterations per second should be barely affected by shuffling, especially with a small buffer size
### Environment info
Datasets version: 2.21.0
Python 3.10
Ubuntu 22.04

Open. Author: lajd. Comments:
- "Hi @lajd, I was skeptical about how we are saving the shards each as their own dataset (arrow file) in the script above, and so I updated the script to try out saving the shards in a few different file formats. From the experiments I ran, I saw binary format show significantly the best performance, with arrow a..."
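For context on where the time could go: `IterableDataset.shuffle` combines a shard-order shuffle with a fixed-size example buffer. The buffer algorithm itself is cheap, as this stdlib sketch shows — it is an approximation of the buffered-shuffle idea, not the library's actual code — so the slowdown reported above is unlikely to come from the buffering logic alone:

```python
import random

def buffered_shuffle(iterable, buffer_size, seed=42):
    # Keep up to `buffer_size` examples; for each new example, yield a
    # randomly chosen buffered one and store the new example in its slot.
    # Every input item is yielded exactly once.
    rng = random.Random(seed)
    buffer = []
    for item in iterable:
        if len(buffer) < buffer_size:
            buffer.append(item)
            continue
        idx = rng.randrange(buffer_size)
        buffer[idx], item = item, buffer[idx]
        yield item
    rng.shuffle(buffer)
    yield from buffer
```

Since the per-example bookkeeping is a list index swap, the heavy cost with examples this large (20k floats each) is more plausibly the copying needed to fill the buffer plus non-sequential reads across shuffled shards.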
https://github.com/huggingface/datasets/issues/7101 | 7,101 | `load_dataset` from Hub with `name` to specify `config` using incorrect builder type when multiple data formats are present | open | 1 comment | NONE

Following [documentation](https://huggingface.co/docs/datasets/repository_structure#define-your-splits-and-subsets-in-yaml) I had defined different configs for [`Dataception`](https://huggingface.co/datasets/bigdata-pw/Dataception), a dataset of datasets:
```yaml
configs:
- config_name: dataception
data_files:
- path: dataception.parquet
split: train
default: true
- config_name: dataset_5423
data_files:
- path: datasets/5423.tar
split: train
...
- config_name: dataset_721736
data_files:
- path: datasets/721736.tar
split: train
```
The intent was for metadata to be browsable via Dataset Viewer, in addition to each individual dataset, and to allow datasets to be loaded by specifying the config/name to `load_dataset`.
While testing `load_dataset` I encountered the following error:
```python
>>> dataset = load_dataset("bigdata-pw/Dataception", "dataset_7691")
Downloading readme: 100%|████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 467k/467k [00:00<00:00, 1.99MB/s]
Downloading data: 100%|████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 71.0M/71.0M [00:02<00:00, 26.8MB/s]
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "datasets\load.py", line 2145, in load_dataset
builder_instance.download_and_prepare(
File "datasets\builder.py", line 1027, in download_and_prepare
self._download_and_prepare(
File "datasets\builder.py", line 1100, in _download_and_prepare
split_generators = self._split_generators(dl_manager, **split_generators_kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "datasets\packaged_modules\parquet\parquet.py", line 58, in _split_generators
self.info.features = datasets.Features.from_arrow_schema(pq.read_schema(f))
^^^^^^^^^^^^^^^^^
File "pyarrow\parquet\core.py", line 2325, in read_schema
file = ParquetFile(
^^^^^^^^^^^^
File "pyarrow\parquet\core.py", line 318, in __init__
self.reader.open(
File "pyarrow\_parquet.pyx", line 1470, in pyarrow._parquet.ParquetReader.open
File "pyarrow\error.pxi", line 91, in pyarrow.lib.check_status
pyarrow.lib.ArrowInvalid: Parquet magic bytes not found in footer. Either the file is corrupted or this is not a parquet file.
```
The correct file is downloaded, however the incorrect builder type is detected; `parquet` due to other content of the repository. It would appear that the config needs to be taken into account.
Note that I have removed the additional configs from the repository because of this issue and there is a limit of 3000 configs anyway so the Dataset Viewer doesn't work as I intended. I'll add them back in if it assists with testing.
Author: hlky. Comments:
- "Having looked into this further it seems the core of the issue is with two different formats in the same repo. When the `parquet` config is first, the `WebDataset`s are loaded as `parquet`; if the `WebDataset` configs are first, the `parquet` is loaded as `WebDataset`. A workaround in my case would b..."
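A hypothetical sketch of the per-config resolution the report implies: choose the builder from each config's own `data_files` rather than inferring a single format for the whole repository. The extension-to-builder mapping and helper below are illustrative, not the `datasets` API:

```python
from pathlib import Path

# Illustrative mapping; the real module resolution in `datasets` is richer.
EXTENSION_TO_BUILDER = {".parquet": "parquet", ".tar": "webdataset"}

def builder_for_config(data_files):
    # Resolve the builder from the files listed under *this* config, so a
    # parquet metadata config and tar dataset configs can coexist in one repo.
    suffixes = {Path(f).suffix for f in data_files}
    if len(suffixes) != 1:
        raise ValueError(f"mixed data formats in one config: {sorted(suffixes)}")
    return EXTENSION_TO_BUILDER[suffixes.pop()]
```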
https://github.com/huggingface/datasets/issues/7100 | 7,100 | IterableDataset: cannot resolve features from list of numpy arrays | open | 1 comment | NONE

### Describe the bug
when resolve features of `IterableDataset`, got `pyarrow.lib.ArrowInvalid: Can only convert 1-dimensional array values` error.
```
Traceback (most recent call last):
File "test.py", line 6
iter_ds = iter_ds._resolve_features()
File "lib/python3.10/site-packages/datasets/iterable_dataset.py", line 2876, in _resolve_features
features = _infer_features_from_batch(self.with_format(None)._head())
File "lib/python3.10/site-packages/datasets/iterable_dataset.py", line 63, in _infer_features_from_batch
pa_table = pa.Table.from_pydict(batch)
File "pyarrow/table.pxi", line 1813, in pyarrow.lib._Tabular.from_pydict
File "pyarrow/table.pxi", line 5339, in pyarrow.lib._from_pydict
File "pyarrow/array.pxi", line 374, in pyarrow.lib.asarray
File "pyarrow/array.pxi", line 344, in pyarrow.lib.array
File "pyarrow/array.pxi", line 42, in pyarrow.lib._sequence_to_array
File "pyarrow/error.pxi", line 154, in pyarrow.lib.pyarrow_internal_check_status
File "pyarrow/error.pxi", line 91, in pyarrow.lib.check_status
pyarrow.lib.ArrowInvalid: Can only convert 1-dimensional array values
```
### Steps to reproduce the bug
```python
from datasets import Dataset
import numpy as np
# create list of numpy
iter_ds = Dataset.from_dict({'a': [[[1, 2, 3], [1, 2, 3]]]}).to_iterable_dataset().map(lambda x: {'a': [np.array(x['a'])]})
iter_ds = iter_ds._resolve_features() # errors here
```
### Expected behavior
features can be successfully resolved
### Environment info
- `datasets` version: 2.21.0
- Platform: Linux-5.15.0-94-generic-x86_64-with-glibc2.35
- Python version: 3.10.13
- `huggingface_hub` version: 0.23.4
- PyArrow version: 15.0.0
- Pandas version: 2.2.0
- `fsspec` version: 2023.10.0

Author: VeryLazyBoy. Comments:
- "Assign this issue to me under Hacktoberfest with hacktoberfest label inserted on the issue"
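Until this is fixed, one workaround sketch is to convert multi-dimensional arrays back to nested Python lists inside the `map` function, so feature inference only ever sees values pyarrow can convert. The helper below is illustrative; `tolist` stands in for `numpy.ndarray.tolist`:

```python
def make_inferable(batch):
    # pa.Table.from_pydict chokes on lists of >1-D numpy arrays, so turn
    # any array-like value (anything exposing .tolist) into nested lists
    # before feature resolution runs.
    return {
        key: [value.tolist() if hasattr(value, "tolist") else value
              for value in values]
        for key, values in batch.items()
    }
```

With the reproduction above, this would mean mapping with `lambda x: make_inferable({'a': [np.array(x['a'])]})` — or simply not wrapping the column in numpy arrays before `_resolve_features`.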
https://github.com/huggingface/datasets/issues/7097 | 7,097 | Some of DownloadConfig's properties are always being overridden in load.py | open | 0 comments | NONE

### Describe the bug
The `extract_compressed_file` and `force_extract` properties of DownloadConfig are always being set to True in the function `dataset_module_factory` in the `load.py` file. This behavior is very annoying because data extracted will just be ignored the next time the dataset is loaded.
See this image below:

### Steps to reproduce the bug
1. Have a local dataset that contains archived files (zip, tar.gz, etc)
2. Build a dataset loading script to download and extract these files
3. Run the load_dataset function with a DownloadConfig that specifically set `force_extract` to False
4. The extraction process will start no matter if the archives was extracted previously
### Expected behavior
The extraction process should not run when the archives were previously extracted and `force_extract` is set to False.
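The reported behavior can be illustrated with a stdlib-only stand-in (the real `DownloadConfig` and `dataset_module_factory` live in `datasets`; this sketch only mimics the override, it is not the library code):

```python
from dataclasses import dataclass


@dataclass
class DownloadConfig:  # simplified stand-in for datasets.DownloadConfig
    extract_compressed_file: bool = False
    force_extract: bool = False


def dataset_module_factory(download_config: DownloadConfig) -> DownloadConfig:
    # Mirrors the reported behavior: the caller's explicit choice is overwritten.
    download_config.extract_compressed_file = True
    download_config.force_extract = True
    return download_config


user_config = DownloadConfig(force_extract=False)
dataset_module_factory(user_config)
print(user_config.force_extract)  # prints True: the user's False was discarded
```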
### Environment info
datasets==2.20.0
python3.9 | null | https://api.github.com/repos/huggingface/datasets/issues/7097/timeline | null | null | ductai199x | 29,772,899 | MDQ6VXNlcjI5NzcyODk5 | https://avatars.githubusercontent.com/u/29772899?v=4 | https://api.github.com/users/ductai199x | https://github.com/ductai199x | https://api.github.com/users/ductai199x/followers | https://api.github.com/users/ductai199x/following{/other_user} | https://api.github.com/users/ductai199x/gists{/gist_id} | https://api.github.com/users/ductai199x/starred{/owner}{/repo} | https://api.github.com/users/ductai199x/subscriptions | https://api.github.com/users/ductai199x/orgs | https://api.github.com/users/ductai199x/repos | https://api.github.com/users/ductai199x/events{/privacy} | https://api.github.com/users/ductai199x/received_events | User | public | false | 0 | 0 | 0 | 0 | 0 | 0 | 0 | https://api.github.com/repos/huggingface/datasets/issues/7097/reactions | 1 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | false | [] | |
https://api.github.com/repos/huggingface/datasets/issues/7093 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7093/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7093/comments | https://api.github.com/repos/huggingface/datasets/issues/7093/events | https://github.com/huggingface/datasets/issues/7093 | 2,454,413,074 | I_kwDODunzps6SS18S | 7,093 | Add Arabic Docs to datasets | [
{
"id": 1935892871,
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement",
"name": "enhancement",
"color": "a2eeef",
"default": true,
"description": "New feature or request"
}
] | open | false | null | [] | null | 0 | 1,723 | 1,723 | null | NONE | null | null | ### Feature request
Add Arabic Docs to datasets
[Datasets Arabic](https://github.com/AhmedAlmaghz/datasets/blob/main/docs/source/ar/index.mdx)
### Motivation
@AhmedAlmaghz
https://github.com/AhmedAlmaghz/datasets/blob/main/docs/source/ar/index.mdx
### Your contribution
@AhmedAlmaghz
https://github.com/AhmedAlmaghz/datasets/blob/main/docs/source/ar/index.mdx | null | https://api.github.com/repos/huggingface/datasets/issues/7093/timeline | null | null | AhmedAlmaghz | 53,489,256 | MDQ6VXNlcjUzNDg5MjU2 | https://avatars.githubusercontent.com/u/53489256?v=4 | https://api.github.com/users/AhmedAlmaghz | https://github.com/AhmedAlmaghz | https://api.github.com/users/AhmedAlmaghz/followers | https://api.github.com/users/AhmedAlmaghz/following{/other_user} | https://api.github.com/users/AhmedAlmaghz/gists{/gist_id} | https://api.github.com/users/AhmedAlmaghz/starred{/owner}{/repo} | https://api.github.com/users/AhmedAlmaghz/subscriptions | https://api.github.com/users/AhmedAlmaghz/orgs | https://api.github.com/users/AhmedAlmaghz/repos | https://api.github.com/users/AhmedAlmaghz/events{/privacy} | https://api.github.com/users/AhmedAlmaghz/received_events | User | public | false | 0 | 0 | 0 | 0 | 0 | 0 | 0 | https://api.github.com/repos/huggingface/datasets/issues/7093/reactions | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | false | [] | |
https://api.github.com/repos/huggingface/datasets/issues/7092 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7092/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7092/comments | https://api.github.com/repos/huggingface/datasets/issues/7092/events | https://github.com/huggingface/datasets/issues/7092 | 2,451,393,658 | I_kwDODunzps6SHUx6 | 7,092 | load_dataset with multiple jsonlines files interprets datastructure too early | [] | open | false | null | [] | null | 5 | 1,722 | 1,723 | null | NONE | null | null | ### Describe the bug
likely related to #6460
using `datasets.load_dataset("json", data_dir= ... )` with multiple `.jsonl` files will error if one of the files (maybe the first file?) contains a full column of empty data.
### Steps to reproduce the bug
real world example:
data is available in this [PR-branch](https://github.com/Vipitis/shadertoys-dataset/pull/3/commits/cb1e7157814f74acb09d5dc2f1be3c0a868a9933). Because my files are chunked by months, some months contain all empty data for some columns, just by chance - these are `[]`. Otherwise it's all the same structure.
```python
from datasets import load_dataset
ds = load_dataset("json", data_dir="./data/annotated/api")
```
you get a long error trace, where in the middle it says something like
```cs
TypeError: Couldn't cast array of type struct<id: int64, src: string, ctype: string, channel: int64, sampler: struct<filter: string, wrap: string, vflip: string, srgb: string, internal: string>, published: int64> to null
```
toy example: (on request)
### Expected behavior
Some suggestions
1. give a better error message to the user
2. consider all files before deciding on a data structure for a given column.
3. if you encounter a new structure, and can't cast that to null, replace the null-hypothesis. (maybe something for pyarrow)
as a workaround I have lazily implemented the following (essentially step 2)
```python
import os
import jsonlines
import datasets
api_files = os.listdir("./data/annotated/api")
api_files = [f"./data/annotated/api/{f}" for f in api_files]
api_file_contents = []
for f in api_files:
with jsonlines.open(f) as reader:
for obj in reader:
api_file_contents.append(obj)
ds = datasets.Dataset.from_list(api_file_contents)
```
this works fine for my use case, but is potentially slower and less memory-efficient for really large datasets (where this is unlikely to happen in the first place).
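Suggestion 2 can also be sketched with only the standard library: scan every file before committing to a per-column type, letting any non-null observation override a null-only guess (the function name is hypothetical):

```python
import json


def infer_column_types(paths):
    """Scan every JSON-lines file before fixing a type per column.

    A column stays "null" only if no file ever provides a real value,
    which is what suggestion 2 asks for: look at all files first.
    """
    types = {}  # column name -> observed type name, "null" until proven otherwise
    for path in paths:
        with open(path, encoding="utf-8") as f:
            for line in f:
                row = json.loads(line)
                for key, value in row.items():
                    observed = "null" if value in (None, []) else type(value).__name__
                    if types.get(key, "null") == "null":
                        types[key] = observed
    return types
```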
### Environment info
- `datasets` version: 2.20.0
- Platform: Windows-10-10.0.19041-SP0
- Python version: 3.9.4
- `huggingface_hub` version: 0.23.4
- PyArrow version: 16.1.0
- Pandas version: 2.2.2
- `fsspec` version: 2023.10.0 | null | https://api.github.com/repos/huggingface/datasets/issues/7092/timeline | null | null | Vipitis | 23,384,483 | MDQ6VXNlcjIzMzg0NDgz | https://avatars.githubusercontent.com/u/23384483?v=4 | https://api.github.com/users/Vipitis | https://github.com/Vipitis | https://api.github.com/users/Vipitis/followers | https://api.github.com/users/Vipitis/following{/other_user} | https://api.github.com/users/Vipitis/gists{/gist_id} | https://api.github.com/users/Vipitis/starred{/owner}{/repo} | https://api.github.com/users/Vipitis/subscriptions | https://api.github.com/users/Vipitis/orgs | https://api.github.com/users/Vipitis/repos | https://api.github.com/users/Vipitis/events{/privacy} | https://api.github.com/users/Vipitis/received_events | User | public | false | 0 | 0 | 0 | 0 | 0 | 0 | 0 | https://api.github.com/repos/huggingface/datasets/issues/7092/reactions | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | false | [
"I’ll take a look",
"Possible definitions of done for this issue:\r\n\r\n1. A fix so you can load your dataset specifically\r\n2. A general fix for datasets similar to this in the `datasets` library\r\n\r\nOption 1 is trivial. I think option 2 requires significant changes to the library.\r\n\r\nSince you outlined... | |
https://api.github.com/repos/huggingface/datasets/issues/7090 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7090/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7090/comments | https://api.github.com/repos/huggingface/datasets/issues/7090/events | https://github.com/huggingface/datasets/issues/7090 | 2,449,699,490 | I_kwDODunzps6SA3Ki | 7,090 | The test test_move_script_doesnt_change_hash fails because it runs the 'python' command while the python executable has a different name | [] | open | false | null | [] | null | 0 | 1,722 | 1,722 | null | NONE | null | null | ### Describe the bug
Tests should use the same Python executable path they are launched with, which in the case of FreeBSD is `/usr/local/bin/python3.11`.
Failure:
```
if err_filename is not None:
> raise child_exception_type(errno_num, err_msg, err_filename)
E FileNotFoundError: [Errno 2] No such file or directory: 'python'
```
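A portable pattern for what the report asks for is to spawn the same interpreter the tests themselves run under via `sys.executable`, instead of the bare string `'python'`. A minimal sketch (not the actual test code):

```python
import subprocess
import sys

# sys.executable is the full path of the running interpreter, so this works
# even where no executable named plain "python" exists (e.g. FreeBSD's
# /usr/local/bin/python3.11).
result = subprocess.run(
    [sys.executable, "-c", "print('ok')"],
    capture_output=True,
    text=True,
    check=True,
)
print(result.stdout.strip())  # prints ok
```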
### Steps to reproduce the bug
regular test run using PyTest
### Expected behavior
n/a
### Environment info
FreeBSD 14.1 | null | https://api.github.com/repos/huggingface/datasets/issues/7090/timeline | null | null | yurivict | 271,906 | MDQ6VXNlcjI3MTkwNg== | https://avatars.githubusercontent.com/u/271906?v=4 | https://api.github.com/users/yurivict | https://github.com/yurivict | https://api.github.com/users/yurivict/followers | https://api.github.com/users/yurivict/following{/other_user} | https://api.github.com/users/yurivict/gists{/gist_id} | https://api.github.com/users/yurivict/starred{/owner}{/repo} | https://api.github.com/users/yurivict/subscriptions | https://api.github.com/users/yurivict/orgs | https://api.github.com/users/yurivict/repos | https://api.github.com/users/yurivict/events{/privacy} | https://api.github.com/users/yurivict/received_events | User | public | false | 0 | 0 | 0 | 0 | 0 | 0 | 0 | https://api.github.com/repos/huggingface/datasets/issues/7090/reactions | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | false | [] | |
https://api.github.com/repos/huggingface/datasets/issues/7089 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7089/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7089/comments | https://api.github.com/repos/huggingface/datasets/issues/7089/events | https://github.com/huggingface/datasets/issues/7089 | 2,449,479,500 | I_kwDODunzps6SABdM | 7,089 | Missing pyspark dependency causes the testsuite to error out, instead of a few tests to be skipped | [] | open | false | null | [] | null | 0 | 1,722 | 1,722 | null | NONE | null | null | ### Describe the bug
see the subject
### Steps to reproduce the bug
regular tests
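A stdlib-only sketch of the guard being asked for (class and test names are hypothetical): Spark-dependent tests should be skipped, not errored, when the import is unavailable.

```python
import importlib.util
import unittest

# Probe for the optional dependency without importing it.
HAS_PYSPARK = importlib.util.find_spec("pyspark") is not None


class TestSparkIntegration(unittest.TestCase):
    @unittest.skipUnless(HAS_PYSPARK, "pyspark is not installed")
    def test_from_spark(self):
        import pyspark  # only reached when the dependency is present
```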
### Expected behavior
n/a
### Environment info
version 2.20.0 | null | https://api.github.com/repos/huggingface/datasets/issues/7089/timeline | null | null | yurivict | 271,906 | MDQ6VXNlcjI3MTkwNg== | https://avatars.githubusercontent.com/u/271906?v=4 | https://api.github.com/users/yurivict | https://github.com/yurivict | https://api.github.com/users/yurivict/followers | https://api.github.com/users/yurivict/following{/other_user} | https://api.github.com/users/yurivict/gists{/gist_id} | https://api.github.com/users/yurivict/starred{/owner}{/repo} | https://api.github.com/users/yurivict/subscriptions | https://api.github.com/users/yurivict/orgs | https://api.github.com/users/yurivict/repos | https://api.github.com/users/yurivict/events{/privacy} | https://api.github.com/users/yurivict/received_events | User | public | false | 0 | 0 | 0 | 0 | 0 | 0 | 0 | https://api.github.com/repos/huggingface/datasets/issues/7089/reactions | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | false | [] | |
https://api.github.com/repos/huggingface/datasets/issues/7088 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7088/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7088/comments | https://api.github.com/repos/huggingface/datasets/issues/7088/events | https://github.com/huggingface/datasets/issues/7088 | 2,447,383,940 | I_kwDODunzps6R4B2E | 7,088 | Disable warning when using with_format format on tensors | [
{
"id": 1935892871,
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement",
"name": "enhancement",
"color": "a2eeef",
"default": true,
"description": "New feature or request"
}
] | open | false | null | [] | null | 0 | 1,722 | 1,722 | null | NONE | null | null | ### Feature request
If we write this code:
```python
"""Get data and define datasets."""
from enum import StrEnum
from datasets import load_dataset
from torch.utils.data import DataLoader
from torchvision import transforms
class Split(StrEnum):
"""Describes what type of split to use in the dataloader"""
TRAIN = "train"
TEST = "test"
VAL = "validation"
class ImageNetDataLoader(DataLoader):
"""Create an ImageNetDataloader"""
_preprocess_transform = transforms.Compose(
[
transforms.Resize(256),
transforms.CenterCrop(224),
]
)
def __init__(self, batch_size: int = 4, split: Split = Split.TRAIN):
dataset = (
load_dataset(
"imagenet-1k",
split=split,
trust_remote_code=True,
streaming=True,
)
.with_format("torch")
.map(self._preprocess)
)
super().__init__(dataset=dataset, batch_size=batch_size)
def _preprocess(self, data):
if data["image"].shape[0] < 3:
data["image"] = data["image"].repeat(3, 1, 1)
data["image"] = self._preprocess_transform(data["image"].float())
return data
if __name__ == "__main__":
dataloader = ImageNetDataLoader(batch_size=2)
for batch in dataloader:
print(batch["image"])
break
```
This will trigger a user warning:
```bash
datasets\formatting\torch_formatter.py:85: UserWarning: To copy construct from a tensor, it is recommended to use sourceTensor.clone().detach() or sourceTensor.clone().detach().requires_grad_(True), rather than torch.tensor(sourceTensor).
return torch.tensor(value, **{**default_dtype, **self.torch_tensor_kwargs})
```
### Motivation
This happens because of the way the formatted tensor is returned in `TorchFormatter._tensorize`.
This function handles values of different types; according to some tests, the possible value types are `int`, `numpy.ndarray` and `torch.Tensor`.
In particular, this warning is triggered when the value type is `torch.Tensor`, because this is not the suggested PyTorch way of doing it:
- https://stackoverflow.com/questions/55266154/pytorch-preferred-way-to-copy-a-tensor
- https://discuss.pytorch.org/t/it-is-recommended-to-use-source-tensor-clone-detach-or-sourcetensor-clone-detach-requires-grad-true/101218#:~:text=The%20warning%20points%20to%20wrapping%20a%20tensor%20in%20torch.tensor%2C%20which%20is%20not%20recommended.%0AInstead%20of%20torch.tensor(outputs)%20use%20outputs.clone().detach()%20or%20the%20same%20with%20.requires_grad_(True)%2C%20if%20necessary.
### Your contribution
A solution that I found to be working is to change the current way of doing it:
```python
return torch.tensor(value, **{**default_dtype, **self.torch_tensor_kwargs})
```
To:
```python
if (isinstance(value, torch.Tensor)):
tensor = value.clone().detach()
if self.torch_tensor_kwargs.get('requires_grad', False):
tensor.requires_grad_()
return tensor
else:
return torch.tensor(value, **{**default_dtype, **self.torch_tensor_kwargs})
``` | null | https://api.github.com/repos/huggingface/datasets/issues/7088/timeline | null | null | Haislich | 42,048,782 | MDQ6VXNlcjQyMDQ4Nzgy | https://avatars.githubusercontent.com/u/42048782?v=4 | https://api.github.com/users/Haislich | https://github.com/Haislich | https://api.github.com/users/Haislich/followers | https://api.github.com/users/Haislich/following{/other_user} | https://api.github.com/users/Haislich/gists{/gist_id} | https://api.github.com/users/Haislich/starred{/owner}{/repo} | https://api.github.com/users/Haislich/subscriptions | https://api.github.com/users/Haislich/orgs | https://api.github.com/users/Haislich/repos | https://api.github.com/users/Haislich/events{/privacy} | https://api.github.com/users/Haislich/received_events | User | public | false | 0 | 0 | 0 | 0 | 0 | 0 | 0 | https://api.github.com/repos/huggingface/datasets/issues/7088/reactions | 3 | 3 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | false | [] | |
https://api.github.com/repos/huggingface/datasets/issues/7087 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7087/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7087/comments | https://api.github.com/repos/huggingface/datasets/issues/7087/events | https://github.com/huggingface/datasets/issues/7087 | 2,447,158,643 | I_kwDODunzps6R3K1z | 7,087 | Unable to create dataset card for Lushootseed language | [
{
"id": 1935892871,
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement",
"name": "enhancement",
"color": "a2eeef",
"default": true,
"description": "New feature or request"
}
] | closed | false | null | [
{
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_... | null | 2 | 1,722 | 1,722 | 1,722 | NONE | null | null | ### Feature request
While I was creating the dataset which contained all documents from the Lushootseed Wikipedia, the dataset card asked me to enter which language the dataset was in. Since Lushootseed is a critically endangered language, it was not available as one of the options. Is it possible to allow entering languages that aren't available in the options?
### Motivation
I'd like to add more information about my dataset in the dataset card, and the language is one of the most important pieces of information, since the entire dataset is primarily concerned with collecting Lushootseed documents.
### Your contribution
I can submit a pull request | null | https://api.github.com/repos/huggingface/datasets/issues/7087/timeline | null | completed | vaishnavsudarshan | 134,876,525 | U_kgDOCAoNbQ | https://avatars.githubusercontent.com/u/134876525?v=4 | https://api.github.com/users/vaishnavsudarshan | https://github.com/vaishnavsudarshan | https://api.github.com/users/vaishnavsudarshan/followers | https://api.github.com/users/vaishnavsudarshan/following{/other_user} | https://api.github.com/users/vaishnavsudarshan/gists{/gist_id} | https://api.github.com/users/vaishnavsudarshan/starred{/owner}{/repo} | https://api.github.com/users/vaishnavsudarshan/subscriptions | https://api.github.com/users/vaishnavsudarshan/orgs | https://api.github.com/users/vaishnavsudarshan/repos | https://api.github.com/users/vaishnavsudarshan/events{/privacy} | https://api.github.com/users/vaishnavsudarshan/received_events | User | public | false | 0 | 0 | 0 | 0 | 0 | 0 | 0 | https://api.github.com/repos/huggingface/datasets/issues/7087/reactions | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | null | null | null | null | albertvillanova | 8,515,462 | MDQ6VXNlcjg1MTU0NjI= | https://avatars.githubusercontent.com/u/8515462?v=4 | https://api.github.com/users/albertvillanova | https://github.com/albertvillanova | https://api.github.com/users/albertvillanova/followers | https://api.github.com/users/albertvillanova/following{/other_user} | https://api.github.com/users/albertvillanova/gists{/gist_id} | https://api.github.com/users/albertvillanova/starred{/owner}{/repo} | https://api.github.com/users/albertvillanova/subscriptions | https://api.github.com/users/albertvillanova/orgs | https://api.github.com/users/albertvillanova/repos | https://api.github.com/users/albertvillanova/events{/privacy} | https://api.github.com/users/albertvillanova/received_events | User | public | false | albertvillanova | 8,515,462 | MDQ6VXNlcjg1MTU0NjI= | https://avatars.githubusercontent.com/u/8515462?v=4 | 
https://api.github.com/users/albertvillanova | https://github.com/albertvillanova | https://api.github.com/users/albertvillanova/followers | https://api.github.com/users/albertvillanova/following{/other_user} | https://api.github.com/users/albertvillanova/gists{/gist_id} | https://api.github.com/users/albertvillanova/starred{/owner}{/repo} | https://api.github.com/users/albertvillanova/subscriptions | https://api.github.com/users/albertvillanova/orgs | https://api.github.com/users/albertvillanova/repos | https://api.github.com/users/albertvillanova/events{/privacy} | https://api.github.com/users/albertvillanova/received_events | User | public | false | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | false | [
"Thanks for reporting.\r\n\r\nIt is weird, because the language entry is in the list. See: https://github.com/huggingface/huggingface.js/blob/98e32f0ed4ee057a596f66a1dec738e5db9643d5/packages/languages/src/languages_iso_639_3.ts#L15186-L15189\r\n\r\nI have reported the issue:\r\n- https://github.com/huggingface/hug... | |||
https://api.github.com/repos/huggingface/datasets/issues/7086 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7086/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7086/comments | https://api.github.com/repos/huggingface/datasets/issues/7086/events | https://github.com/huggingface/datasets/issues/7086 | 2,445,516,829 | I_kwDODunzps6Rw6Ad | 7,086 | load_dataset ignores cached datasets and tries to hit HF Hub, resulting in API rate limit errors | [] | open | false | null | [] | null | 1 | 1,722 | 1,750 | null | NONE | null | null | ### Describe the bug
I have been running lm-eval-harness a lot, which has resulted in an API rate limit. This seems strange, since all of the data should be cached locally. I have in fact verified this.
### Steps to reproduce the bug
1. Be Me
2. Run `load_dataset("TAUR-Lab/MuSR")`
3. Hit rate limit error
4. Dataset is in .cache/huggingface/datasets
5. ???
### Expected behavior
We should not run into API rate limits if we have cached the dataset
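A possible workaround sketch while this is investigated: `datasets` and `huggingface_hub` honor offline environment variables, so an already-cached dataset can be loaded without any Hub requests (this assumes everything needed is indeed in the cache):

```shell
# A possible workaround, not a fix for the underlying issue: force offline
# mode so cached data is used without contacting the Hub at all.
export HF_DATASETS_OFFLINE=1
export HF_HUB_OFFLINE=1
# ...then run the evaluation as usual, e.g.:
#   python run_eval.py   # (your own evaluation script)
```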
### Environment info
datasets 2.16.0
python 3.10.4 | null | https://api.github.com/repos/huggingface/datasets/issues/7086/timeline | null | null | tginart | 11,379,648 | MDQ6VXNlcjExMzc5NjQ4 | https://avatars.githubusercontent.com/u/11379648?v=4 | https://api.github.com/users/tginart | https://github.com/tginart | https://api.github.com/users/tginart/followers | https://api.github.com/users/tginart/following{/other_user} | https://api.github.com/users/tginart/gists{/gist_id} | https://api.github.com/users/tginart/starred{/owner}{/repo} | https://api.github.com/users/tginart/subscriptions | https://api.github.com/users/tginart/orgs | https://api.github.com/users/tginart/repos | https://api.github.com/users/tginart/events{/privacy} | https://api.github.com/users/tginart/received_events | User | public | false | 0 | 0 | 0 | 0 | 0 | 0 | 0 | https://api.github.com/repos/huggingface/datasets/issues/7086/reactions | 1 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | false | [
"I'm having the same issue - running into rate limits when doing hyperparameter tuning even though the dataset is supposed to be cached. I feel like this behaviour should at the very least be documented, but honestly you should just not be running into rate limits in the first place when the dataset is cached. It e... | |
https://api.github.com/repos/huggingface/datasets/issues/7085 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7085/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7085/comments | https://api.github.com/repos/huggingface/datasets/issues/7085/events | https://github.com/huggingface/datasets/issues/7085 | 2,440,008,618 | I_kwDODunzps6Rb5Oq | 7,085 | [Regression] IterableDataset is broken on 2.20.0 | [] | closed | false | null | [
{
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.git... | null | 3 | 1,722 | 1,724 | 1,724 | NONE | null | null | ### Describe the bug
In the latest version of datasets there is a major regression: after creating an `IterableDataset` from a generator and applying a few operations (`map`, `select`), you can no longer iterate through the dataset multiple times.
The issue seems to stem from the recent addition of "resumable IterableDatasets" (#6658) (@lhoestq). It seems like it's keeping state when it shouldn't.
### Steps to reproduce the bug
Minimal Reproducible Example (comparing `datasets==2.17.0` and `datasets==2.20.0`)
```
#!/bin/bash
# List of dataset versions to test
versions=("2.17.0" "2.20.0")
# Loop through each version
for version in "${versions[@]}"; do
# Install the specific version of the datasets library
pip3 install -q datasets=="$version" 2>/dev/null
# Run the Python script
python3 - <<EOF
from datasets import IterableDataset
from datasets.features.features import Features, Value
def test_gen():
yield from [{"foo": i} for i in range(10)]
features = Features([("foo", Value("int64"))])
d = IterableDataset.from_generator(test_gen, features=features)
mapped = d.map(lambda row: {"foo": row["foo"] * 2})
column = mapped.select_columns(["foo"])
print("Version $version - Iterate Once:", list(column))
print("Version $version - Iterate Twice:", list(column))
EOF
done
```
The output looks like this:
```
Version 2.17.0 - Iterate Once: [{'foo': 0}, {'foo': 2}, {'foo': 4}, {'foo': 6}, {'foo': 8}, {'foo': 10}, {'foo': 12}, {'foo': 14}, {'foo': 16}, {'foo': 18}]
Version 2.17.0 - Iterate Twice: [{'foo': 0}, {'foo': 2}, {'foo': 4}, {'foo': 6}, {'foo': 8}, {'foo': 10}, {'foo': 12}, {'foo': 14}, {'foo': 16}, {'foo': 18}]
Version 2.20.0 - Iterate Once: [{'foo': 0}, {'foo': 2}, {'foo': 4}, {'foo': 6}, {'foo': 8}, {'foo': 10}, {'foo': 12}, {'foo': 14}, {'foo': 16}, {'foo': 18}]
Version 2.20.0 - Iterate Twice: []
```
### Expected behavior
The expected behavior is that version 2.20.0 should behave the same as 2.17.0.
### Environment info
`datasets==2.20.0` on any platform. | null | https://api.github.com/repos/huggingface/datasets/issues/7085/timeline | null | completed | AjayP13 | 5,404,177 | MDQ6VXNlcjU0MDQxNzc= | https://avatars.githubusercontent.com/u/5404177?v=4 | https://api.github.com/users/AjayP13 | https://github.com/AjayP13 | https://api.github.com/users/AjayP13/followers | https://api.github.com/users/AjayP13/following{/other_user} | https://api.github.com/users/AjayP13/gists{/gist_id} | https://api.github.com/users/AjayP13/starred{/owner}{/repo} | https://api.github.com/users/AjayP13/subscriptions | https://api.github.com/users/AjayP13/orgs | https://api.github.com/users/AjayP13/repos | https://api.github.com/users/AjayP13/events{/privacy} | https://api.github.com/users/AjayP13/received_events | User | public | false | 0 | 0 | 0 | 0 | 0 | 0 | 0 | https://api.github.com/repos/huggingface/datasets/issues/7085/reactions | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | null | null | null | null | lhoestq | 42,851,186 | MDQ6VXNlcjQyODUxMTg2 | https://avatars.githubusercontent.com/u/42851186?v=4 | https://api.github.com/users/lhoestq | https://github.com/lhoestq | https://api.github.com/users/lhoestq/followers | https://api.github.com/users/lhoestq/following{/other_user} | https://api.github.com/users/lhoestq/gists{/gist_id} | https://api.github.com/users/lhoestq/starred{/owner}{/repo} | https://api.github.com/users/lhoestq/subscriptions | https://api.github.com/users/lhoestq/orgs | https://api.github.com/users/lhoestq/repos | https://api.github.com/users/lhoestq/events{/privacy} | https://api.github.com/users/lhoestq/received_events | User | public | false | lhoestq | 42,851,186 | MDQ6VXNlcjQyODUxMTg2 | https://avatars.githubusercontent.com/u/42851186?v=4 | https://api.github.com/users/lhoestq | https://github.com/lhoestq | https://api.github.com/users/lhoestq/followers | https://api.github.com/users/lhoestq/following{/other_user} | https://api.github.com/users/lhoestq/gists{/gist_id} | 
https://api.github.com/users/lhoestq/starred{/owner}{/repo} | https://api.github.com/users/lhoestq/subscriptions | https://api.github.com/users/lhoestq/orgs | https://api.github.com/users/lhoestq/repos | https://api.github.com/users/lhoestq/events{/privacy} | https://api.github.com/users/lhoestq/received_events | User | public | false | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | false | [
"@lhoestq I detected this regression over on [DataDreamer](https://github.com/datadreamer-dev/DataDreamer)'s test suite. I put in these [monkey patches](https://github.com/datadreamer-dev/DataDreamer/blob/4cbaf9f39cf7bedde72bbaa68346e169788fbecb/src/_patches/datasets_reset_state_hack.py) in case that fixed it our t... | |||
https://api.github.com/repos/huggingface/datasets/issues/7084 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7084/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7084/comments | https://api.github.com/repos/huggingface/datasets/issues/7084/events | https://github.com/huggingface/datasets/issues/7084 | 2,439,519,534 | I_kwDODunzps6RaB0u | 7,084 | More easily support streaming local files | [
{
"id": 1935892871,
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement",
"name": "enhancement",
"color": "a2eeef",
"default": true,
"description": "New feature or request"
}
] | open | false | null | [] | null | 0 | 1,722 | 1,722 | null | CONTRIBUTOR | null | null | ### Feature request
Simplify downloading and streaming datasets locally. Specifically, perhaps add an option to `load_dataset(..., streaming="download_first")` or add better support for streaming symlinked or arrow files.
### Motivation
I have downloaded FineWeb-edu locally and am currently trying to stream the dataset from the local files. I have both the raw parquet files, downloaded via `huggingface-cli download --repo-type dataset HuggingFaceFW/fineweb-edu`, and the processed arrow files, created via `load_dataset("HuggingFaceFW/fineweb-edu")`.
Streaming the local files does not work well for either file type, for two different reasons.
**Arrow files**
When running `load_dataset("arrow", data_files={"train": "~/.cache/huggingface/datasets/HuggingFaceFW___fineweb-edu/default/0.0.0/5b89d1ea9319fe101b3cbdacd89a903aca1d6052/fineweb-edu-train-*.arrow"})`, resolving the data files is fast, but because `arrow` is not included in the known [extensions file list](https://github.com/huggingface/datasets/blob/ce4a0c573920607bc6c814605734091b06b860e7/src/datasets/utils/file_utils.py#L738), all files are opened and scanned to determine the compression type. Adding `arrow` to the known extension types resolves this issue.
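The cost difference is easy to see in a toy sketch (plain Python, not the actual `datasets` implementation — the dictionary below is illustrative): a known extension resolves via a cheap lookup, while an unknown extension forces every matched file to be opened and scanned.

```python
import os

# Toy sketch of extension-based compression inference: known extensions
# resolve via a string lookup with no file I/O, while unknown ones would
# force opening every matched file to inspect its magic bytes.
KNOWN_EXTENSIONS = {".parquet": None, ".json": None, ".arrow": None, ".gz": "gzip"}

def infer_compression(path):
    ext = os.path.splitext(path)[1]
    if ext in KNOWN_EXTENSIONS:
        return KNOWN_EXTENSIONS[ext]  # cheap: no file opened
    raise LookupError(f"unknown extension {ext!r}: would scan the file contents")

print(infer_compression("fineweb-edu-train-00000.arrow"))  # None (uncompressed)
```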
**Parquet files**
When running `load_dataset("parquet", data_files={"train": "~/.cache/huggingface/hub/dataset-HuggingFaceFW___fineweb-edu/snapshots/5b89d1ea9319fe101b3cbdacd89a903aca1d6052/data/CC-MAIN-*/train-*.parquet"})`, the paths do not get resolved because the parquet files are symlinked from the blobs (which contain all files, in case there are different versions). This occurs because the [pattern matching](https://github.com/huggingface/datasets/blob/ce4a0c573920607bc6c814605734091b06b860e7/src/datasets/data_files.py#L389) checks whether the path is a regular file and does not account for symlinks. Symlinks (at least on my machine) are of type "other".
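A small stdlib demonstration (a POSIX filesystem is assumed) of the distinction: `os.path.isfile` follows symlinks, while a check on the entry's own stat type reports the link as something other than a regular file — consistent with the "other" type described above.

```python
import os
import stat
import tempfile

# A symlink resolves to a file when followed, but is not a regular file
# itself — a check on the raw entry type will skip it.
with tempfile.TemporaryDirectory() as tmp:
    target = os.path.join(tmp, "train-00000.parquet")
    link = os.path.join(tmp, "train-00000-link.parquet")
    with open(target, "wb") as f:
        f.write(b"PAR1")
    os.symlink(target, link)

    print(os.path.isfile(link))                  # True  (follows the link)
    print(stat.S_ISREG(os.lstat(link).st_mode))  # False (the link itself)
    print(stat.S_ISLNK(os.lstat(link).st_mode))  # True
```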
### Your contribution
I have created a PR fixing arrow file streaming and symlink resolution. However, I have not checked locally whether the tests pass or whether new tests need to be added.
IMO, the easiest option would be to add a `streaming="download_first"` option, but I'm afraid that exceeds my current knowledge of how the datasets library works. https://github.com/huggingface/datasets/pull/7083 | null | https://api.github.com/repos/huggingface/datasets/issues/7084/timeline | null | null | fschlatt | 23,191,892 | MDQ6VXNlcjIzMTkxODky | https://avatars.githubusercontent.com/u/23191892?v=4 | https://api.github.com/users/fschlatt | https://github.com/fschlatt | https://api.github.com/users/fschlatt/followers | https://api.github.com/users/fschlatt/following{/other_user} | https://api.github.com/users/fschlatt/gists{/gist_id} | https://api.github.com/users/fschlatt/starred{/owner}{/repo} | https://api.github.com/users/fschlatt/subscriptions | https://api.github.com/users/fschlatt/orgs | https://api.github.com/users/fschlatt/repos | https://api.github.com/users/fschlatt/events{/privacy} | https://api.github.com/users/fschlatt/received_events | User | public | false | 0 | 0 | 0 | 0 | 0 | 0 | 0 | https://api.github.com/repos/huggingface/datasets/issues/7084/reactions | 3 | 0 | 0 | 0 | 0 | 0 | 2 | 0 | 1 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | false | [] | |
https://api.github.com/repos/huggingface/datasets/issues/7080 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7080/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7080/comments | https://api.github.com/repos/huggingface/datasets/issues/7080/events | https://github.com/huggingface/datasets/issues/7080 | 2,434,275,664 | I_kwDODunzps6RGBlQ | 7,080 | Generating train split takes a long time | [] | open | false | null | [] | null | 2 | 1,722 | 1,727 | null | NONE | null | null | ### Describe the bug
Loading a simple webdataset takes ~45 minutes.
### Steps to reproduce the bug
```python
from datasets import load_dataset
dataset = load_dataset("PixArt-alpha/SAM-LLaVA-Captions10M")
```
### Expected behavior
The dataset should load immediately as it does when loaded through a normal indexed WebDataset loader. Generating splits should be optional and there should be a message showing how to disable it.
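For what it's worth, `streaming=True` is the usual way to skip split generation entirely (untested here against this specific repository): `load_dataset("PixArt-alpha/SAM-LLaVA-Captions10M", streaming=True)` returns an `IterableDataset`. The difference can be sketched in plain Python, with no `datasets` dependency:

```python
# Toy sketch: "generating the train split" reads every shard up front,
# while streaming yields examples on demand as shards are read.
def read_shard(n):  # stands in for downloading/decoding one WebDataset shard
    return [{"id": n * 2 + i} for i in range(2)]

def materialize(num_shards):  # default load_dataset behavior, roughly
    return [ex for n in range(num_shards) for ex in read_shard(n)]

def stream(num_shards):  # streaming=True behavior, roughly
    for n in range(num_shards):
        yield from read_shard(n)

print(len(materialize(3)))    # 6 — all shards read before the first example
print(next(iter(stream(3))))  # {'id': 0} — only the first shard touched
```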
### Environment info
- `datasets` version: 2.20.0
- Platform: Linux-4.18.0-372.32.1.el8_6.x86_64-x86_64-with-glibc2.28
- Python version: 3.10.14
- `huggingface_hub` version: 0.24.1
- PyArrow version: 16.1.0
- Pandas version: 2.2.2
- `fsspec` version: 2024.5.0 | null | https://api.github.com/repos/huggingface/datasets/issues/7080/timeline | null | null | alexanderswerdlow | 35,648,800 | MDQ6VXNlcjM1NjQ4ODAw | https://avatars.githubusercontent.com/u/35648800?v=4 | https://api.github.com/users/alexanderswerdlow | https://github.com/alexanderswerdlow | https://api.github.com/users/alexanderswerdlow/followers | https://api.github.com/users/alexanderswerdlow/following{/other_user} | https://api.github.com/users/alexanderswerdlow/gists{/gist_id} | https://api.github.com/users/alexanderswerdlow/starred{/owner}{/repo} | https://api.github.com/users/alexanderswerdlow/subscriptions | https://api.github.com/users/alexanderswerdlow/orgs | https://api.github.com/users/alexanderswerdlow/repos | https://api.github.com/users/alexanderswerdlow/events{/privacy} | https://api.github.com/users/alexanderswerdlow/received_events | User | public | false | 0 | 0 | 0 | 0 | 0 | 0 | 0 | https://api.github.com/repos/huggingface/datasets/issues/7080/reactions | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | false | [
"@alexanderswerdlow \r\nWhen no specific split is mentioned, the load_dataset library will load all available splits of the dataset. For example, if a dataset has \"train\" and \"test\" splits, the load_dataset function will load both into the DatasetDict object.\r\n\r\n Gecko/20100101 Firefox/127.0
```
### Response headers
```
X-Firefox-Spdy h2
access-control-allow-origin https://huggingface.co
access-control-expose-headers X-Repo-Commit,X-Request-Id,X-Error-Code,X-Error-Message,X-Total-Count,ETag,Link,Accept-Ranges,Content-Range
content-length 80
content-type application/json; charset=utf-8
cross-origin-opener-policy same-origin
date Fri, 26 Jul 2024 19:09:45 GMT
etag W/"50-9qrwU+BNI4SD0Fe32p/nofkmv0c"
referrer-policy strict-origin-when-cross-origin
vary Origin
via 1.1 1624c79cd07e6098196697a6a7907e4a.cloudfront.net (CloudFront)
x-amz-cf-id SP8E7n5qRaP6i9c9G83dNAiOzJBU4GXSrDRAcVNTomY895K35H0nJQ==
x-amz-cf-pop CPH50-C1
x-cache Error from cloudfront
x-error-message Internal Error - We're working hard to fix this as soon as possible!
x-powered-by huggingface-moon
x-request-id Root=1-66a3f479-026417465ef42f49349fdca1
``` | null | https://api.github.com/repos/huggingface/datasets/issues/7079/timeline | null | completed | neoneye | 147,971 | MDQ6VXNlcjE0Nzk3MQ== | https://avatars.githubusercontent.com/u/147971?v=4 | https://api.github.com/users/neoneye | https://github.com/neoneye | https://api.github.com/users/neoneye/followers | https://api.github.com/users/neoneye/following{/other_user} | https://api.github.com/users/neoneye/gists{/gist_id} | https://api.github.com/users/neoneye/starred{/owner}{/repo} | https://api.github.com/users/neoneye/subscriptions | https://api.github.com/users/neoneye/orgs | https://api.github.com/users/neoneye/repos | https://api.github.com/users/neoneye/events{/privacy} | https://api.github.com/users/neoneye/received_events | User | public | false | 0 | 0 | 0 | 0 | 0 | 0 | 0 | https://api.github.com/repos/huggingface/datasets/issues/7079/reactions | 7 | 0 | 0 | 0 | 0 | 0 | 3 | 0 | 4 | null | null | null | null | null | null | neoneye | 147,971 | MDQ6VXNlcjE0Nzk3MQ== | https://avatars.githubusercontent.com/u/147971?v=4 | https://api.github.com/users/neoneye | https://github.com/neoneye | https://api.github.com/users/neoneye/followers | https://api.github.com/users/neoneye/following{/other_user} | https://api.github.com/users/neoneye/gists{/gist_id} | https://api.github.com/users/neoneye/starred{/owner}{/repo} | https://api.github.com/users/neoneye/subscriptions | https://api.github.com/users/neoneye/orgs | https://api.github.com/users/neoneye/repos | https://api.github.com/users/neoneye/events{/privacy} | https://api.github.com/users/neoneye/received_events | User | public | false | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | false 
| [
"same issue here. @albertvillanova @lhoestq ",
"Also impacted by this issue in many of my datasets (though not all) - in my case, this also seems to affect datasets that have been updated recently. Git cloning and the web interface still work:\r\n- https://huggingface.co/api/datasets/acmc/cheat_reduced\r\n- https... | ||
https://api.github.com/repos/huggingface/datasets/issues/7077 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7077/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7077/comments | https://api.github.com/repos/huggingface/datasets/issues/7077/events | https://github.com/huggingface/datasets/issues/7077 | 2,432,345,489 | I_kwDODunzps6Q-qWR | 7,077 | column_names ignored by load_dataset() when loading CSV file | [] | open | false | null | [] | null | 1 | 1,722 | 1,722 | null | NONE | null | null | ### Describe the bug
load_dataset() ignores the column_names kwarg when loading a CSV file. Instead, it uses whatever values are on the first line of the file.
### Steps to reproduce the bug
Call `load_dataset` to load data from a CSV file and specify `column_names` kwarg.
### Expected behavior
The resulting dataset should have the specified column names **and** the first line of the file should be considered as data values.
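The expected semantics match Python's own `csv.DictReader`: when field names are supplied explicitly, the first line is treated as data, not as a header. A minimal stdlib sketch:

```python
import csv
import io

data = "foo,bar\n1,2\n"

# No explicit names: the first line is consumed as the header.
default_rows = list(csv.DictReader(io.StringIO(data)))
# Explicit fieldnames: the first line is kept as data — the behavior
# `column_names` is expected to provide in load_dataset.
named_rows = list(csv.DictReader(io.StringIO(data), fieldnames=["a", "b"]))

print(default_rows)  # [{'foo': '1', 'bar': '2'}]
print(named_rows)    # [{'a': 'foo', 'b': 'bar'}, {'a': '1', 'b': '2'}]
```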
### Environment info
- `datasets` version: 2.20.0
- Platform: Linux-5.10.0-30-cloud-amd64-x86_64-with-glibc2.31
- Python version: 3.9.2
- `huggingface_hub` version: 0.24.2
- PyArrow version: 17.0.0
- Pandas version: 2.2.2
- `fsspec` version: 2024.5.0 | null | https://api.github.com/repos/huggingface/datasets/issues/7077/timeline | null | null | luismsgomes | 9,130,265 | MDQ6VXNlcjkxMzAyNjU= | https://avatars.githubusercontent.com/u/9130265?v=4 | https://api.github.com/users/luismsgomes | https://github.com/luismsgomes | https://api.github.com/users/luismsgomes/followers | https://api.github.com/users/luismsgomes/following{/other_user} | https://api.github.com/users/luismsgomes/gists{/gist_id} | https://api.github.com/users/luismsgomes/starred{/owner}{/repo} | https://api.github.com/users/luismsgomes/subscriptions | https://api.github.com/users/luismsgomes/orgs | https://api.github.com/users/luismsgomes/repos | https://api.github.com/users/luismsgomes/events{/privacy} | https://api.github.com/users/luismsgomes/received_events | User | public | false | 0 | 0 | 0 | 0 | 0 | 0 | 0 | https://api.github.com/repos/huggingface/datasets/issues/7077/reactions | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | false | [
"I confirm that `column_names` values are not copied to `names` variable because in this case `CsvConfig.__post_init__` is not called: `CsvConfig` is instantiated with default values and afterwards the `config_kwargs` are used to overwrite its attributes.\r\n\r\n@luismsgomes in the meantime, you can avoid the bug i... | |
https://api.github.com/repos/huggingface/datasets/issues/7073 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7073/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7073/comments | https://api.github.com/repos/huggingface/datasets/issues/7073/events | https://github.com/huggingface/datasets/issues/7073 | 2,431,706,568 | I_kwDODunzps6Q8OXI | 7,073 | CI is broken for convert_to_parquet: Invalid rev id: refs/pr/1 404 error causes RevisionNotFoundError | [] | closed | false | null | [
{
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_... | null | 9 | 1,721 | 1,722 | 1,721 | MEMBER | null | null | See: https://github.com/huggingface/datasets/actions/runs/10095313567/job/27915185756
```
FAILED tests/test_hub.py::test_convert_to_parquet - huggingface_hub.utils._errors.RevisionNotFoundError: 404 Client Error. (Request ID: Root=1-66a25839-31ce7b475e70e7db1e4d44c2;b0c8870f-d5ef-4bf2-a6ff-0191f3df0f64)
Revision Not Found for url: https://hub-ci.huggingface.co/api/datasets/__DUMMY_TRANSFORMERS_USER__/test-dataset-5188a8-17219154347516/preupload/refs%2Fpr%2F1.
Invalid rev id: refs/pr/1
```
```
/opt/hostedtoolcache/Python/3.8.18/x64/lib/python3.8/site-packages/datasets/hub.py:86: in convert_to_parquet
dataset.push_to_hub(
/opt/hostedtoolcache/Python/3.8.18/x64/lib/python3.8/site-packages/datasets/dataset_dict.py:1722: in push_to_hub
split_additions, uploaded_size, dataset_nbytes = self[split]._push_parquet_shards_to_hub(
/opt/hostedtoolcache/Python/3.8.18/x64/lib/python3.8/site-packages/datasets/arrow_dataset.py:5511: in _push_parquet_shards_to_hub
api.preupload_lfs_files(
/opt/hostedtoolcache/Python/3.8.18/x64/lib/python3.8/site-packages/huggingface_hub/hf_api.py:4231: in preupload_lfs_files
_fetch_upload_modes(
/opt/hostedtoolcache/Python/3.8.18/x64/lib/python3.8/site-packages/huggingface_hub/utils/_validators.py:118: in _inner_fn
return fn(*args, **kwargs)
/opt/hostedtoolcache/Python/3.8.18/x64/lib/python3.8/site-packages/huggingface_hub/_commit_api.py:507: in _fetch_upload_modes
hf_raise_for_status(resp)
``` | null | https://api.github.com/repos/huggingface/datasets/issues/7073/timeline | null | completed | albertvillanova | 8,515,462 | MDQ6VXNlcjg1MTU0NjI= | https://avatars.githubusercontent.com/u/8515462?v=4 | https://api.github.com/users/albertvillanova | https://github.com/albertvillanova | https://api.github.com/users/albertvillanova/followers | https://api.github.com/users/albertvillanova/following{/other_user} | https://api.github.com/users/albertvillanova/gists{/gist_id} | https://api.github.com/users/albertvillanova/starred{/owner}{/repo} | https://api.github.com/users/albertvillanova/subscriptions | https://api.github.com/users/albertvillanova/orgs | https://api.github.com/users/albertvillanova/repos | https://api.github.com/users/albertvillanova/events{/privacy} | https://api.github.com/users/albertvillanova/received_events | User | public | false | 0 | 0 | 0 | 0 | 0 | 0 | 0 | https://api.github.com/repos/huggingface/datasets/issues/7073/reactions | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | null | null | null | null | albertvillanova | 8,515,462 | MDQ6VXNlcjg1MTU0NjI= | https://avatars.githubusercontent.com/u/8515462?v=4 | https://api.github.com/users/albertvillanova | https://github.com/albertvillanova | https://api.github.com/users/albertvillanova/followers | https://api.github.com/users/albertvillanova/following{/other_user} | https://api.github.com/users/albertvillanova/gists{/gist_id} | https://api.github.com/users/albertvillanova/starred{/owner}{/repo} | https://api.github.com/users/albertvillanova/subscriptions | https://api.github.com/users/albertvillanova/orgs | https://api.github.com/users/albertvillanova/repos | https://api.github.com/users/albertvillanova/events{/privacy} | https://api.github.com/users/albertvillanova/received_events | User | public | false | albertvillanova | 8,515,462 | MDQ6VXNlcjg1MTU0NjI= | https://avatars.githubusercontent.com/u/8515462?v=4 | https://api.github.com/users/albertvillanova | 
https://github.com/albertvillanova | https://api.github.com/users/albertvillanova/followers | https://api.github.com/users/albertvillanova/following{/other_user} | https://api.github.com/users/albertvillanova/gists{/gist_id} | https://api.github.com/users/albertvillanova/starred{/owner}{/repo} | https://api.github.com/users/albertvillanova/subscriptions | https://api.github.com/users/albertvillanova/orgs | https://api.github.com/users/albertvillanova/repos | https://api.github.com/users/albertvillanova/events{/privacy} | https://api.github.com/users/albertvillanova/received_events | User | public | false | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | false | [
"Any recent change in the API backend rejecting parameter `revision=\"refs/pr/1\"` to `HfApi.preupload_lfs_files`?\r\n```\r\nf\"{endpoint}/api/{repo_type}s/{repo_id}/preupload/{revision}\"\r\n\r\nhttps://hub-ci.huggingface.co/api/datasets/__DUMMY_TRANSFORMERS_USER__/test-dataset-5188a8-17219154347516/preupload/refs... | |||
https://api.github.com/repos/huggingface/datasets/issues/7072 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7072/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7072/comments | https://api.github.com/repos/huggingface/datasets/issues/7072/events | https://github.com/huggingface/datasets/issues/7072 | 2,430,577,916 | I_kwDODunzps6Q36z8 | 7,072 | nm | [] | closed | false | null | [] | null | 0 | 1,721 | 1,721 | 1,721 | NONE | null | null | null | null | https://api.github.com/repos/huggingface/datasets/issues/7072/timeline | null | not_planned | brettdavies | 26,392,883 | MDQ6VXNlcjI2MzkyODgz | https://avatars.githubusercontent.com/u/26392883?v=4 | https://api.github.com/users/brettdavies | https://github.com/brettdavies | https://api.github.com/users/brettdavies/followers | https://api.github.com/users/brettdavies/following{/other_user} | https://api.github.com/users/brettdavies/gists{/gist_id} | https://api.github.com/users/brettdavies/starred{/owner}{/repo} | https://api.github.com/users/brettdavies/subscriptions | https://api.github.com/users/brettdavies/orgs | https://api.github.com/users/brettdavies/repos | https://api.github.com/users/brettdavies/events{/privacy} | https://api.github.com/users/brettdavies/received_events | User | public | false | 0 | 0 | 0 | 0 | 0 | 0 | 0 | https://api.github.com/repos/huggingface/datasets/issues/7072/reactions | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | null | null | null | null | brettdavies | 26,392,883 | MDQ6VXNlcjI2MzkyODgz | https://avatars.githubusercontent.com/u/26392883?v=4 | https://api.github.com/users/brettdavies | https://github.com/brettdavies | https://api.github.com/users/brettdavies/followers | https://api.github.com/users/brettdavies/following{/other_user} | https://api.github.com/users/brettdavies/gists{/gist_id} | https://api.github.com/users/brettdavies/starred{/owner}{/repo} | 
https://api.github.com/users/brettdavies/subscriptions | https://api.github.com/users/brettdavies/orgs | https://api.github.com/users/brettdavies/repos | https://api.github.com/users/brettdavies/events{/privacy} | https://api.github.com/users/brettdavies/received_events | User | public | false | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | false | [] | ||
https://api.github.com/repos/huggingface/datasets/issues/7071 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7071/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7071/comments | https://api.github.com/repos/huggingface/datasets/issues/7071/events | https://github.com/huggingface/datasets/issues/7071 | 2,430,313,011 | I_kwDODunzps6Q26Iz | 7,071 | Filter hangs | [] | open | false | null | [] | null | 0 | 1,721 | 1,721 | null | NONE | null | null | ### Describe the bug
When trying to filter my custom dataset, the process hangs regardless of the lambda function used. It appears to be an issue with the way the images are handled. The dataset in question is a preprocessed version of https://huggingface.co/datasets/danaaubakirova/patfig where, notably, I have converted the data to the Parquet format.
### Steps to reproduce the bug
```python
from datasets import load_dataset
ds = load_dataset('lcolonn/patfig', split='test')
ds_filtered = ds.filter(lambda row: row['cpc_class'] != 'Y')
```
Eventually I press Ctrl+C and obtain this stack trace:
```
>>> ds_filtered = ds.filter(lambda row: row['cpc_class'] != 'Y')
Filter: 0%| | 0/998 [00:00<?, ? examples/s]Filter: 0%| | 0/998 [00:35<?, ? examples/s]
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/home/l-walewski/miniconda3/envs/patentqa/lib/python3.11/site-packages/datasets/arrow_dataset.py", line 567, in wrapper
out: Union["Dataset", "DatasetDict"] = func(self, *args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/l-walewski/miniconda3/envs/patentqa/lib/python3.11/site-packages/datasets/fingerprint.py", line 482, in wrapper
out = func(dataset, *args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/l-walewski/miniconda3/envs/patentqa/lib/python3.11/site-packages/datasets/arrow_dataset.py", line 3714, in filter
indices = self.map(
^^^^^^^^^
File "/home/l-walewski/miniconda3/envs/patentqa/lib/python3.11/site-packages/datasets/arrow_dataset.py", line 602, in wrapper
out: Union["Dataset", "DatasetDict"] = func(self, *args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/l-walewski/miniconda3/envs/patentqa/lib/python3.11/site-packages/datasets/arrow_dataset.py", line 567, in wrapper
out: Union["Dataset", "DatasetDict"] = func(self, *args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/l-walewski/miniconda3/envs/patentqa/lib/python3.11/site-packages/datasets/arrow_dataset.py", line 3161, in map
for rank, done, content in Dataset._map_single(**dataset_kwargs):
File "/home/l-walewski/miniconda3/envs/patentqa/lib/python3.11/site-packages/datasets/arrow_dataset.py", line 3552, in _map_single
batch = apply_function_on_filtered_inputs(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/l-walewski/miniconda3/envs/patentqa/lib/python3.11/site-packages/datasets/arrow_dataset.py", line 3421, in apply_function_on_filtered_inputs
processed_inputs = function(*fn_args, *additional_args, **fn_kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/l-walewski/miniconda3/envs/patentqa/lib/python3.11/site-packages/datasets/arrow_dataset.py", line 6478, in get_indices_from_mask_function
num_examples = len(batch[next(iter(batch.keys()))])
~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/l-walewski/miniconda3/envs/patentqa/lib/python3.11/site-packages/datasets/formatting/formatting.py", line 273, in __getitem__
value = self.format(key)
^^^^^^^^^^^^^^^^
File "/home/l-walewski/miniconda3/envs/patentqa/lib/python3.11/site-packages/datasets/formatting/formatting.py", line 376, in format
return self.formatter.format_column(self.pa_table.select([key]))
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/l-walewski/miniconda3/envs/patentqa/lib/python3.11/site-packages/datasets/formatting/formatting.py", line 443, in format_column
column = self.python_features_decoder.decode_column(column, pa_table.column_names[0])
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/l-walewski/miniconda3/envs/patentqa/lib/python3.11/site-packages/datasets/formatting/formatting.py", line 219, in decode_column
return self.features.decode_column(column, column_name) if self.features else column
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/l-walewski/miniconda3/envs/patentqa/lib/python3.11/site-packages/datasets/features/features.py", line 2008, in decode_column
[decode_nested_example(self[column_name], value) if value is not None else None for value in column]
File "/home/l-walewski/miniconda3/envs/patentqa/lib/python3.11/site-packages/datasets/features/features.py", line 2008, in <listcomp>
[decode_nested_example(self[column_name], value) if value is not None else None for value in column]
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/l-walewski/miniconda3/envs/patentqa/lib/python3.11/site-packages/datasets/features/features.py", line 1351, in decode_nested_example
return schema.decode_example(obj, token_per_repo_id=token_per_repo_id)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/l-walewski/miniconda3/envs/patentqa/lib/python3.11/site-packages/datasets/features/image.py", line 188, in decode_example
image.load() # to avoid "Too many open files" errors
^^^^^^^^^^^^
File "/home/l-walewski/miniconda3/envs/patentqa/lib/python3.11/site-packages/PIL/ImageFile.py", line 293, in load
n, err_code = decoder.decode(b)
^^^^^^^^^^^^^^^^^
KeyboardInterrupt
```
Warning! This even appears to crash some machines.
### Expected behavior
Should return the filtered dataset
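A possible workaround (assumption: the hang comes from decoding every image even though the predicate never looks at it) is to restrict the filter to the columns it actually needs, e.g. `ds.filter(lambda cpc_class: cpc_class != "Y", input_columns=["cpc_class"])`. The idea behind `input_columns`, sketched in plain Python with a counter standing in for image decoding:

```python
# Toy illustration (no datasets dependency): declaring the needed columns
# lets the framework skip decoding the expensive ones.
decode_count = {"image": 0}

def load_row(i, columns=("image", "cpc_class")):
    row = {}
    if "image" in columns:
        decode_count["image"] += 1       # stands in for PIL image decoding
        row["image"] = b"...pixels..."
    if "cpc_class" in columns:
        row["cpc_class"] = "Y" if i % 2 else "A"
    return row

# Filtering on full rows decodes every image:
[i for i in range(4) if load_row(i)["cpc_class"] != "Y"]
print(decode_count["image"])  # 4

# Filtering only on the needed column decodes no further images:
[i for i in range(4) if load_row(i, columns=("cpc_class",))["cpc_class"] != "Y"]
print(decode_count["image"])  # still 4
```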
### Environment info
- `datasets` version: 2.20.0
- Platform: Linux-6.5.0-41-generic-x86_64-with-glibc2.35
- Python version: 3.11.9
- `huggingface_hub` version: 0.24.0
- PyArrow version: 17.0.0
- Pandas version: 2.2.2
- `fsspec` version: 2024.5.0 | null | https://api.github.com/repos/huggingface/datasets/issues/7071/timeline | null | null | lucienwalewski | 61,711,045 | MDQ6VXNlcjYxNzExMDQ1 | https://avatars.githubusercontent.com/u/61711045?v=4 | https://api.github.com/users/lucienwalewski | https://github.com/lucienwalewski | https://api.github.com/users/lucienwalewski/followers | https://api.github.com/users/lucienwalewski/following{/other_user} | https://api.github.com/users/lucienwalewski/gists{/gist_id} | https://api.github.com/users/lucienwalewski/starred{/owner}{/repo} | https://api.github.com/users/lucienwalewski/subscriptions | https://api.github.com/users/lucienwalewski/orgs | https://api.github.com/users/lucienwalewski/repos | https://api.github.com/users/lucienwalewski/events{/privacy} | https://api.github.com/users/lucienwalewski/received_events | User | public | false | 0 | 0 | 0 | 0 | 0 | 0 | 0 | https://api.github.com/repos/huggingface/datasets/issues/7071/reactions | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | false | [] | |
https://api.github.com/repos/huggingface/datasets/issues/7070 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7070/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7070/comments | https://api.github.com/repos/huggingface/datasets/issues/7070/events | https://github.com/huggingface/datasets/issues/7070 | 2,430,285,235 | I_kwDODunzps6Q2zWz | 7,070 | how set_transform affects batch size? | [] | open | false | null | [] | null | 0 | 1,721 | 1,721 | null | NONE | null | null | ### Describe the bug
I am trying to fine-tune w2v-bert for an ASR task. Since my dataset is so big, I preferred to use the on-the-fly method with `set_transform`, so I changed the preprocessing function to this:
```python
def prepare_dataset(batch):
    input_features = processor(batch["audio"], sampling_rate=16000).input_features[0]
    input_length = len(input_features)
    labels = processor.tokenizer(batch["text"], padding=False).input_ids
    batch = {
        "input_features": [input_features],
        "input_length": [input_length],
        "labels": [labels],
    }
    return batch

train_ds.set_transform(prepare_dataset)
val_ds.set_transform(prepare_dataset)
```
After this, I also had to change the DataCollatorCTCWithPadding class like this:
```python
@dataclass
class DataCollatorCTCWithPadding:
    processor: Wav2Vec2BertProcessor
    padding: Union[bool, str] = True

    def __call__(self, features: List[Dict[str, Union[List[int], torch.Tensor]]]) -> Dict[str, torch.Tensor]:
        # Separate input_features and labels
        input_features = [{"input_features": feature["input_features"][0]} for feature in features]
        labels = [feature["labels"][0] for feature in features]

        # Pad input features
        batch = self.processor.pad(
            input_features,
            padding=self.padding,
            return_tensors="pt",
        )

        # Pad and process labels
        label_features = self.processor.tokenizer.pad(
            {"input_ids": labels},
            padding=self.padding,
            return_tensors="pt",
        )
        labels = label_features["input_ids"]
        attention_mask = label_features["attention_mask"]

        # Replace padding with -100 to ignore these tokens during loss calculation
        labels = labels.masked_fill(attention_mask.ne(1), -100)
        batch["labels"] = labels
        return batch
```
But now a strange thing is happening: no matter how much I increase the batch size, the GPU VRAM usage does not change, while the total number of steps in the progress bar (logging) does. Is this normal, or have I made a mistake?
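For context on the mechanism: with `set_transform` the formatting function is not applied ahead of time — it runs on whatever slice of rows is accessed, so a single-index access hands it one example while a slice hands it several. A minimal pure-Python sketch of that access pattern (illustrative only, not the actual `datasets` internals; `OnTheFlyDataset` is a made-up name):

```python
class OnTheFlyDataset:
    """Toy stand-in for a Dataset with set_transform: the transform runs at access time."""

    def __init__(self, rows, transform):
        self.rows = rows
        self.transform = transform
        self.calls = []  # how many rows each transform call received

    def __getitem__(self, idx):
        batch = self.rows[idx] if isinstance(idx, slice) else self.rows[idx:idx + 1]
        self.calls.append(len(batch))
        return self.transform(batch)


rows = list(range(10))
ds = OnTheFlyDataset(rows, transform=lambda batch: {"x": [r * 2 for r in batch]})

one = ds[0]     # single-index access: transform sees 1 row
four = ds[0:4]  # slice access: transform sees 4 rows
print(ds.calls)  # -> [1, 4]
```

Note that a default PyTorch DataLoader over a map-style dataset fetches indices one at a time and collates afterwards, so the transform sees one example per call regardless of `batch_size`.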
### Steps to reproduce the bug
I can share my code if needed.
### Expected behavior
For each batch, the `set_transform` function should be applied to a number of examples equal to the batch size, and the result given to the model as one batch.
### Environment info
all updated versions | null | https://api.github.com/repos/huggingface/datasets/issues/7070/timeline | null | null | VafaKnm | 103,993,288 | U_kgDOBjLPyA | https://avatars.githubusercontent.com/u/103993288?v=4 | https://api.github.com/users/VafaKnm | https://github.com/VafaKnm | https://api.github.com/users/VafaKnm/followers | https://api.github.com/users/VafaKnm/following{/other_user} | https://api.github.com/users/VafaKnm/gists{/gist_id} | https://api.github.com/users/VafaKnm/starred{/owner}{/repo} | https://api.github.com/users/VafaKnm/subscriptions | https://api.github.com/users/VafaKnm/orgs | https://api.github.com/users/VafaKnm/repos | https://api.github.com/users/VafaKnm/events{/privacy} | https://api.github.com/users/VafaKnm/received_events | User | public | false | 0 | 0 | 0 | 0 | 0 | 0 | 0 | https://api.github.com/repos/huggingface/datasets/issues/7070/reactions | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | false | [] | |
https://api.github.com/repos/huggingface/datasets/issues/7067 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7067/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7067/comments | https://api.github.com/repos/huggingface/datasets/issues/7067/events | https://github.com/huggingface/datasets/issues/7067 | 2,425,460,168 | I_kwDODunzps6QkZXI | 7,067 | Convert_to_parquet fails for datasets with multiple configs | [] | closed | false | null | [
{
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_... | null | 3 | 1,721 | 1,722 | 1,722 | NONE | null | null | If the dataset has multiple configs, when using the `datasets-cli convert_to_parquet` command to avoid issues with the data viewer caused by loading scripts, the conversion process only successfully converts the data corresponding to the first config. When it starts converting the second config, it throws an error:
```
Traceback (most recent call last):
File "/opt/anaconda3/envs/dl/bin/datasets-cli", line 8, in <module>
sys.exit(main())
File "/opt/anaconda3/envs/dl/lib/python3.10/site-packages/datasets/commands/datasets_cli.py", line 41, in main
service.run()
File "/opt/anaconda3/envs/dl/lib/python3.10/site-packages/datasets/commands/convert_to_parquet.py", line 83, in run
dataset.push_to_hub(
File "/opt/anaconda3/envs/dl/lib/python3.10/site-packages/datasets/dataset_dict.py", line 1713, in push_to_hub
api.create_branch(repo_id, branch=revision, token=token, repo_type="dataset", exist_ok=True)
File "/opt/anaconda3/envs/dl/lib/python3.10/site-packages/huggingface_hub/utils/_validators.py", line 114, in _inner_fn
return fn(*args, **kwargs)
File "/opt/anaconda3/envs/dl/lib/python3.10/site-packages/huggingface_hub/hf_api.py", line 5503, in create_branch
hf_raise_for_status(response)
File "/opt/anaconda3/envs/dl/lib/python3.10/site-packages/huggingface_hub/utils/_errors.py", line 358, in hf_raise_for_status
raise BadRequestError(message, response=response) from e
huggingface_hub.utils._errors.BadRequestError: (Request ID: Root=1-669fc665-7c2e80d75f4337496ee95402;731fcdc7-0950-4eec-99cf-ce047b8d003f)
Bad request:
Invalid reference for a branch: refs/pr/1
``` | null | https://api.github.com/repos/huggingface/datasets/issues/7067/timeline | null | completed | HuangZhen02 | 97,585,031 | U_kgDOBdEHhw | https://avatars.githubusercontent.com/u/97585031?v=4 | https://api.github.com/users/HuangZhen02 | https://github.com/HuangZhen02 | https://api.github.com/users/HuangZhen02/followers | https://api.github.com/users/HuangZhen02/following{/other_user} | https://api.github.com/users/HuangZhen02/gists{/gist_id} | https://api.github.com/users/HuangZhen02/starred{/owner}{/repo} | https://api.github.com/users/HuangZhen02/subscriptions | https://api.github.com/users/HuangZhen02/orgs | https://api.github.com/users/HuangZhen02/repos | https://api.github.com/users/HuangZhen02/events{/privacy} | https://api.github.com/users/HuangZhen02/received_events | User | public | false | 0 | 0 | 0 | 0 | 0 | 0 | 0 | https://api.github.com/repos/huggingface/datasets/issues/7067/reactions | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | null | null | null | null | albertvillanova | 8,515,462 | MDQ6VXNlcjg1MTU0NjI= | https://avatars.githubusercontent.com/u/8515462?v=4 | https://api.github.com/users/albertvillanova | https://github.com/albertvillanova | https://api.github.com/users/albertvillanova/followers | https://api.github.com/users/albertvillanova/following{/other_user} | https://api.github.com/users/albertvillanova/gists{/gist_id} | https://api.github.com/users/albertvillanova/starred{/owner}{/repo} | https://api.github.com/users/albertvillanova/subscriptions | https://api.github.com/users/albertvillanova/orgs | https://api.github.com/users/albertvillanova/repos | https://api.github.com/users/albertvillanova/events{/privacy} | https://api.github.com/users/albertvillanova/received_events | User | public | false | albertvillanova | 8,515,462 | MDQ6VXNlcjg1MTU0NjI= | https://avatars.githubusercontent.com/u/8515462?v=4 | https://api.github.com/users/albertvillanova | https://github.com/albertvillanova | 
https://api.github.com/users/albertvillanova/followers | https://api.github.com/users/albertvillanova/following{/other_user} | https://api.github.com/users/albertvillanova/gists{/gist_id} | https://api.github.com/users/albertvillanova/starred{/owner}{/repo} | https://api.github.com/users/albertvillanova/subscriptions | https://api.github.com/users/albertvillanova/orgs | https://api.github.com/users/albertvillanova/repos | https://api.github.com/users/albertvillanova/events{/privacy} | https://api.github.com/users/albertvillanova/received_events | User | public | false | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | false | [
"Many users have encountered the same issue, which has caused inconvenience.\r\n\r\nhttps://discuss.huggingface.co/t/convert-to-parquet-fails-for-datasets-with-multiple-configs/86733\r\n",
"Thanks for reporting.\r\n\r\nI will make the code more robust.",
"I have opened an issue in the huggingface-hub repo:\r\n-... | |||
https://api.github.com/repos/huggingface/datasets/issues/7066 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7066/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7066/comments | https://api.github.com/repos/huggingface/datasets/issues/7066/events | https://github.com/huggingface/datasets/issues/7066 | 2,425,125,160 | I_kwDODunzps6QjHko | 7,066 | One subset per file in repo ? | [] | open | false | null | [] | null | 1 | 1,721 | 1,750 | null | MEMBER | null | null | Right now we consider all the files of a dataset to be the same data, e.g.
```
single_subset_dataset/
├── train0.jsonl
├── train1.jsonl
└── train2.jsonl
```
but in cases like this, each file is actually a different subset of the dataset and should be loaded separately
```
many_subsets_dataset/
├── animals.jsonl
├── trees.jsonl
└── metadata.jsonl
```
It would be nice to detect those subsets automatically using a simple heuristic. For example we can group files together if their paths names are the same except some digits ? | null | https://api.github.com/repos/huggingface/datasets/issues/7066/timeline | null | null | lhoestq | 42,851,186 | MDQ6VXNlcjQyODUxMTg2 | https://avatars.githubusercontent.com/u/42851186?v=4 | https://api.github.com/users/lhoestq | https://github.com/lhoestq | https://api.github.com/users/lhoestq/followers | https://api.github.com/users/lhoestq/following{/other_user} | https://api.github.com/users/lhoestq/gists{/gist_id} | https://api.github.com/users/lhoestq/starred{/owner}{/repo} | https://api.github.com/users/lhoestq/subscriptions | https://api.github.com/users/lhoestq/orgs | https://api.github.com/users/lhoestq/repos | https://api.github.com/users/lhoestq/events{/privacy} | https://api.github.com/users/lhoestq/received_events | User | public | false | 0 | 0 | 0 | 0 | 0 | 0 | 0 | https://api.github.com/repos/huggingface/datasets/issues/7066/reactions | 3 | 3 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | false | [
"Hi @lhoestq! I’ve opened a PR that addresses this issue"
] | |
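A sketch of one such heuristic — the names below are illustrative, not an actual `datasets` API — grouping files whose names are identical except for digits:

```python
import re

def subset_key(filename):
    # "train0.jsonl" and "train12.jsonl" both map to "train.jsonl"
    return re.sub(r"\d+", "", filename)

def group_subsets(filenames):
    groups = {}
    for name in sorted(filenames):
        groups.setdefault(subset_key(name), []).append(name)
    return groups

print(group_subsets(["train0.jsonl", "train1.jsonl", "train2.jsonl"]))
# -> {'train.jsonl': ['train0.jsonl', 'train1.jsonl', 'train2.jsonl']}

print(group_subsets(["animals.jsonl", "trees.jsonl", "metadata.jsonl"]))
# -> three separate single-file groups
```

A real implementation would need extra care around split names ("train", "test") and shard numbering conventions, which also end in digits.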
https://api.github.com/repos/huggingface/datasets/issues/7065 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7065/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7065/comments | https://api.github.com/repos/huggingface/datasets/issues/7065/events | https://github.com/huggingface/datasets/issues/7065 | 2,424,734,953 | I_kwDODunzps6QhoTp | 7,065 | Cannot get item after loading from disk and then converting to iterable. | [] | open | false | null | [] | null | 0 | 1,721 | 1,721 | null | NONE | null | null | ### Describe the bug
The dataset generated from local file works fine.
```py
root = "/home/data/train"
file_list1 = glob(os.path.join(root, "*part1.flac"))
file_list2 = glob(os.path.join(root, "*part2.flac"))
ds = (
    Dataset.from_dict({"part1": file_list1, "part2": file_list2})
    .cast_column("part1", Audio(sampling_rate=None, mono=False))
    .cast_column("part2", Audio(sampling_rate=None, mono=False))
)
ids = ds.to_iterable_dataset(128)
ids = ids.shuffle(buffer_size=10000, seed=42)
dataloader = DataLoader(ids, num_workers=4, batch_size=8, persistent_workers=True)
for batch in dataloader:
    break
```
But after saving it to disk and then loading it from disk, I cannot get data as expected.
```py
root = "/home/data/train"
file_list1 = glob(os.path.join(root, "*part1.flac"))
file_list2 = glob(os.path.join(root, "*part2.flac"))
ds = (
    Dataset.from_dict({"part1": file_list1, "part2": file_list2})
    .cast_column("part1", Audio(sampling_rate=None, mono=False))
    .cast_column("part2", Audio(sampling_rate=None, mono=False))
)
ds.save_to_disk("./train")
ds = datasets.load_from_disk("./train")
ids = ds.to_iterable_dataset(128)
ids = ids.shuffle(buffer_size=10000, seed=42)
dataloader = DataLoader(ids, num_workers=4, batch_size=8, persistent_workers=True)
for batch in dataloader:
    break
```
After a long wait, an error occurs:
```
Loading dataset from disk: 100%|█████████████████████████████████████████████████████████████████████████| 165/165 [00:00<00:00, 6422.18it/s]
Traceback (most recent call last):
File "/home/hanzerui/.conda/envs/mss/lib/python3.10/site-packages/torch/utils/data/dataloader.py", line 1133, in _try_get_data
data = self._data_queue.get(timeout=timeout)
File "/home/hanzerui/.conda/envs/mss/lib/python3.10/multiprocessing/queues.py", line 113, in get
if not self._poll(timeout):
File "/home/hanzerui/.conda/envs/mss/lib/python3.10/multiprocessing/connection.py", line 257, in poll
return self._poll(timeout)
File "/home/hanzerui/.conda/envs/mss/lib/python3.10/multiprocessing/connection.py", line 424, in _poll
r = wait([self], timeout)
File "/home/hanzerui/.conda/envs/mss/lib/python3.10/multiprocessing/connection.py", line 931, in wait
ready = selector.select(timeout)
File "/home/hanzerui/.conda/envs/mss/lib/python3.10/selectors.py", line 416, in select
fd_event_list = self._selector.poll(timeout)
File "/home/hanzerui/.conda/envs/mss/lib/python3.10/site-packages/torch/utils/data/_utils/signal_handling.py", line 66, in handler
_error_if_any_worker_fails()
RuntimeError: DataLoader worker (pid 3490529) is killed by signal: Killed.
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "/home/hanzerui/.conda/envs/mss/lib/python3.10/runpy.py", line 196, in _run_module_as_main
return _run_code(code, main_globals, None,
File "/home/hanzerui/.conda/envs/mss/lib/python3.10/runpy.py", line 86, in _run_code
exec(code, run_globals)
File "/home/hanzerui/.vscode-server/extensions/ms-python.debugpy-2024.9.12011011/bundled/libs/debugpy/adapter/../../debugpy/launcher/../../debugpy/__main__.py", line 39, in <module>
cli.main()
File "/home/hanzerui/.vscode-server/extensions/ms-python.debugpy-2024.9.12011011/bundled/libs/debugpy/adapter/../../debugpy/launcher/../../debugpy/../debugpy/server/cli.py", line 430, in main
run()
File "/home/hanzerui/.vscode-server/extensions/ms-python.debugpy-2024.9.12011011/bundled/libs/debugpy/adapter/../../debugpy/launcher/../../debugpy/../debugpy/server/cli.py", line 284, in run_file
runpy.run_path(target, run_name="__main__")
File "/home/hanzerui/.vscode-server/extensions/ms-python.debugpy-2024.9.12011011/bundled/libs/debugpy/_vendored/pydevd/_pydevd_bundle/pydevd_runpy.py", line 321, in run_path
return _run_module_code(code, init_globals, run_name,
File "/home/hanzerui/.vscode-server/extensions/ms-python.debugpy-2024.9.12011011/bundled/libs/debugpy/_vendored/pydevd/_pydevd_bundle/pydevd_runpy.py", line 135, in _run_module_code
_run_code(code, mod_globals, init_globals,
File "/home/hanzerui/.vscode-server/extensions/ms-python.debugpy-2024.9.12011011/bundled/libs/debugpy/_vendored/pydevd/_pydevd_bundle/pydevd_runpy.py", line 124, in _run_code
exec(code, run_globals)
File "/home/hanzerui/workspace/NetEase/test/test_datasets.py", line 60, in <module>
for batch in dataloader:
File "/home/hanzerui/.conda/envs/mss/lib/python3.10/site-packages/torch/utils/data/dataloader.py", line 631, in __next__
data = self._next_data()
File "/home/hanzerui/.conda/envs/mss/lib/python3.10/site-packages/torch/utils/data/dataloader.py", line 1329, in _next_data
idx, data = self._get_data()
File "/home/hanzerui/.conda/envs/mss/lib/python3.10/site-packages/torch/utils/data/dataloader.py", line 1295, in _get_data
success, data = self._try_get_data()
File "/home/hanzerui/.conda/envs/mss/lib/python3.10/site-packages/torch/utils/data/dataloader.py", line 1146, in _try_get_data
raise RuntimeError(f'DataLoader worker (pid(s) {pids_str}) exited unexpectedly') from e
RuntimeError: DataLoader worker (pid(s) 3490529) exited unexpectedly
```
It seems that streaming is not supported by `load_from_disk`, so does that mean I cannot convert it to an iterable dataset?
### Steps to reproduce the bug
1. Create a `Dataset` from local files with `from_dict`
2. Save it to disk with `save_to_disk`
3. Load it from disk with `load_from_disk`
4. Convert to iterable with `to_iterable_dataset`
5. Loop the dataset
### Expected behavior
Get items faster than the original dataset generated from dict.
### Environment info
- `datasets` version: 2.20.0
- Platform: Linux-6.5.0-41-generic-x86_64-with-glibc2.35
- Python version: 3.10.14
- `huggingface_hub` version: 0.23.2
- PyArrow version: 17.0.0
- Pandas version: 2.2.2
- `fsspec` version: 2024.5.0 | null | https://api.github.com/repos/huggingface/datasets/issues/7065/timeline | null | null | happyTonakai | 21,305,646 | MDQ6VXNlcjIxMzA1NjQ2 | https://avatars.githubusercontent.com/u/21305646?v=4 | https://api.github.com/users/happyTonakai | https://github.com/happyTonakai | https://api.github.com/users/happyTonakai/followers | https://api.github.com/users/happyTonakai/following{/other_user} | https://api.github.com/users/happyTonakai/gists{/gist_id} | https://api.github.com/users/happyTonakai/starred{/owner}{/repo} | https://api.github.com/users/happyTonakai/subscriptions | https://api.github.com/users/happyTonakai/orgs | https://api.github.com/users/happyTonakai/repos | https://api.github.com/users/happyTonakai/events{/privacy} | https://api.github.com/users/happyTonakai/received_events | User | public | false | 0 | 0 | 0 | 0 | 0 | 0 | 0 | https://api.github.com/repos/huggingface/datasets/issues/7065/reactions | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | false | [] | |
https://api.github.com/repos/huggingface/datasets/issues/7063 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7063/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7063/comments | https://api.github.com/repos/huggingface/datasets/issues/7063/events | https://github.com/huggingface/datasets/issues/7063 | 2,424,488,648 | I_kwDODunzps6QgsLI | 7,063 | Add `batch` method to `Dataset` | [
{
"id": 1935892871,
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement",
"name": "enhancement",
"color": "a2eeef",
"default": true,
"description": "New feature or request"
}
] | closed | false | null | [] | null | 0 | 1,721 | 1,721 | 1,721 | CONTRIBUTOR | null | null | ### Feature request
Add a `batch` method to the Dataset class, similar to the one recently implemented for `IterableDataset` in PR #7054.
### Motivation
A batched iteration speeds up data loading significantly (see e.g. #6279)
### Your contribution
I plan to open a PR to implement this. | null | https://api.github.com/repos/huggingface/datasets/issues/7063/timeline | null | completed | lappemic | 61,876,623 | MDQ6VXNlcjYxODc2NjIz | https://avatars.githubusercontent.com/u/61876623?v=4 | https://api.github.com/users/lappemic | https://github.com/lappemic | https://api.github.com/users/lappemic/followers | https://api.github.com/users/lappemic/following{/other_user} | https://api.github.com/users/lappemic/gists{/gist_id} | https://api.github.com/users/lappemic/starred{/owner}{/repo} | https://api.github.com/users/lappemic/subscriptions | https://api.github.com/users/lappemic/orgs | https://api.github.com/users/lappemic/repos | https://api.github.com/users/lappemic/events{/privacy} | https://api.github.com/users/lappemic/received_events | User | public | false | 0 | 0 | 0 | 0 | 0 | 0 | 0 | https://api.github.com/repos/huggingface/datasets/issues/7063/reactions | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | null | null | null | null | lhoestq | 42,851,186 | MDQ6VXNlcjQyODUxMTg2 | https://avatars.githubusercontent.com/u/42851186?v=4 | https://api.github.com/users/lhoestq | https://github.com/lhoestq | https://api.github.com/users/lhoestq/followers | https://api.github.com/users/lhoestq/following{/other_user} | https://api.github.com/users/lhoestq/gists{/gist_id} | https://api.github.com/users/lhoestq/starred{/owner}{/repo} | https://api.github.com/users/lhoestq/subscriptions | https://api.github.com/users/lhoestq/orgs | https://api.github.com/users/lhoestq/repos | https://api.github.com/users/lhoestq/events{/privacy} | https://api.github.com/users/lhoestq/received_events | User | public | false | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null 
| null | null | null | null | null | null | null | false | [] | ||
https://api.github.com/repos/huggingface/datasets/issues/7061 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7061/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7061/comments | https://api.github.com/repos/huggingface/datasets/issues/7061/events | https://github.com/huggingface/datasets/issues/7061 | 2,423,786,881 | I_kwDODunzps6QeA2B | 7,061 | Custom Dataset | Still Raise Error while handling errors in _generate_examples | [] | open | false | null | [] | null | 0 | 1,721 | 1,725 | null | NONE | null | null | ### Describe the bug
I am following this [example](https://discuss.huggingface.co/t/error-handling-in-iterabledataset/72827/3) to handle errors in a custom dataset. I am writing a dataset script which reads jsonl files, and I need to handle errors and continue reading the files without raising an exception and exiting the execution.
```
def _generate_examples(self, filepaths):
    errors = []
    id_ = 0
    for filepath in filepaths:
        try:
            with open(filepath, 'r') as f:
                for line in f:
                    json_obj = json.loads(line)
                    yield id_, json_obj
                    id_ += 1
        except Exception as exc:
            logger.error(f"error occurred at filepath: {filepath}")
            errors.append(exc)
```
It seems the logger.error message is printed, but an exception is still raised and the run exits.
```
Downloading and preparing dataset custom_dataset/default to /home/myuser/.cache/huggingface/datasets/custom_dataset/default-a14cdd566afee0a6/1.0.0/acfcc9fb9c57034b580c4252841
ERROR: datasets_modules.datasets.custom_dataset.acfcc9fb9c57034b580c4252841bb890a5617cbd28678dd4be5e52b81188ad02.custom_dataset: 2024-07-22 10:47:42,167: error occur at filepath: '/home/myuser/ds/corrupted-file.jsonl
Traceback (most recent call last):
File "/home/myuser/.cache/huggingface/modules/datasets_modules/datasets/custom_dataset/ac..2/custom_dataset.py", line 48, in _generate_examples
json_obj = json.loads(line)
File "myenv/lib/python3.8/json/__init__.py", line 357, in loads
return _default_decoder.decode(s)
File "myenv/lib/python3.8/json/decoder.py", line 337, in decode
obj, end = self.raw_decode(s, idx=_w(s, 0).end())
File "myenv/lib/python3.8/json/decoder.py", line 353, in raw_decode
obj, end = self.scan_once(s, idx)
json.decoder.JSONDecodeError: Invalid control character at: line 1 column 4 (char 3)
Generating train split: 0 examples [00:06, ? examples/s]>
RemoteTraceback:
"""
Traceback (most recent call last):
File "myenv/lib/python3.8/site-packages/datasets/builder.py", line 1637, in _prepare_split_single
num_examples, num_bytes = writer.finalize()
File "myenv/lib/python3.8/site-packages/datasets/arrow_writer.py", line 594, in finalize
raise SchemaInferenceError("Please pass `features` or at least one example when writing data")
datasets.arrow_writer.SchemaInferenceError: Please pass `features` or at least one example when writing data
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "myenv/lib/python3.8/site-packages/multiprocess/pool.py", line 125, in worker
result = (True, func(*args, **kwds))
File "myenv/lib/python3.8/site-packages/datasets/utils/py_utils.py", line 1353, in
_write_generator_to_queue
for i, result in enumerate(func(**kwargs)):
File "myenv/lib/python3.8/site-packages/datasets/builder.py", line 1646, in _prepare_split_single
raise DatasetGenerationError("An error occurred while generating the dataset") from e
datasets.builder.DatasetGenerationError: An error occurred while generating the dataset
"""
The above exception was the direct cause of the following exception:
│ │
│ myenv/lib/python3.8/site-packages/datasets/utils/py_utils. │
│ py:1377 in <listcomp> │
│ │
│ 1374 │ │ │ │ if all(async_result.ready() for async_result in async_results) and queue │
│ 1375 │ │ │ │ │ break │
│ 1376 │ │ # we get the result in case there's an error to raise │
│ ❱ 1377 │ │ [async_result.get() for async_result in async_results] │
│ 1378 │
│ │
│ ╭──────────────────────────────── locals ─────────────────────────────────╮ │
│ │ .0 = <list_iterator object at 0x7f2cc1f0ce20> │ │
│ │ async_result = <multiprocess.pool.ApplyResult object at 0x7f2cc1f79c10> │ │
│ ╰─────────────────────────────────────────────────────────────────────────╯ │
│ │
│ myenv/lib/python3.8/site-packages/multiprocess/pool.py:771 │
│ in get │
│ │
│ 768 │ │ if self._success: │
│ 769 │ │ │ return self._value │
│ 770 │ │ else: │
│ ❱ 771 │ │ │ raise self._value │
│ 772 │ │
│ 773 │ def _set(self, i, obj): │
│ 774 │ │ self._success, self._value = obj │
│ │
│ ╭────────────────────────────── locals ──────────────────────────────╮ │
│ │ self = <multiprocess.pool.ApplyResult object at 0x7f2cc1f79c10> │ │
│ │ timeout = None │ │
│ ╰────────────────────────────────────────────────────────────────────╯ │
DatasetGenerationError: An error occurred while generating the dataset
```
### Steps to reproduce the bug
same as above
### Expected behavior
It should handle the error and continue reading the remaining files.
### Environment info
python 3.9 | null | https://api.github.com/repos/huggingface/datasets/issues/7061/timeline | null | null | hahmad2008 | 68,266,028 | MDQ6VXNlcjY4MjY2MDI4 | https://avatars.githubusercontent.com/u/68266028?v=4 | https://api.github.com/users/hahmad2008 | https://github.com/hahmad2008 | https://api.github.com/users/hahmad2008/followers | https://api.github.com/users/hahmad2008/following{/other_user} | https://api.github.com/users/hahmad2008/gists{/gist_id} | https://api.github.com/users/hahmad2008/starred{/owner}{/repo} | https://api.github.com/users/hahmad2008/subscriptions | https://api.github.com/users/hahmad2008/orgs | https://api.github.com/users/hahmad2008/repos | https://api.github.com/users/hahmad2008/events{/privacy} | https://api.github.com/users/hahmad2008/received_events | User | public | false | 0 | 0 | 0 | 0 | 0 | 0 | 0 | https://api.github.com/repos/huggingface/datasets/issues/7061/reactions | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | false | [] | |
https://api.github.com/repos/huggingface/datasets/issues/7059 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7059/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7059/comments | https://api.github.com/repos/huggingface/datasets/issues/7059/events | https://github.com/huggingface/datasets/issues/7059 | 2,422,827,892 | I_kwDODunzps6QaWt0 | 7,059 | None values are skipped when reading jsonl in subobjects | [] | open | false | null | [] | null | 0 | 1,721 | 1,721 | null | NONE | null | null | ### Describe the bug
I have been fighting against my machine since this morning only to find out this is some kind of a bug.
When loading a dataset composed of `metadata.jsonl`, if you have nullable values (Optional[str]), they can be ignored by the parser, shifting things around.
E.g., let's take this example
Here are two versions of the same dataset:
[not-buggy.tar.gz](https://github.com/user-attachments/files/16333532/not-buggy.tar.gz)
[buggy.tar.gz](https://github.com/user-attachments/files/16333553/buggy.tar.gz)
### Steps to reproduce the bug
1. Load the `buggy.tar.gz` dataset
2. Print the baselines: `dts = load_dataset("./data")["train"][0]["baselines"]`
3. Load the `not-buggy.tar.gz` dataset
4. Print the baselines: `dts = load_dataset("./data")["train"][0]["baselines"]`
### Expected behavior
Both should have 4 baseline entries:
1. Buggy should have None followed by three lists
2. Non-Buggy should have four lists, and the first one should be an empty list.
Case 1 does not work; case 2 works, even though `None` is accepted in positions other than the first.
### Environment info
- `datasets` version: 2.19.1
- Platform: Linux-6.5.0-44-generic-x86_64-with-glibc2.35
- Python version: 3.10.12
- `huggingface_hub` version: 0.23.0
- PyArrow version: 16.1.0
- Pandas version: 2.2.2
- `fsspec` version: 2024.3.1
| null | https://api.github.com/repos/huggingface/datasets/issues/7059/timeline | null | null | PonteIneptique | 1,929,830 | MDQ6VXNlcjE5Mjk4MzA= | https://avatars.githubusercontent.com/u/1929830?v=4 | https://api.github.com/users/PonteIneptique | https://github.com/PonteIneptique | https://api.github.com/users/PonteIneptique/followers | https://api.github.com/users/PonteIneptique/following{/other_user} | https://api.github.com/users/PonteIneptique/gists{/gist_id} | https://api.github.com/users/PonteIneptique/starred{/owner}{/repo} | https://api.github.com/users/PonteIneptique/subscriptions | https://api.github.com/users/PonteIneptique/orgs | https://api.github.com/users/PonteIneptique/repos | https://api.github.com/users/PonteIneptique/events{/privacy} | https://api.github.com/users/PonteIneptique/received_events | User | public | false | 0 | 0 | 0 | 0 | 0 | 0 | 0 | https://api.github.com/repos/huggingface/datasets/issues/7059/reactions | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | false | [] | |
https://api.github.com/repos/huggingface/datasets/issues/7058 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7058/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7058/comments | https://api.github.com/repos/huggingface/datasets/issues/7058/events | https://github.com/huggingface/datasets/issues/7058 | 2,422,560,355 | I_kwDODunzps6QZVZj | 7,058 | New feature type: Document | [] | open | false | null | [] | null | 0 | 1,721 | 1,721 | null | COLLABORATOR | null | null | It would be useful for PDF.
https://github.com/huggingface/dataset-viewer/issues/2991#issuecomment-2242656069 | null | https://api.github.com/repos/huggingface/datasets/issues/7058/timeline | null | null | severo | 1,676,121 | MDQ6VXNlcjE2NzYxMjE= | https://avatars.githubusercontent.com/u/1676121?v=4 | https://api.github.com/users/severo | https://github.com/severo | https://api.github.com/users/severo/followers | https://api.github.com/users/severo/following{/other_user} | https://api.github.com/users/severo/gists{/gist_id} | https://api.github.com/users/severo/starred{/owner}{/repo} | https://api.github.com/users/severo/subscriptions | https://api.github.com/users/severo/orgs | https://api.github.com/users/severo/repos | https://api.github.com/users/severo/events{/privacy} | https://api.github.com/users/severo/received_events | User | public | false | 0 | 0 | 0 | 0 | 0 | 0 | 0 | https://api.github.com/repos/huggingface/datasets/issues/7058/reactions | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | false | [] | |
https://api.github.com/repos/huggingface/datasets/issues/7055 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7055/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7055/comments | https://api.github.com/repos/huggingface/datasets/issues/7055/events | https://github.com/huggingface/datasets/issues/7055 | 2,421,708,891 | I_kwDODunzps6QWFhb | 7,055 | WebDataset with different prefixes are unsupported | [] | closed | false | null | [] | null | 8 | 1,721 | 1,721 | 1,721 | NONE | null | null | ### Describe the bug
Consider a WebDataset with multiple images for each item where the number of images may vary: [example](https://huggingface.co/datasets/bigdata-pw/fashion-150k)
Due to this [code](https://github.com/huggingface/datasets/blob/87f4c2088854ff33e817e724e75179e9975c1b02/src/datasets/packaged_modules/webdataset/webdataset.py#L76-L80) an error is raised.
```
The TAR archives of the dataset should be in WebDataset format, but the files in the archive don't share the same prefix or the same types.
```
The purpose of this check is unclear because PyArrow supports different keys.
Removing the check allows the dataset to be loaded and there's no issue when iterating through the dataset.
```
>>> from datasets import load_dataset
>>> path = "shards/*.tar"
>>> dataset = load_dataset("webdataset", data_files={"train": path}, split="train", streaming=True)
Resolving data files: 100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 152/152 [00:00<00:00, 56458.93it/s]
>>> dataset
IterableDataset({
features: ['__key__', '__url__', '1.jpg', '2.jpg', '3.jpg', '4.jpg', 'json'],
n_shards: 152
})
```
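As the maintainers note in the comments, Arrow requires every sample to share the same columns, so one workaround is to declare all possible columns up front in the dataset's `dataset_info` YAML, letting missing values default to `None`. A hypothetical README header for this dataset might look like the following (column names taken from the feature listing above; the exact dtypes are illustrative):

```yaml
dataset_info:
  features:
    - name: __key__
      dtype: string
    - name: 1.jpg
      dtype: image
    - name: 2.jpg
      dtype: image
    - name: 3.jpg
      dtype: image
    - name: 4.jpg
      dtype: image
    - name: json
      dtype: string
```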
### Steps to reproduce the bug
```python
from datasets import load_dataset
load_dataset("bigdata-pw/fashion-150k")
```
### Expected behavior
Dataset loads without error
### Environment info
- `datasets` version: 2.20.0
- Platform: Linux-5.14.0-467.el9.x86_64-x86_64-with-glibc2.34
- Python version: 3.9.19
- `huggingface_hub` version: 0.23.4
- PyArrow version: 17.0.0
- Pandas version: 2.2.2
- `fsspec` version: 2024.5.0 | null | https://api.github.com/repos/huggingface/datasets/issues/7055/timeline | null | completed | hlky | 106,811,348 | U_kgDOBl3P1A | https://avatars.githubusercontent.com/u/106811348?v=4 | https://api.github.com/users/hlky | https://github.com/hlky | https://api.github.com/users/hlky/followers | https://api.github.com/users/hlky/following{/other_user} | https://api.github.com/users/hlky/gists{/gist_id} | https://api.github.com/users/hlky/starred{/owner}{/repo} | https://api.github.com/users/hlky/subscriptions | https://api.github.com/users/hlky/orgs | https://api.github.com/users/hlky/repos | https://api.github.com/users/hlky/events{/privacy} | https://api.github.com/users/hlky/received_events | User | public | false | 0 | 0 | 0 | 0 | 0 | 0 | 0 | https://api.github.com/repos/huggingface/datasets/issues/7055/reactions | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | null | null | null | null | hlky | 106,811,348 | U_kgDOBl3P1A | https://avatars.githubusercontent.com/u/106811348?v=4 | https://api.github.com/users/hlky | https://github.com/hlky | https://api.github.com/users/hlky/followers | https://api.github.com/users/hlky/following{/other_user} | https://api.github.com/users/hlky/gists{/gist_id} | https://api.github.com/users/hlky/starred{/owner}{/repo} | https://api.github.com/users/hlky/subscriptions | https://api.github.com/users/hlky/orgs | https://api.github.com/users/hlky/repos | https://api.github.com/users/hlky/events{/privacy} | https://api.github.com/users/hlky/received_events | User | public | false | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | false | [
"Since `datasets` is built on Arrow to store the data, it requires each sample to have the same columns.\r\n\r\nThis can be fixed by specifying in advance the names of all the possible columns in the `dataset_info` in YAML, and missing values will be `None`",
"Thanks. This currently doesn't work for WebDatase... | ||
https://api.github.com/repos/huggingface/datasets/issues/7053 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7053/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7053/comments | https://api.github.com/repos/huggingface/datasets/issues/7053/events | https://github.com/huggingface/datasets/issues/7053 | 2,416,423,791 | I_kwDODunzps6QB7Nv | 7,053 | Datasets.datafiles resolve_pattern `TypeError: can only concatenate tuple (not "str") to tuple` | [] | closed | false | null | [
{
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_... | null | 2 | 1,721 | 1,721 | 1,721 | NONE | null | null | ### Describe the bug
in data_files.py, line 332,
`fs, _, _ = get_fs_token_paths(pattern, storage_options=storage_options)`
If we run the code on AWS, `fs.protocol` will be a tuple like `('file', 'local')`.
So, `isinstance(fs.protocol, str) == False` and
`protocol_prefix = fs.protocol + "://" if fs.protocol != "file" else ""` will raise
`TypeError: can only concatenate tuple (not "str") to tuple`.
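A tuple-safe version of that prefix computation might look like the following sketch (`protocol_prefix` is a hypothetical helper, not the library's actual fix; it assumes the first entry of an fsspec protocol tuple is the canonical name):

```python
def protocol_prefix(fs_protocol):
    # fsspec filesystems may expose .protocol as a str ("s3") or as a tuple
    # of aliases such as ('file', 'local'); normalize to a single string first.
    protocol = fs_protocol if isinstance(fs_protocol, str) else fs_protocol[0]
    return protocol + "://" if protocol != "file" else ""

print(protocol_prefix(("file", "local")))  # -> "" (local paths need no prefix)
print(protocol_prefix("s3"))               # -> "s3://"
```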
### Steps to reproduce the bug
Steps to reproduce:
1. Run on a cloud server like AWS,
2. `import datasets.data_files as datafile`
3. datafile.resolve_pattern('path/to/dataset', '.')
4. `TypeError: can only concatenate tuple (not "str") to tuple`
### Expected behavior
Should return path of the dataset, with fs.protocol at the beginning
### Environment info
- `datasets` version: 2.14.0
- Platform: Linux-3.10.0-1160.119.1.el7.x86_64-x86_64-with-glibc2.17
- Python version: 3.8.19
- Huggingface_hub version: 0.23.5
- PyArrow version: 16.1.0
- Pandas version: 1.1.5 | null | https://api.github.com/repos/huggingface/datasets/issues/7053/timeline | null | completed | MatthewYZhang | 48,289,218 | MDQ6VXNlcjQ4Mjg5MjE4 | https://avatars.githubusercontent.com/u/48289218?v=4 | https://api.github.com/users/MatthewYZhang | https://github.com/MatthewYZhang | https://api.github.com/users/MatthewYZhang/followers | https://api.github.com/users/MatthewYZhang/following{/other_user} | https://api.github.com/users/MatthewYZhang/gists{/gist_id} | https://api.github.com/users/MatthewYZhang/starred{/owner}{/repo} | https://api.github.com/users/MatthewYZhang/subscriptions | https://api.github.com/users/MatthewYZhang/orgs | https://api.github.com/users/MatthewYZhang/repos | https://api.github.com/users/MatthewYZhang/events{/privacy} | https://api.github.com/users/MatthewYZhang/received_events | User | public | false | 0 | 0 | 0 | 0 | 0 | 0 | 0 | https://api.github.com/repos/huggingface/datasets/issues/7053/reactions | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | null | null | null | null | albertvillanova | 8,515,462 | MDQ6VXNlcjg1MTU0NjI= | https://avatars.githubusercontent.com/u/8515462?v=4 | https://api.github.com/users/albertvillanova | https://github.com/albertvillanova | https://api.github.com/users/albertvillanova/followers | https://api.github.com/users/albertvillanova/following{/other_user} | https://api.github.com/users/albertvillanova/gists{/gist_id} | https://api.github.com/users/albertvillanova/starred{/owner}{/repo} | https://api.github.com/users/albertvillanova/subscriptions | https://api.github.com/users/albertvillanova/orgs | https://api.github.com/users/albertvillanova/repos | https://api.github.com/users/albertvillanova/events{/privacy} | https://api.github.com/users/albertvillanova/received_events | User | public | false | albertvillanova | 8,515,462 | MDQ6VXNlcjg1MTU0NjI= | https://avatars.githubusercontent.com/u/8515462?v=4 | https://api.github.com/users/albertvillanova | 
https://github.com/albertvillanova | https://api.github.com/users/albertvillanova/followers | https://api.github.com/users/albertvillanova/following{/other_user} | https://api.github.com/users/albertvillanova/gists{/gist_id} | https://api.github.com/users/albertvillanova/starred{/owner}{/repo} | https://api.github.com/users/albertvillanova/subscriptions | https://api.github.com/users/albertvillanova/orgs | https://api.github.com/users/albertvillanova/repos | https://api.github.com/users/albertvillanova/events{/privacy} | https://api.github.com/users/albertvillanova/received_events | User | public | false | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | false | [
"Hi,\r\n\r\nThis issue was fixed in `datasets` 2.15.0:\r\n- #6105\r\n\r\nYou will need to update your `datasets`:\r\n```\r\npip install -U datasets\r\n```",
"Duplicate of:\r\n- #6100"
] | |||
https://api.github.com/repos/huggingface/datasets/issues/7051 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7051/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7051/comments | https://api.github.com/repos/huggingface/datasets/issues/7051/events | https://github.com/huggingface/datasets/issues/7051 | 2,409,353,929 | I_kwDODunzps6Pm9LJ | 7,051 | How to set_epoch with interleave_datasets? | [] | closed | false | null | [] | null | 7 | 1,721 | 1,722 | 1,722 | NONE | null | null | Let's say I have dataset A which has 100k examples, and dataset B which has 100m examples.
I want to train on an interleaved dataset of A+B, with stopping_strategy='all_exhausted' so dataset B doesn't repeat any examples. But every time A is exhausted I want it to be reshuffled (e.g. by calling set_epoch).
Of course I want to interleave them as IterableDatasets in streaming mode, so B doesn't have to get tokenized completely at the start.
How could I achieve this? I was thinking something like, if I wrap dataset A in some new IterableDataset with from_generator() and manually call set_epoch before interleaving it? But I'm not sure how to keep the number of shards in that dataset...
Something like
```
dataset_a = load_dataset(...)
dataset_b = load_dataset(...)
def epoch_shuffled_dataset(ds):
# How to make this maintain the number of shards in ds??
for epoch in itertools.count():
ds.set_epoch(epoch)
yield from iter(ds)
shuffled_dataset_a = IterableDataset.from_generator(epoch_shuffled_dataset, gen_kwargs={'ds': dataset_a})
interleaved = interleave_datasets([shuffled_dataset_a, dataset_b], probs, stopping_strategy='all_exhausted')
``` | null | https://api.github.com/repos/huggingface/datasets/issues/7051/timeline | null | completed | jonathanasdf | 511,073 | MDQ6VXNlcjUxMTA3Mw== | https://avatars.githubusercontent.com/u/511073?v=4 | https://api.github.com/users/jonathanasdf | https://github.com/jonathanasdf | https://api.github.com/users/jonathanasdf/followers | https://api.github.com/users/jonathanasdf/following{/other_user} | https://api.github.com/users/jonathanasdf/gists{/gist_id} | https://api.github.com/users/jonathanasdf/starred{/owner}{/repo} | https://api.github.com/users/jonathanasdf/subscriptions | https://api.github.com/users/jonathanasdf/orgs | https://api.github.com/users/jonathanasdf/repos | https://api.github.com/users/jonathanasdf/events{/privacy} | https://api.github.com/users/jonathanasdf/received_events | User | public | false | 0 | 0 | 0 | 0 | 0 | 0 | 0 | https://api.github.com/repos/huggingface/datasets/issues/7051/reactions | 2 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 2 | null | null | null | null | null | null | jonathanasdf | 511,073 | MDQ6VXNlcjUxMTA3Mw== | https://avatars.githubusercontent.com/u/511073?v=4 | https://api.github.com/users/jonathanasdf | https://github.com/jonathanasdf | https://api.github.com/users/jonathanasdf/followers | https://api.github.com/users/jonathanasdf/following{/other_user} | https://api.github.com/users/jonathanasdf/gists{/gist_id} | https://api.github.com/users/jonathanasdf/starred{/owner}{/repo} | https://api.github.com/users/jonathanasdf/subscriptions | https://api.github.com/users/jonathanasdf/orgs | https://api.github.com/users/jonathanasdf/repos | https://api.github.com/users/jonathanasdf/events{/privacy} | https://api.github.com/users/jonathanasdf/received_events | User | public | false | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null 
| null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | false | [
"This is not possible right now afaik :/\r\n\r\nMaybe we could have something like this ? wdyt ?\r\n\r\n```python\r\nds = interleave_datasets(\r\n [shuffled_dataset_a, dataset_b],\r\n probabilities=probabilities,\r\n stopping_strategy='all_exhausted',\r\n reshuffle_each_iteration=True,\r\n)",
"That wo... | ||
https://api.github.com/repos/huggingface/datasets/issues/7049 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7049/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7049/comments | https://api.github.com/repos/huggingface/datasets/issues/7049/events | https://github.com/huggingface/datasets/issues/7049 | 2,408,514,366 | I_kwDODunzps6PjwM- | 7,049 | Save nparray as list | [] | closed | false | null | [] | null | 5 | 1,721 | 1,721 | 1,721 | NONE | null | null | ### Describe the bug
When I use the `map` function to convert images into features, `datasets` saves the ndarray as a list. Some people use the `set_format` function to convert the column back, but doesn't this lose precision?
### Steps to reproduce the bug
the map function
```python
def convert_image_to_features(inst, processor, image_dir):
image_file = inst["image_url"]
file = image_file.split("/")[-1]
image_path = os.path.join(image_dir, file)
image = Image.open(image_path)
image = image.convert("RGBA")
inst["pixel_values"] = processor(images=image, return_tensors="np")["pixel_values"]
return inst
```
main function
```python
map_fun = partial(
convert_image_to_features, processor=processor, image_dir=image_dir
)
ds = ds.map(map_fun, batched=False, num_proc=20)
print(type(ds[0]["pixel_values"]))
```
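On the precision question: converting float64 values into a Python list keeps the exact same floats, bit for bit. A stdlib-only sketch of that claim (it does not exercise the Arrow round-trip inside `datasets` itself, which likewise stores the column at its original width — that part is stated here as background, not demonstrated):

```python
import struct

x = 0.1234567890123456789  # parsed once into a float64
as_list = [x]              # the "converted to a list" step
# Identical 8-byte representations: no precision was lost in the list step.
assert struct.pack("d", as_list[0]) == struct.pack("d", x)
```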
### Expected behavior
(type < list>)
### Environment info
- `datasets` version: 2.16.1
- Platform: Linux-4.19.91-009.ali4000.alios7.x86_64-x86_64-with-glibc2.35
- Python version: 3.11.5
- `huggingface_hub` version: 0.23.4
- PyArrow version: 14.0.2
- Pandas version: 2.1.4
- `fsspec` version: 2023.10.0 | null | https://api.github.com/repos/huggingface/datasets/issues/7049/timeline | null | completed | Sakurakdx | 48,399,040 | MDQ6VXNlcjQ4Mzk5MDQw | https://avatars.githubusercontent.com/u/48399040?v=4 | https://api.github.com/users/Sakurakdx | https://github.com/Sakurakdx | https://api.github.com/users/Sakurakdx/followers | https://api.github.com/users/Sakurakdx/following{/other_user} | https://api.github.com/users/Sakurakdx/gists{/gist_id} | https://api.github.com/users/Sakurakdx/starred{/owner}{/repo} | https://api.github.com/users/Sakurakdx/subscriptions | https://api.github.com/users/Sakurakdx/orgs | https://api.github.com/users/Sakurakdx/repos | https://api.github.com/users/Sakurakdx/events{/privacy} | https://api.github.com/users/Sakurakdx/received_events | User | public | false | 0 | 0 | 0 | 0 | 0 | 0 | 0 | https://api.github.com/repos/huggingface/datasets/issues/7049/reactions | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | null | null | null | null | Sakurakdx | 48,399,040 | MDQ6VXNlcjQ4Mzk5MDQw | https://avatars.githubusercontent.com/u/48399040?v=4 | https://api.github.com/users/Sakurakdx | https://github.com/Sakurakdx | https://api.github.com/users/Sakurakdx/followers | https://api.github.com/users/Sakurakdx/following{/other_user} | https://api.github.com/users/Sakurakdx/gists{/gist_id} | https://api.github.com/users/Sakurakdx/starred{/owner}{/repo} | https://api.github.com/users/Sakurakdx/subscriptions | https://api.github.com/users/Sakurakdx/orgs | https://api.github.com/users/Sakurakdx/repos | https://api.github.com/users/Sakurakdx/events{/privacy} | https://api.github.com/users/Sakurakdx/received_events | User | public | false | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null 
| null | null | null | null | null | null | null | null | null | null | null | false | [
"In addition, when I use `set_format ` and index the ds, the following error occurs:\r\nthe code\r\n```python\r\nds.set_format(type=\"np\", colums=\"pixel_values\")\r\n```\r\nerror\r\n<img width=\"918\" alt=\"image\" src=\"https://github.com/user-attachments/assets/b28bbff2-20ea-4d28-ab62-b4ed2d944996\">\r\n",
">... | ||
https://api.github.com/repos/huggingface/datasets/issues/7048 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7048/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7048/comments | https://api.github.com/repos/huggingface/datasets/issues/7048/events | https://github.com/huggingface/datasets/issues/7048 | 2,408,487,547 | I_kwDODunzps6Pjpp7 | 7,048 | ImportError: numpy.core.multiarray when using `filter` | [] | closed | false | null | [] | null | 4 | 1,721 | 1,721 | 1,721 | NONE | null | null | ### Describe the bug
I can't apply the filter method on my dataset.
### Steps to reproduce the bug
The following snippet generates a bug:
```python
from datasets import load_dataset
ami = load_dataset('kamilakesbi/ami', 'ihm')
ami['train'].filter(
lambda example: example["file_name"] == 'EN2001a'
)
```
I get the following error:
`ImportError: numpy.core.multiarray failed to import (auto-generated because you didn't call 'numpy.import_array()' after cimporting numpy; use '<void>numpy._import_array' to disable if you are certain you don't need it).`
### Expected behavior
It should work properly!
### Environment info
- `datasets` version: 2.20.0
- Platform: Linux-5.15.0-67-generic-x86_64-with-glibc2.35
- Python version: 3.10.6
- `huggingface_hub` version: 0.23.4
- PyArrow version: 16.1.0
- Pandas version: 2.2.2
- `fsspec` version: 2024.5.0 | null | https://api.github.com/repos/huggingface/datasets/issues/7048/timeline | null | completed | kamilakesbi | 45,195,979 | MDQ6VXNlcjQ1MTk1OTc5 | https://avatars.githubusercontent.com/u/45195979?v=4 | https://api.github.com/users/kamilakesbi | https://github.com/kamilakesbi | https://api.github.com/users/kamilakesbi/followers | https://api.github.com/users/kamilakesbi/following{/other_user} | https://api.github.com/users/kamilakesbi/gists{/gist_id} | https://api.github.com/users/kamilakesbi/starred{/owner}{/repo} | https://api.github.com/users/kamilakesbi/subscriptions | https://api.github.com/users/kamilakesbi/orgs | https://api.github.com/users/kamilakesbi/repos | https://api.github.com/users/kamilakesbi/events{/privacy} | https://api.github.com/users/kamilakesbi/received_events | User | public | false | 0 | 0 | 0 | 0 | 0 | 0 | 0 | https://api.github.com/repos/huggingface/datasets/issues/7048/reactions | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | null | null | null | null | kamilakesbi | 45,195,979 | MDQ6VXNlcjQ1MTk1OTc5 | https://avatars.githubusercontent.com/u/45195979?v=4 | https://api.github.com/users/kamilakesbi | https://github.com/kamilakesbi | https://api.github.com/users/kamilakesbi/followers | https://api.github.com/users/kamilakesbi/following{/other_user} | https://api.github.com/users/kamilakesbi/gists{/gist_id} | https://api.github.com/users/kamilakesbi/starred{/owner}{/repo} | https://api.github.com/users/kamilakesbi/subscriptions | https://api.github.com/users/kamilakesbi/orgs | https://api.github.com/users/kamilakesbi/repos | https://api.github.com/users/kamilakesbi/events{/privacy} | https://api.github.com/users/kamilakesbi/received_events | User | public | false | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 
null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | false | [
"Could you please check your `numpy` version?",
"I got this issue while using numpy version 2.0. \r\n\r\nI solved it by switching back to numpy 1.26.0 :) ",
"We recently added support for numpy 2.0, but it is not released yet.",
"Ok I see, thanks! I think we can close this issue for now as switching back to v... | ||
https://api.github.com/repos/huggingface/datasets/issues/7047 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7047/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7047/comments | https://api.github.com/repos/huggingface/datasets/issues/7047/events | https://github.com/huggingface/datasets/issues/7047 | 2,406,495,084 | I_kwDODunzps6PcDNs | 7,047 | Save Dataset as Sharded Parquet | [
{
"id": 1935892871,
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement",
"name": "enhancement",
"color": "a2eeef",
"default": true,
"description": "New feature or request"
}
] | open | false | null | [] | null | 4 | 1,720 | 1,759 | null | NONE | null | null | ### Feature request
`to_parquet` currently saves the dataset as one massive, monolithic parquet file, rather than as several small parquet files. It should shard large datasets automatically.
### Motivation
This default behavior makes me very sad because a program I ran for 6 hours saved its results using `to_parquet`, putting the entire billion+ row dataset into a 171 GB *single shard parquet file* which pyarrow, apache spark, etc. all cannot work with without completely exhausting the memory of my system. I was previously able to work with larger-than-memory parquet files, but not this one. I *assume* the reason why this is happening is because it is a single shard. Making sharding the default behavior puts datasets in parity with other frameworks, such as spark, which automatically shard when a large dataset is saved as parquet.
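Sharding on write can be sketched with the standard library alone: split the row stream into fixed-size chunks and route each chunk to its own file (shard naming and size here are illustrative, not `datasets` API):

```python
import itertools

def iter_shards(rows, shard_size):
    """Yield (shard_index, chunk) pairs; each chunk would become one parquet file."""
    it = iter(rows)
    for i in itertools.count():
        chunk = list(itertools.islice(it, shard_size))
        if not chunk:
            return
        yield i, chunk

# Each chunk could be written as e.g. f"data-{i:05d}-of-NNNNN.parquet".
print([len(chunk) for _, chunk in iter_shards(range(10), 4)])  # -> [4, 4, 2]
```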
### Your contribution
I could change the logic here https://github.com/huggingface/datasets/blob/bf6f41e94d9b2f1c620cf937a2e85e5754a8b960/src/datasets/io/parquet.py#L109-L158
to use `pyarrow.dataset.write_dataset`, which seems to support sharding, or periodically open new files. We would only shard if the user passed in a path rather than file handle. | null | https://api.github.com/repos/huggingface/datasets/issues/7047/timeline | null | null | tom-p-reichel | 43,631,024 | MDQ6VXNlcjQzNjMxMDI0 | https://avatars.githubusercontent.com/u/43631024?v=4 | https://api.github.com/users/tom-p-reichel | https://github.com/tom-p-reichel | https://api.github.com/users/tom-p-reichel/followers | https://api.github.com/users/tom-p-reichel/following{/other_user} | https://api.github.com/users/tom-p-reichel/gists{/gist_id} | https://api.github.com/users/tom-p-reichel/starred{/owner}{/repo} | https://api.github.com/users/tom-p-reichel/subscriptions | https://api.github.com/users/tom-p-reichel/orgs | https://api.github.com/users/tom-p-reichel/repos | https://api.github.com/users/tom-p-reichel/events{/privacy} | https://api.github.com/users/tom-p-reichel/received_events | User | public | false | 0 | 0 | 0 | 0 | 0 | 0 | 0 | https://api.github.com/repos/huggingface/datasets/issues/7047/reactions | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | false | [
"To anyone else who finds themselves in this predicament, it's possible to read the parquet file in the same way that datasets writes it, and then manually break it into pieces. Although, you need a couple of magic options (`thrift_*`) to deal with the huge metadata, otherwise pyarrow immediately crashes.\r\n```pyt... | |
https://api.github.com/repos/huggingface/datasets/issues/7041 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7041/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7041/comments | https://api.github.com/repos/huggingface/datasets/issues/7041/events | https://github.com/huggingface/datasets/issues/7041 | 2,404,576,038 | I_kwDODunzps6PUusm | 7,041 | `sort` after `filter` unreasonably slow | [] | closed | false | null | [] | null | 2 | 1,720 | 1,745 | 1,745 | NONE | null | null | ### Describe the bug
as the title says...
### Steps to reproduce the bug
`sort` on its own runs at normal speed.
```python
from datasets import Dataset
import random
nums = [{"k":random.choice(range(0,1000))} for _ in range(100000)]
ds = Dataset.from_list(nums)
print("start sort")
ds = ds.sort("k")
print("finish sort")
```
but `sort` after `filter` is extremely slow.
```python
from datasets import Dataset
import random
nums = [{"k":random.choice(range(0,1000))} for _ in range(100000)]
ds = Dataset.from_list(nums)
ds = ds.filter(lambda x: x > 100, input_columns="k")
print("start sort")
ds = ds.sort("k")
print("finish sort")
```
### Expected behavior
Is this a bug, or is it a misuse of the `sort` function?
### Environment info
- `datasets` version: 2.20.0
- Platform: Linux-3.10.0-1127.19.1.el7.x86_64-x86_64-with-glibc2.17
- Python version: 3.10.13
- `huggingface_hub` version: 0.23.4
- PyArrow version: 16.1.0
- Pandas version: 2.2.2
- `fsspec` version: 2023.10.0 | null | https://api.github.com/repos/huggingface/datasets/issues/7041/timeline | null | completed | Tobin-rgb | 56,711,045 | MDQ6VXNlcjU2NzExMDQ1 | https://avatars.githubusercontent.com/u/56711045?v=4 | https://api.github.com/users/Tobin-rgb | https://github.com/Tobin-rgb | https://api.github.com/users/Tobin-rgb/followers | https://api.github.com/users/Tobin-rgb/following{/other_user} | https://api.github.com/users/Tobin-rgb/gists{/gist_id} | https://api.github.com/users/Tobin-rgb/starred{/owner}{/repo} | https://api.github.com/users/Tobin-rgb/subscriptions | https://api.github.com/users/Tobin-rgb/orgs | https://api.github.com/users/Tobin-rgb/repos | https://api.github.com/users/Tobin-rgb/events{/privacy} | https://api.github.com/users/Tobin-rgb/received_events | User | public | false | 0 | 0 | 0 | 0 | 0 | 0 | 0 | https://api.github.com/repos/huggingface/datasets/issues/7041/reactions | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | null | null | null | null | Tobin-rgb | 56,711,045 | MDQ6VXNlcjU2NzExMDQ1 | https://avatars.githubusercontent.com/u/56711045?v=4 | https://api.github.com/users/Tobin-rgb | https://github.com/Tobin-rgb | https://api.github.com/users/Tobin-rgb/followers | https://api.github.com/users/Tobin-rgb/following{/other_user} | https://api.github.com/users/Tobin-rgb/gists{/gist_id} | https://api.github.com/users/Tobin-rgb/starred{/owner}{/repo} | https://api.github.com/users/Tobin-rgb/subscriptions | https://api.github.com/users/Tobin-rgb/orgs | https://api.github.com/users/Tobin-rgb/repos | https://api.github.com/users/Tobin-rgb/events{/privacy} | https://api.github.com/users/Tobin-rgb/received_events | User | public | false | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null 
| null | null | null | null | null | null | null | null | null | null | null | false | [
"`filter` add an indices mapping on top of the dataset, so `sort` has to gather all the rows that are kept to form a new Arrow table and sort the table. Gathering all the rows can take some time, but is a necessary step. You can try calling `ds = ds.flatten_indices()` before sorting to remove the indices mapping.",... | ||
https://api.github.com/repos/huggingface/datasets/issues/7040 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7040/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7040/comments | https://api.github.com/repos/huggingface/datasets/issues/7040/events | https://github.com/huggingface/datasets/issues/7040 | 2,402,918,335 | I_kwDODunzps6POZ-_ | 7,040 | load `streaming=True` dataset with downloaded cache | [] | open | false | null | [] | null | 2 | 1,720 | 1,720 | null | NONE | null | null | ### Describe the bug
We built a dataset containing several hdf5 files and wrote a script using `h5py` to generate it. The hdf5 files are large, and the processed dataset cache takes even more disk space. So we hope to try a streaming iterable dataset. Unfortunately, `h5py` can't convert a remote URL into a hdf5 file descriptor. So we use `fsspec` as an interface like below:
```python
def _generate_examples(self, filepath, split):
for file in filepath:
with fsspec.open(file, "rb") as fs:
with h5py.File(fs, "r") as fp:
# for event_id in sorted(list(fp.keys())):
event_ids = list(fp.keys())
......
```
### Steps to reproduce the bug
The `fsspec` approach works, but it takes 10+ minutes to print the first 10 examples, which is even longer than the download time. I'm not sure whether it just caches the whole hdf5 file before generating the examples.
### Expected behavior
So does the following make sense so far?
1. download the files
```python
dataset = datasets.load_dataset('path/to/myscripts', split="train", name="event", trust_remote_code=True)
```
2. load the iterable dataset faster (using the raw file cache at path `.cache/huggingface/datasets/downloads`)
```python
dataset = datasets.load_dataset('path/to/myscripts', split="train", name="event", trust_remote_code=True, streaming=True)
```
I ran some tests, but the code above doesn't produce the expected result. I'm not sure if this is supported. I also found issue #6327. It seemed similar to mine, but I couldn't find a solution.
### Environment info
- `datasets` = 2.18.0
- `h5py` = 3.10.0
- `fsspec` = 2023.10.0 | null | https://api.github.com/repos/huggingface/datasets/issues/7040/timeline | null | null | wanghaoyucn | 39,429,965 | MDQ6VXNlcjM5NDI5OTY1 | https://avatars.githubusercontent.com/u/39429965?v=4 | https://api.github.com/users/wanghaoyucn | https://github.com/wanghaoyucn | https://api.github.com/users/wanghaoyucn/followers | https://api.github.com/users/wanghaoyucn/following{/other_user} | https://api.github.com/users/wanghaoyucn/gists{/gist_id} | https://api.github.com/users/wanghaoyucn/starred{/owner}{/repo} | https://api.github.com/users/wanghaoyucn/subscriptions | https://api.github.com/users/wanghaoyucn/orgs | https://api.github.com/users/wanghaoyucn/repos | https://api.github.com/users/wanghaoyucn/events{/privacy} | https://api.github.com/users/wanghaoyucn/received_events | User | public | false | 0 | 0 | 0 | 0 | 0 | 0 | 0 | https://api.github.com/repos/huggingface/datasets/issues/7040/reactions | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | false | [
"When you pass `streaming=True`, the cache is ignored. The remote data URL is used instead and the data is streamed from the remote server.",
"Thanks for your reply! So is there any solution to get my expected behavior besides clone the whole repo ? Or could I adjust my script to load the downloaded arrow files a... | |
https://api.github.com/repos/huggingface/datasets/issues/7037 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7037/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7037/comments | https://api.github.com/repos/huggingface/datasets/issues/7037/events | https://github.com/huggingface/datasets/issues/7037 | 2,400,192,419 | I_kwDODunzps6PEAej | 7,037 | A bug of Dataset.to_json() function | [
{
"id": 1935892857,
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | open | false | null | [
{
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_... | null | 2 | 1,720 | 1,727 | null | NONE | null | null | ### Describe the bug
When using the `Dataset.to_json()` function with `lines=False`, an unexpected error occurs. The stored data should be a single list, but it actually becomes multiple lists, which causes an error when reading the data back.
The reason is that `to_json()` writes to the file in several segments based on the batch size. This is fine when `lines=True`, but incorrect when `lines=False`, because writing in several passes produces multiple top-level lists (whenever `len(dataset) > batch_size`).
### Steps to reproduce the bug
try this code:
```python
from datasets import load_dataset
import json
train_dataset = load_dataset("Anthropic/hh-rlhf", data_dir="harmless-base")["train"]
output_path = "./harmless-base_hftojs.json"
print(len(train_dataset))
train_dataset.to_json(output_path, lines=False, force_ascii=False, indent=2)
with open(output_path, encoding="utf-8") as f:
data = json.loads(f.read())
```
it raises: `json.decoder.JSONDecodeError: Extra data: line 4003 column 1 (char 1373709)`
Extra square brackets have appeared here:
<img width="265" alt="image" src="https://github.com/huggingface/datasets/assets/26499566/81492332-386d-42e8-88d1-b6d4ae3682cc">
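Until this is fixed, one hedged stdlib workaround is to parse the malformed file as a sequence of concatenated JSON arrays with `json.JSONDecoder.raw_decode` (the helper name here is mine, not a `datasets` API):

```python
import json

def read_concatenated_json_arrays(text):
    """Parse text containing several back-to-back JSON arrays (the
    malformed to_json(lines=False) output described above) into one list."""
    decoder = json.JSONDecoder()
    records, idx = [], 0
    while idx < len(text):
        chunk, end = decoder.raw_decode(text, idx)
        records.extend(chunk)
        idx = end
        # raw_decode does not skip leading whitespace, so do it manually
        while idx < len(text) and text[idx].isspace():
            idx += 1
    return records

# Simulated output of to_json(lines=False) written in two batches:
malformed = '[{"id": 0}, {"id": 1}]\n[{"id": 2}]'
print(read_concatenated_json_arrays(malformed))  # [{'id': 0}, {'id': 1}, {'id': 2}]
```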
### Expected behavior
The code runs normally.
### Environment info
datasets=2.20.0 | null | https://api.github.com/repos/huggingface/datasets/issues/7037/timeline | null | null | LinglingGreat | 26,499,566 | MDQ6VXNlcjI2NDk5NTY2 | https://avatars.githubusercontent.com/u/26499566?v=4 | https://api.github.com/users/LinglingGreat | https://github.com/LinglingGreat | https://api.github.com/users/LinglingGreat/followers | https://api.github.com/users/LinglingGreat/following{/other_user} | https://api.github.com/users/LinglingGreat/gists{/gist_id} | https://api.github.com/users/LinglingGreat/starred{/owner}{/repo} | https://api.github.com/users/LinglingGreat/subscriptions | https://api.github.com/users/LinglingGreat/orgs | https://api.github.com/users/LinglingGreat/repos | https://api.github.com/users/LinglingGreat/events{/privacy} | https://api.github.com/users/LinglingGreat/received_events | User | public | false | 0 | 0 | 0 | 0 | 0 | 0 | 0 | https://api.github.com/repos/huggingface/datasets/issues/7037/reactions | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | albertvillanova | 8,515,462 | MDQ6VXNlcjg1MTU0NjI= | https://avatars.githubusercontent.com/u/8515462?v=4 | https://api.github.com/users/albertvillanova | https://github.com/albertvillanova | https://api.github.com/users/albertvillanova/followers | https://api.github.com/users/albertvillanova/following{/other_user} | https://api.github.com/users/albertvillanova/gists{/gist_id} | https://api.github.com/users/albertvillanova/starred{/owner}{/repo} | https://api.github.com/users/albertvillanova/subscriptions | https://api.github.com/users/albertvillanova/orgs | https://api.github.com/users/albertvillanova/repos | https://api.github.com/users/albertvillanova/events{/privacy} | https://api.github.com/users/albertvillanova/received_events | User | public | false | null | null | null | null | null | null | null | null | null 
| null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | false | [
"Thanks for reporting, @LinglingGreat.\r\n\r\nI confirm this is a bug.",
"@albertvillanova I would like to take a shot at this if you aren't working on it currently. Let me know!"
] | ||
https://api.github.com/repos/huggingface/datasets/issues/7035 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7035/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7035/comments | https://api.github.com/repos/huggingface/datasets/issues/7035/events | https://github.com/huggingface/datasets/issues/7035 | 2,400,021,225 | I_kwDODunzps6PDWrp | 7,035 | Docs are not generated when a parameter defaults to a NamedSplit value | [
{
"id": 4296013012,
"node_id": "LA_kwDODunzps8AAAABAA_01A",
"url": "https://api.github.com/repos/huggingface/datasets/labels/maintenance",
"name": "maintenance",
"color": "d4c5f9",
"default": false,
"description": "Maintenance tasks"
}
] | closed | false | null | [
{
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_... | null | 0 | 1,720 | 1,721 | 1,721 | MEMBER | null | null | While generating the docs, we get an error when some parameter defaults to a `NamedSplit` value, like:
```python
def call_function(split=Split.TRAIN):
...
```
The error is: `ValueError: Equality not supported between split train and <class 'inspect._empty'>`
See: https://github.com/huggingface/datasets/actions/runs/9869660902/job/27254359863?pr=7015
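The failure mode can be reproduced without `datasets` at all. The sketch below uses a minimal stand-in class (a simplified assumption, not the real `NamedSplit`) whose `__eq__` raises for foreign types, which is exactly what trips doc-builder's `param.default != inspect._empty` check:

```python
import inspect

class NamedSplit:
    """Minimal stand-in for datasets.NamedSplit (simplified assumption)."""
    def __init__(self, name):
        self._name = name
    def __repr__(self):
        return self._name
    def __eq__(self, other):
        if isinstance(other, NamedSplit):
            return self._name == other._name
        # Raising here is what breaks doc generation:
        raise ValueError(f"Equality not supported between split {self} and {other}")
    def __ne__(self, other):
        return not self.__eq__(other)

def call_function(split=NamedSplit("train")):
    ...

# doc-builder effectively runs `param.default != inspect._empty`:
param = inspect.signature(call_function).parameters["split"]
try:
    has_default = param.default != inspect.Parameter.empty
except ValueError as err:
    has_default = None
    print(err)  # Equality not supported between split train and <class 'inspect._empty'>
```

A conventional fix is for `__eq__` to return `NotImplemented` for unrelated types (or for the comparison to use `is` against the sentinel), so `!=` degrades gracefully instead of raising.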
```
Building the MDX files: 97%|█████████▋| 58/60 [00:00<00:00, 91.94it/s]
Traceback (most recent call last):
File "/home/runner/work/datasets/datasets/.venv/lib/python3.10/site-packages/doc_builder/build_doc.py", line 197, in build_mdx_files
content, new_anchors, source_files, errors = resolve_autodoc(
File "/home/runner/work/datasets/datasets/.venv/lib/python3.10/site-packages/doc_builder/build_doc.py", line 123, in resolve_autodoc
doc = autodoc(
File "/home/runner/work/datasets/datasets/.venv/lib/python3.10/site-packages/doc_builder/autodoc.py", line 499, in autodoc
method_doc, check = document_object(
File "/home/runner/work/datasets/datasets/.venv/lib/python3.10/site-packages/doc_builder/autodoc.py", line 395, in document_object
signature = format_signature(obj)
File "/home/runner/work/datasets/datasets/.venv/lib/python3.10/site-packages/doc_builder/autodoc.py", line 126, in format_signature
if param.default != inspect._empty:
File "/home/runner/work/datasets/datasets/.venv/lib/python3.10/site-packages/datasets/splits.py", line 136, in __ne__
return not self.__eq__(other)
File "/home/runner/work/datasets/datasets/.venv/lib/python3.10/site-packages/datasets/splits.py", line 379, in __eq__
raise ValueError(f"Equality not supported between split {self} and {other}")
ValueError: Equality not supported between split train and <class 'inspect._empty'>
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "/home/runner/work/datasets/datasets/.venv/bin/doc-builder", line 8, in <module>
sys.exit(main())
File "/home/runner/work/datasets/datasets/.venv/lib/python3.10/site-packages/doc_builder/commands/doc_builder_cli.py", line 47, in main
args.func(args)
File "/home/runner/work/datasets/datasets/.venv/lib/python3.10/site-packages/doc_builder/commands/build.py", line 102, in build_command
build_doc(
File "/home/runner/work/datasets/datasets/.venv/lib/python3.10/site-packages/doc_builder/build_doc.py", line 367, in build_doc
anchors_mapping, source_files_mapping = build_mdx_files(
File "/home/runner/work/datasets/datasets/.venv/lib/python3.10/site-packages/doc_builder/build_doc.py", line 230, in build_mdx_files
raise type(e)(f"There was an error when converting {file} to the MDX format.\n" + e.args[0]) from e
ValueError: There was an error when converting ../datasets/docs/source/package_reference/main_classes.mdx to the MDX format.
Equality not supported between split train and <class 'inspect._empty'>
``` | null | https://api.github.com/repos/huggingface/datasets/issues/7035/timeline | null | completed | albertvillanova | 8,515,462 | MDQ6VXNlcjg1MTU0NjI= | https://avatars.githubusercontent.com/u/8515462?v=4 | https://api.github.com/users/albertvillanova | https://github.com/albertvillanova | https://api.github.com/users/albertvillanova/followers | https://api.github.com/users/albertvillanova/following{/other_user} | https://api.github.com/users/albertvillanova/gists{/gist_id} | https://api.github.com/users/albertvillanova/starred{/owner}{/repo} | https://api.github.com/users/albertvillanova/subscriptions | https://api.github.com/users/albertvillanova/orgs | https://api.github.com/users/albertvillanova/repos | https://api.github.com/users/albertvillanova/events{/privacy} | https://api.github.com/users/albertvillanova/received_events | User | public | false | 0 | 0 | 0 | 0 | 0 | 0 | 0 | https://api.github.com/repos/huggingface/datasets/issues/7035/reactions | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | null | null | null | null | albertvillanova | 8,515,462 | MDQ6VXNlcjg1MTU0NjI= | https://avatars.githubusercontent.com/u/8515462?v=4 | https://api.github.com/users/albertvillanova | https://github.com/albertvillanova | https://api.github.com/users/albertvillanova/followers | https://api.github.com/users/albertvillanova/following{/other_user} | https://api.github.com/users/albertvillanova/gists{/gist_id} | https://api.github.com/users/albertvillanova/starred{/owner}{/repo} | https://api.github.com/users/albertvillanova/subscriptions | https://api.github.com/users/albertvillanova/orgs | https://api.github.com/users/albertvillanova/repos | https://api.github.com/users/albertvillanova/events{/privacy} | https://api.github.com/users/albertvillanova/received_events | User | public | false | albertvillanova | 8,515,462 | MDQ6VXNlcjg1MTU0NjI= | https://avatars.githubusercontent.com/u/8515462?v=4 | https://api.github.com/users/albertvillanova | 
https://github.com/albertvillanova | https://api.github.com/users/albertvillanova/followers | https://api.github.com/users/albertvillanova/following{/other_user} | https://api.github.com/users/albertvillanova/gists{/gist_id} | https://api.github.com/users/albertvillanova/starred{/owner}{/repo} | https://api.github.com/users/albertvillanova/subscriptions | https://api.github.com/users/albertvillanova/orgs | https://api.github.com/users/albertvillanova/repos | https://api.github.com/users/albertvillanova/events{/privacy} | https://api.github.com/users/albertvillanova/received_events | User | public | false | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | false | [] | |||
https://api.github.com/repos/huggingface/datasets/issues/7033 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7033/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7033/comments | https://api.github.com/repos/huggingface/datasets/issues/7033/events | https://github.com/huggingface/datasets/issues/7033 | 2,397,419,768 | I_kwDODunzps6O5bj4 | 7,033 | `from_generator` does not allow to specify the split name | [] | closed | false | null | [] | null | 2 | 1,720 | 1,721 | 1,721 | CONTRIBUTOR | null | null | ### Describe the bug
I'm building train, dev, and test using `from_generator`; however, in all three cases, the logger prints `Generating train split:`
It's not possible to change the split name since it seems to be hardcoded: https://github.com/huggingface/datasets/blob/main/src/datasets/packaged_modules/generator/generator.py
### Steps to reproduce the bug
```
In [1]: from datasets import Dataset
In [2]: def gen():
...: yield {"pokemon": "bulbasaur", "type": "grass"}
...:
In [3]: ds = Dataset.from_generator(gen)
Generating train split: 1 examples [00:00, 133.89 examples/s]
```
### Expected behavior
It should be possible to specify any split name
### Environment info
- `datasets` version: 2.19.2
- Platform: macOS-10.16-x86_64-i386-64bit
- Python version: 3.8.5
- `huggingface_hub` version: 0.23.3
- PyArrow version: 15.0.0
- Pandas version: 2.0.3
- `fsspec` version: 2023.10.0 | null | https://api.github.com/repos/huggingface/datasets/issues/7033/timeline | null | completed | pminervini | 227,357 | MDQ6VXNlcjIyNzM1Nw== | https://avatars.githubusercontent.com/u/227357?v=4 | https://api.github.com/users/pminervini | https://github.com/pminervini | https://api.github.com/users/pminervini/followers | https://api.github.com/users/pminervini/following{/other_user} | https://api.github.com/users/pminervini/gists{/gist_id} | https://api.github.com/users/pminervini/starred{/owner}{/repo} | https://api.github.com/users/pminervini/subscriptions | https://api.github.com/users/pminervini/orgs | https://api.github.com/users/pminervini/repos | https://api.github.com/users/pminervini/events{/privacy} | https://api.github.com/users/pminervini/received_events | User | public | false | 0 | 0 | 0 | 0 | 0 | 0 | 0 | https://api.github.com/repos/huggingface/datasets/issues/7033/reactions | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | null | null | null | null | albertvillanova | 8,515,462 | MDQ6VXNlcjg1MTU0NjI= | https://avatars.githubusercontent.com/u/8515462?v=4 | https://api.github.com/users/albertvillanova | https://github.com/albertvillanova | https://api.github.com/users/albertvillanova/followers | https://api.github.com/users/albertvillanova/following{/other_user} | https://api.github.com/users/albertvillanova/gists{/gist_id} | https://api.github.com/users/albertvillanova/starred{/owner}{/repo} | https://api.github.com/users/albertvillanova/subscriptions | https://api.github.com/users/albertvillanova/orgs | https://api.github.com/users/albertvillanova/repos | https://api.github.com/users/albertvillanova/events{/privacy} | https://api.github.com/users/albertvillanova/received_events | User | public | false | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null 
| null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | false | [
"Thanks for reporting, @pminervini.\r\n\r\nI agree we should give the option to define the split name.\r\n\r\nIndeed, there is a PR that addresses precisely this issue:\r\n- #7015\r\n\r\nI am reviewing it.",
"Booom! thank you guys :)"
] | ||
https://api.github.com/repos/huggingface/datasets/issues/7031 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7031/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7031/comments | https://api.github.com/repos/huggingface/datasets/issues/7031/events | https://github.com/huggingface/datasets/issues/7031 | 2,395,401,692 | I_kwDODunzps6Oxu3c | 7,031 | CI quality is broken: use ruff check instead | [] | closed | false | null | [
{
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_... | null | 0 | 1,720 | 1,720 | 1,720 | MEMBER | null | null | CI quality is broken: https://github.com/huggingface/datasets/actions/runs/9838873879/job/27159697027
```
error: `ruff <path>` has been removed. Use `ruff check <path>` instead.
``` | null | https://api.github.com/repos/huggingface/datasets/issues/7031/timeline | null | not_planned | albertvillanova | 8,515,462 | MDQ6VXNlcjg1MTU0NjI= | https://avatars.githubusercontent.com/u/8515462?v=4 | https://api.github.com/users/albertvillanova | https://github.com/albertvillanova | https://api.github.com/users/albertvillanova/followers | https://api.github.com/users/albertvillanova/following{/other_user} | https://api.github.com/users/albertvillanova/gists{/gist_id} | https://api.github.com/users/albertvillanova/starred{/owner}{/repo} | https://api.github.com/users/albertvillanova/subscriptions | https://api.github.com/users/albertvillanova/orgs | https://api.github.com/users/albertvillanova/repos | https://api.github.com/users/albertvillanova/events{/privacy} | https://api.github.com/users/albertvillanova/received_events | User | public | false | 0 | 0 | 0 | 0 | 0 | 0 | 0 | https://api.github.com/repos/huggingface/datasets/issues/7031/reactions | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | null | null | null | null | albertvillanova | 8,515,462 | MDQ6VXNlcjg1MTU0NjI= | https://avatars.githubusercontent.com/u/8515462?v=4 | https://api.github.com/users/albertvillanova | https://github.com/albertvillanova | https://api.github.com/users/albertvillanova/followers | https://api.github.com/users/albertvillanova/following{/other_user} | https://api.github.com/users/albertvillanova/gists{/gist_id} | https://api.github.com/users/albertvillanova/starred{/owner}{/repo} | https://api.github.com/users/albertvillanova/subscriptions | https://api.github.com/users/albertvillanova/orgs | https://api.github.com/users/albertvillanova/repos | https://api.github.com/users/albertvillanova/events{/privacy} | https://api.github.com/users/albertvillanova/received_events | User | public | false | albertvillanova | 8,515,462 | MDQ6VXNlcjg1MTU0NjI= | https://avatars.githubusercontent.com/u/8515462?v=4 | https://api.github.com/users/albertvillanova | 
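A sketch of the corresponding Makefile change, assuming the quality target invoked the bare `ruff` command (the exact targets and variables in the repo may differ):

```makefile
# Before (removed in ruff 0.5.0):
#	ruff $(check_dirs)
# After:
quality:
	ruff check $(check_dirs)
```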
https://github.com/albertvillanova | https://api.github.com/users/albertvillanova/followers | https://api.github.com/users/albertvillanova/following{/other_user} | https://api.github.com/users/albertvillanova/gists{/gist_id} | https://api.github.com/users/albertvillanova/starred{/owner}{/repo} | https://api.github.com/users/albertvillanova/subscriptions | https://api.github.com/users/albertvillanova/orgs | https://api.github.com/users/albertvillanova/repos | https://api.github.com/users/albertvillanova/events{/privacy} | https://api.github.com/users/albertvillanova/received_events | User | public | false | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | false | [] | |||
https://api.github.com/repos/huggingface/datasets/issues/7030 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7030/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7030/comments | https://api.github.com/repos/huggingface/datasets/issues/7030/events | https://github.com/huggingface/datasets/issues/7030 | 2,393,411,631 | I_kwDODunzps6OqJAv | 7,030 | Add option to disable progress bar when reading a dataset ("Loading dataset from disk") | [
{
"id": 1935892871,
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement",
"name": "enhancement",
"color": "a2eeef",
"default": true,
"description": "New feature or request"
}
] | closed | false | null | [] | null | 2 | 1,720 | 1,720 | 1,720 | NONE | null | null | ### Feature request
Add an option in `load_from_disk` to disable the progress bar even if the number of files is larger than 16.
### Motivation
I am reading a lot of datasets, which creates lots of logs.
<img width="1432" alt="image" src="https://github.com/huggingface/datasets/assets/57996478/8d4bbf03-6b89-44b6-937c-932f01b4eb2a">
### Your contribution
Seems like an easy fix to make. I can create a PR if necessary. | null | https://api.github.com/repos/huggingface/datasets/issues/7030/timeline | null | completed | yuvalkirstain | 57,996,478 | MDQ6VXNlcjU3OTk2NDc4 | https://avatars.githubusercontent.com/u/57996478?v=4 | https://api.github.com/users/yuvalkirstain | https://github.com/yuvalkirstain | https://api.github.com/users/yuvalkirstain/followers | https://api.github.com/users/yuvalkirstain/following{/other_user} | https://api.github.com/users/yuvalkirstain/gists{/gist_id} | https://api.github.com/users/yuvalkirstain/starred{/owner}{/repo} | https://api.github.com/users/yuvalkirstain/subscriptions | https://api.github.com/users/yuvalkirstain/orgs | https://api.github.com/users/yuvalkirstain/repos | https://api.github.com/users/yuvalkirstain/events{/privacy} | https://api.github.com/users/yuvalkirstain/received_events | User | public | false | 0 | 0 | 0 | 0 | 0 | 0 | 0 | https://api.github.com/repos/huggingface/datasets/issues/7030/reactions | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | null | null | null | null | yuvalkirstain | 57,996,478 | MDQ6VXNlcjU3OTk2NDc4 | https://avatars.githubusercontent.com/u/57996478?v=4 | https://api.github.com/users/yuvalkirstain | https://github.com/yuvalkirstain | https://api.github.com/users/yuvalkirstain/followers | https://api.github.com/users/yuvalkirstain/following{/other_user} | https://api.github.com/users/yuvalkirstain/gists{/gist_id} | https://api.github.com/users/yuvalkirstain/starred{/owner}{/repo} | https://api.github.com/users/yuvalkirstain/subscriptions | https://api.github.com/users/yuvalkirstain/orgs | https://api.github.com/users/yuvalkirstain/repos | https://api.github.com/users/yuvalkirstain/events{/privacy} | https://api.github.com/users/yuvalkirstain/received_events | User | public | false | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 
null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | false | [
"You can disable progress bars for all of `datasets` with `disable_progress_bars`. [Link](https://huggingface.co/docs/datasets/en/package_reference/utilities#datasets.enable_progress_bars)\r\n\r\nSo you could do something like:\r\n\r\n```python\r\nfrom datasets import load_from_disk, enable_progress_bars, disable_p... | ||
https://api.github.com/repos/huggingface/datasets/issues/7029 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7029/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7029/comments | https://api.github.com/repos/huggingface/datasets/issues/7029/events | https://github.com/huggingface/datasets/issues/7029 | 2,391,366,696 | I_kwDODunzps6OiVwo | 7,029 | load_dataset on AWS lambda throws OSError(30, 'Read-only file system') error | [] | open | false | null | [] | null | 1 | 1,720 | 1,721 | null | NONE | null | null | ### Describe the bug
I'm using AWS Lambda to run a Python application. I run the `load_dataset` function with `cache_dir="/tmp"` and it still throws the `OSError(30, 'Read-only file system')` error. I even updated all the HF env variables to point to the /tmp dir, but the issue still persists. I can confirm that I can write to the /tmp directory.
### Steps to reproduce the bug
```python
d = load_dataset(
path=hugging_face_link,
split=split,
token=token,
cache_dir="/tmp/hugging_face_cache",
)
```
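One hedged workaround sketch, in case some code path ignores `cache_dir`: point the Hugging Face cache environment variables at /tmp before the first `datasets` import, since some cache paths are resolved at import time (the exact /tmp paths below are assumptions to adapt):

```python
import os

# Must run before `datasets` (or anything that imports it) is loaded,
# because cache locations are resolved at import time.
os.environ["HF_HOME"] = "/tmp/hf_home"
os.environ["HF_DATASETS_CACHE"] = "/tmp/hf_home/datasets"

# Only import after the environment is configured:
# from datasets import load_dataset
# d = load_dataset(path=hugging_face_link, split=split, token=token)
print(os.environ["HF_DATASETS_CACHE"])  # /tmp/hf_home/datasets
```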
### Expected behavior
Everything written to the file system by the `load_dataset` function should go under the /tmp directory.
### Environment info
datasets version: 2.16.1
Platform: Linux-5.10.216-225.855.amzn2.x86_64-x86_64-with-glibc2.26
Python version: 3.11.9
huggingface_hub version: 0.19.4
PyArrow version: 16.1.0
Pandas version: 2.2.2
fsspec version: 2023.10.0 | null | https://api.github.com/repos/huggingface/datasets/issues/7029/timeline | null | null | sugam-nexusflow | 171,606,538 | U_kgDOCjqCCg | https://avatars.githubusercontent.com/u/171606538?v=4 | https://api.github.com/users/sugam-nexusflow | https://github.com/sugam-nexusflow | https://api.github.com/users/sugam-nexusflow/followers | https://api.github.com/users/sugam-nexusflow/following{/other_user} | https://api.github.com/users/sugam-nexusflow/gists{/gist_id} | https://api.github.com/users/sugam-nexusflow/starred{/owner}{/repo} | https://api.github.com/users/sugam-nexusflow/subscriptions | https://api.github.com/users/sugam-nexusflow/orgs | https://api.github.com/users/sugam-nexusflow/repos | https://api.github.com/users/sugam-nexusflow/events{/privacy} | https://api.github.com/users/sugam-nexusflow/received_events | User | public | false | 0 | 0 | 0 | 0 | 0 | 0 | 0 | https://api.github.com/repos/huggingface/datasets/issues/7029/reactions | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | false | [
"hi ! can you share the full stack trace ? this should help locate what files is not written in the cache_dir"
] | |
https://api.github.com/repos/huggingface/datasets/issues/7024 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7024/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7024/comments | https://api.github.com/repos/huggingface/datasets/issues/7024/events | https://github.com/huggingface/datasets/issues/7024 | 2,390,141,626 | I_kwDODunzps6Odqq6 | 7,024 | Streaming dataset not returning data | [] | open | false | null | [] | null | 0 | 1,720 | 1,720 | null | NONE | null | null | ### Describe the bug
I'm deciding to post here because I'm still not sure what the issue is, or if I am using IterableDatasets wrongly.
I'm following the guide on here https://huggingface.co/learn/cookbook/en/fine_tuning_code_llm_on_single_gpu pretty much to a tee and have verified that it works when I'm fine-tuning on the provided dataset.
However, I'm doing some data preprocessing steps (filtering out entries), and when I try to swap out the dataset for mine, it fails to train. I eventually fixed this by simply setting `streaming=False` in `load_dataset`.
Could this be some sort of network / firewall issue I'm facing?
### Steps to reproduce the bug
I made a post with greater description about how I reproduced this problem before I found my workaround: https://discuss.huggingface.co/t/problem-with-custom-iterator-of-streaming-dataset-not-returning-anything/94551
Here is the problematic dataset snippet, which works when `streaming=False` (and with the `buffer_size` keyword removed from `shuffle`):
```python
commitpackft = load_dataset(
"chargoddard/commitpack-ft-instruct", split="train", streaming=True
).filter(lambda example: example["language"] == "Python")
def form_template(example):
"""Forms a template for each example following the alpaca format for CommitPack"""
example["content"] = (
"### Human: " + example["instruction"] + " " + example["input"] + " ### Assistant: " + example["output"]
)
return example
dataset = commitpackft.map(
form_template,
remove_columns=["id", "language", "license", "instruction", "input", "output"],
).shuffle(
seed=42, buffer_size=10000
) # remove everything since its all inside "content" now
validation_data = dataset.take(4000)
train_data = dataset.skip(4000)
```
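As a mental model for `take`/`skip` on a stream: each call re-iterates the source from the beginning rather than sharing a cursor, so the train/validation split is only disjoint if the stream (including any seeded shuffle) replays in the same order on every pass. A stdlib sketch of that behavior (the `take`/`skip` helpers are simplified hypotheticals, not the `datasets` implementation):

```python
from itertools import islice

def take(make_stream, n):
    # Re-opens the stream on each call, like an IterableDataset does
    # (simplified model).
    return list(islice(make_stream(), n))

def skip(make_stream, n):
    # Also re-opens the stream; disjointness from take() relies on the
    # stream replaying identically (e.g. a shuffle with a fixed seed).
    return list(islice(make_stream(), n, None))

source = lambda: iter(range(10))
print(take(source, 4))  # [0, 1, 2, 3]
print(skip(source, 4))  # [4, 5, 6, 7, 8, 9]
```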
The annoying part about this is that it only fails during training and I don't know when it will fail, except that it always fails during evaluation.
### Expected behavior
The expected behavior is that I should be able to get something from the iterator when called instead of getting nothing / stuck in a loop somewhere.
### Environment info
- `datasets` version: 2.20.0
- Platform: Linux-5.4.0-121-generic-x86_64-with-glibc2.31
- Python version: 3.11.7
- `huggingface_hub` version: 0.23.4
- PyArrow version: 16.1.0
- Pandas version: 2.2.2
- `fsspec` version: 2024.5.0
| null | https://api.github.com/repos/huggingface/datasets/issues/7024/timeline | null | null | johnwee1 | 91,670,254 | U_kgDOBXbG7g | https://avatars.githubusercontent.com/u/91670254?v=4 | https://api.github.com/users/johnwee1 | https://github.com/johnwee1 | https://api.github.com/users/johnwee1/followers | https://api.github.com/users/johnwee1/following{/other_user} | https://api.github.com/users/johnwee1/gists{/gist_id} | https://api.github.com/users/johnwee1/starred{/owner}{/repo} | https://api.github.com/users/johnwee1/subscriptions | https://api.github.com/users/johnwee1/orgs | https://api.github.com/users/johnwee1/repos | https://api.github.com/users/johnwee1/events{/privacy} | https://api.github.com/users/johnwee1/received_events | User | public | false | 0 | 0 | 0 | 0 | 0 | 0 | 0 | https://api.github.com/repos/huggingface/datasets/issues/7024/reactions | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | false | [] | |
https://api.github.com/repos/huggingface/datasets/issues/7022 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7022/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7022/comments | https://api.github.com/repos/huggingface/datasets/issues/7022/events | https://github.com/huggingface/datasets/issues/7022 | 2,388,064,650 | I_kwDODunzps6OVvmK | 7,022 | There is dead code after we require pyarrow >= 15.0.0 | [
{
"id": 4296013012,
"node_id": "LA_kwDODunzps8AAAABAA_01A",
"url": "https://api.github.com/repos/huggingface/datasets/labels/maintenance",
"name": "maintenance",
"color": "d4c5f9",
"default": false,
"description": "Maintenance tasks"
}
] | closed | false | null | [
{
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_... | null | 0 | 1,719 | 1,719 | 1,719 | MEMBER | null | null | There are code lines specific for pyarrow versions < 15.0.0.
However, we require pyarrow >= 15.0.0 since the merge of PR:
- #6892
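A hypothetical illustration (not the actual `datasets` code) of the kind of guard that becomes dead code once the minimum pyarrow requirement is raised:

```python
# Hypothetical sketch: a runtime version guard whose old branch is
# unreachable once the minimum pyarrow requirement is 15.0.0.
INSTALLED = (16, 1, 0)   # any installed version satisfying the new requirement
REQUIRED = (15, 0, 0)

if INSTALLED >= REQUIRED:
    path = "current code path"
else:
    path = "pre-15.0.0 compatibility path"  # dead once >= 15.0.0 is enforced

print(path)
```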
Those code lines are now dead code and should be removed. | null | https://api.github.com/repos/huggingface/datasets/issues/7022/timeline | null | completed | albertvillanova | 8,515,462 | MDQ6VXNlcjg1MTU0NjI= | https://avatars.githubusercontent.com/u/8515462?v=4 | https://api.github.com/users/albertvillanova | https://github.com/albertvillanova | https://api.github.com/users/albertvillanova/followers | https://api.github.com/users/albertvillanova/following{/other_user} | https://api.github.com/users/albertvillanova/gists{/gist_id} | https://api.github.com/users/albertvillanova/starred{/owner}{/repo} | https://api.github.com/users/albertvillanova/subscriptions | https://api.github.com/users/albertvillanova/orgs | https://api.github.com/users/albertvillanova/repos | https://api.github.com/users/albertvillanova/events{/privacy} | https://api.github.com/users/albertvillanova/received_events | User | public | false | 0 | 0 | 0 | 0 | 0 | 0 | 0 | https://api.github.com/repos/huggingface/datasets/issues/7022/reactions | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | null | null | null | null | albertvillanova | 8,515,462 | MDQ6VXNlcjg1MTU0NjI= | https://avatars.githubusercontent.com/u/8515462?v=4 | https://api.github.com/users/albertvillanova | https://github.com/albertvillanova | https://api.github.com/users/albertvillanova/followers | https://api.github.com/users/albertvillanova/following{/other_user} | https://api.github.com/users/albertvillanova/gists{/gist_id} | https://api.github.com/users/albertvillanova/starred{/owner}{/repo} | https://api.github.com/users/albertvillanova/subscriptions | https://api.github.com/users/albertvillanova/orgs | https://api.github.com/users/albertvillanova/repos | https://api.github.com/users/albertvillanova/events{/privacy} | https://api.github.com/users/albertvillanova/received_events | User | public | false | albertvillanova | 8,515,462 | MDQ6VXNlcjg1MTU0NjI= | https://avatars.githubusercontent.com/u/8515462?v=4 | 
https://api.github.com/users/albertvillanova | https://github.com/albertvillanova | https://api.github.com/users/albertvillanova/followers | https://api.github.com/users/albertvillanova/following{/other_user} | https://api.github.com/users/albertvillanova/gists{/gist_id} | https://api.github.com/users/albertvillanova/starred{/owner}{/repo} | https://api.github.com/users/albertvillanova/subscriptions | https://api.github.com/users/albertvillanova/orgs | https://api.github.com/users/albertvillanova/repos | https://api.github.com/users/albertvillanova/events{/privacy} | https://api.github.com/users/albertvillanova/received_events | User | public | false | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | false | [] | |||
https://api.github.com/repos/huggingface/datasets/issues/7020 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7020/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7020/comments | https://api.github.com/repos/huggingface/datasets/issues/7020/events | https://github.com/huggingface/datasets/issues/7020 | 2,387,940,990 | I_kwDODunzps6OVRZ- | 7,020 | Casting list array to fixed size list raises error | [
{
"id": 1935892857,
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | null | [
{
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_... | null | 0 | 1,719 | 1,719 | 1,719 | MEMBER | null | null | When trying to cast a list array to fixed size list, an AttributeError is raised:
> AttributeError: 'pyarrow.lib.FixedSizeListType' object has no attribute 'length'
Steps to reproduce the bug:
```python
import pyarrow as pa
from datasets.table import array_cast
arr = pa.array([[0, 1]])
array_cast(arr, pa.list_(pa.int64(), 2))
```
Stack trace:
```
---------------------------------------------------------------------------
AttributeError Traceback (most recent call last)
<ipython-input-12-6cb90a1d8216> in <module>
3
4 arr = pa.array([[0, 1]])
----> 5 array_cast(arr, pa.list_(pa.int64(), 2))
~/huggingface/datasets/src/datasets/table.py in wrapper(array, *args, **kwargs)
1802 return pa.chunked_array([func(chunk, *args, **kwargs) for chunk in array.chunks])
1803 else:
-> 1804 return func(array, *args, **kwargs)
1805
1806 return wrapper
~/huggingface/datasets/src/datasets/table.py in array_cast(array, pa_type, allow_primitive_to_str, allow_decimal_to_str)
1920 else:
1921 array_values = array.values[
-> 1922 array.offset * pa_type.length : (array.offset + len(array)) * pa_type.length
1923 ]
1924 return pa.FixedSizeListArray.from_arrays(_c(array_values, pa_type.value_type), pa_type.list_size)
AttributeError: 'pyarrow.lib.FixedSizeListType' object has no attribute 'length'
``` | null | https://api.github.com/repos/huggingface/datasets/issues/7020/timeline | null | completed | albertvillanova | 8,515,462 | MDQ6VXNlcjg1MTU0NjI= | https://avatars.githubusercontent.com/u/8515462?v=4 | https://api.github.com/users/albertvillanova | https://github.com/albertvillanova | https://api.github.com/users/albertvillanova/followers | https://api.github.com/users/albertvillanova/following{/other_user} | https://api.github.com/users/albertvillanova/gists{/gist_id} | https://api.github.com/users/albertvillanova/starred{/owner}{/repo} | https://api.github.com/users/albertvillanova/subscriptions | https://api.github.com/users/albertvillanova/orgs | https://api.github.com/users/albertvillanova/repos | https://api.github.com/users/albertvillanova/events{/privacy} | https://api.github.com/users/albertvillanova/received_events | User | public | false | 0 | 0 | 0 | 0 | 0 | 0 | 0 | https://api.github.com/repos/huggingface/datasets/issues/7020/reactions | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | null | null | null | null | albertvillanova | 8,515,462 | MDQ6VXNlcjg1MTU0NjI= | https://avatars.githubusercontent.com/u/8515462?v=4 | https://api.github.com/users/albertvillanova | https://github.com/albertvillanova | https://api.github.com/users/albertvillanova/followers | https://api.github.com/users/albertvillanova/following{/other_user} | https://api.github.com/users/albertvillanova/gists{/gist_id} | https://api.github.com/users/albertvillanova/starred{/owner}{/repo} | https://api.github.com/users/albertvillanova/subscriptions | https://api.github.com/users/albertvillanova/orgs | https://api.github.com/users/albertvillanova/repos | https://api.github.com/users/albertvillanova/events{/privacy} | https://api.github.com/users/albertvillanova/received_events | User | public | false | albertvillanova | 8,515,462 | MDQ6VXNlcjg1MTU0NjI= | https://avatars.githubusercontent.com/u/8515462?v=4 | https://api.github.com/users/albertvillanova | 
https://github.com/albertvillanova | https://api.github.com/users/albertvillanova/followers | https://api.github.com/users/albertvillanova/following{/other_user} | https://api.github.com/users/albertvillanova/gists{/gist_id} | https://api.github.com/users/albertvillanova/starred{/owner}{/repo} | https://api.github.com/users/albertvillanova/subscriptions | https://api.github.com/users/albertvillanova/orgs | https://api.github.com/users/albertvillanova/repos | https://api.github.com/users/albertvillanova/events{/privacy} | https://api.github.com/users/albertvillanova/received_events | User | public | false | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | false | [] | |||
https://api.github.com/repos/huggingface/datasets/issues/7018 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7018/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7018/comments | https://api.github.com/repos/huggingface/datasets/issues/7018/events | https://github.com/huggingface/datasets/issues/7018 | 2,383,700,286 | I_kwDODunzps6OFGE- | 7,018 | `load_dataset` fails to load dataset saved by `save_to_disk` | [] | open | false | null | [] | null | 5 | 1,719 | 1,748 | null | NONE | null | null | ### Describe the bug
This code fails to load the dataset it just saved:
```python
from datasets import load_dataset
from transformers import AutoTokenizer
MODEL = "google-bert/bert-base-cased"
tokenizer = AutoTokenizer.from_pretrained(MODEL)
dataset = load_dataset("yelp_review_full")
def tokenize_function(examples):
return tokenizer(examples["text"], padding="max_length", truncation=True)
tokenized_datasets = dataset.map(tokenize_function, batched=True)
tokenized_datasets.save_to_disk("dataset")
tokenized_datasets = load_dataset("dataset/") # raises
```
It raises `ValueError: Couldn't infer the same data file format for all splits. Got {NamedSplit('train'): ('arrow', {}), NamedSplit('test'): ('json', {})}`.
I believe this bug is caused by the [logic that tries to infer dataset format](https://github.com/huggingface/datasets/blob/9af8dd3de7626183a9a9ec8973cebc672d690400/src/datasets/load.py#L556). It counts the most common file extension. However, a small dataset can fit in a single `.arrow` file and have two JSON metadata files, causing the format to be inferred as JSON:
```shell
$ ls -l dataset/test
-rw-r--r-- 1 sliedes sliedes 191498784 Jul 1 13:55 data-00000-of-00001.arrow
-rw-r--r-- 1 sliedes sliedes 1730 Jul 1 13:55 dataset_info.json
-rw-r--r-- 1 sliedes sliedes 249 Jul 1 13:55 state.json
```
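The extension-counting heuristic described above can be sketched as follows (a hypothetical helper, not the actual `datasets` implementation):

```python
from collections import Counter
from pathlib import PurePath

def infer_format(filenames):
    # Count file extensions and pick the most common one -- a sketch of
    # the inference heuristic, to show how metadata files can outvote
    # a single data file.
    extensions = [PurePath(name).suffix.lstrip(".") for name in filenames]
    return Counter(extensions).most_common(1)[0][0]

# A small split: one .arrow data file plus two JSON metadata files.
files = ["data-00000-of-00001.arrow", "dataset_info.json", "state.json"]
print(infer_format(files))  # "json" wins, so the split format is misdetected
```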
### Steps to reproduce the bug
Execute the code above.
### Expected behavior
The dataset is loaded successfully.
### Environment info
- `datasets` version: 2.20.0
- Platform: Linux-6.9.3-arch1-1-x86_64-with-glibc2.39
- Python version: 3.12.4
- `huggingface_hub` version: 0.23.4
- PyArrow version: 16.1.0
- Pandas version: 2.2.2
- `fsspec` version: 2024.5.0
| null | https://api.github.com/repos/huggingface/datasets/issues/7018/timeline | null | null | sliedes | 2,307,997 | MDQ6VXNlcjIzMDc5OTc= | https://avatars.githubusercontent.com/u/2307997?v=4 | https://api.github.com/users/sliedes | https://github.com/sliedes | https://api.github.com/users/sliedes/followers | https://api.github.com/users/sliedes/following{/other_user} | https://api.github.com/users/sliedes/gists{/gist_id} | https://api.github.com/users/sliedes/starred{/owner}{/repo} | https://api.github.com/users/sliedes/subscriptions | https://api.github.com/users/sliedes/orgs | https://api.github.com/users/sliedes/repos | https://api.github.com/users/sliedes/events{/privacy} | https://api.github.com/users/sliedes/received_events | User | public | false | 0 | 0 | 0 | 0 | 0 | 0 | 0 | https://api.github.com/repos/huggingface/datasets/issues/7018/reactions | 4 | 4 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | false | [
"In my case the error was:\r\n```\r\nValueError: You are trying to load a dataset that was saved using `save_to_disk`. Please use `load_from_disk` instead.\r\n```\r\nDid you try `load_from_disk`?",
"More generally, any reason there is no API consistency between save_to_disk and push_to_hub ? \r\n\r\nWould be nice... | |
https://api.github.com/repos/huggingface/datasets/issues/7016 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7016/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7016/comments | https://api.github.com/repos/huggingface/datasets/issues/7016/events | https://github.com/huggingface/datasets/issues/7016 | 2,383,262,608 | I_kwDODunzps6ODbOQ | 7,016 | `drop_duplicates` method | [
{
"id": 1935892865,
"node_id": "MDU6TGFiZWwxOTM1ODkyODY1",
"url": "https://api.github.com/repos/huggingface/datasets/labels/duplicate",
"name": "duplicate",
"color": "cfd3d7",
"default": true,
"description": "This issue or pull request already exists"
},
{
"id": 1935892871,
"... | open | false | null | [] | null | 1 | 1,719 | 1,721 | null | NONE | null | null | ### Feature request
`drop_duplicates` method for Hugging Face datasets (similar in simplicity to the `pandas` one)
### Motivation
Ease of use
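One way such a method could behave, sketched over plain Python rows (a hypothetical helper; a real implementation would need to operate on Arrow tables, e.g. via `Dataset.filter`):

```python
def drop_duplicates(rows, subset=None):
    """Keep the first occurrence of each unique row.

    Sketch only: assumes hashable column values; `subset` optionally
    restricts which columns define uniqueness, as in pandas.
    """
    seen = set()
    kept = []
    for row in rows:
        key = tuple(row[k] for k in (subset or sorted(row)))
        if key not in seen:
            seen.add(key)
            kept.append(row)
    return kept

rows = [
    {"text": "a", "label": 0},
    {"text": "a", "label": 0},  # exact duplicate, dropped
    {"text": "b", "label": 1},
]
print(drop_duplicates(rows))  # two unique rows remain
```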
### Your contribution
I don't think i am good enough to help | null | https://api.github.com/repos/huggingface/datasets/issues/7016/timeline | null | null | MohamedAliRashad | 26,205,298 | MDQ6VXNlcjI2MjA1Mjk4 | https://avatars.githubusercontent.com/u/26205298?v=4 | https://api.github.com/users/MohamedAliRashad | https://github.com/MohamedAliRashad | https://api.github.com/users/MohamedAliRashad/followers | https://api.github.com/users/MohamedAliRashad/following{/other_user} | https://api.github.com/users/MohamedAliRashad/gists{/gist_id} | https://api.github.com/users/MohamedAliRashad/starred{/owner}{/repo} | https://api.github.com/users/MohamedAliRashad/subscriptions | https://api.github.com/users/MohamedAliRashad/orgs | https://api.github.com/users/MohamedAliRashad/repos | https://api.github.com/users/MohamedAliRashad/events{/privacy} | https://api.github.com/users/MohamedAliRashad/received_events | User | public | false | 0 | 0 | 0 | 0 | 0 | 0 | 0 | https://api.github.com/repos/huggingface/datasets/issues/7016/reactions | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | false | [
"There is an open issue #2514 about this which also proposes solutions."
] | |
https://api.github.com/repos/huggingface/datasets/issues/7013 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7013/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7013/comments | https://api.github.com/repos/huggingface/datasets/issues/7013/events | https://github.com/huggingface/datasets/issues/7013 | 2,382,976,738 | I_kwDODunzps6OCVbi | 7,013 | CI is broken for faiss tests on Windows: node down: Not properly terminated | [
{
"id": 4296013012,
"node_id": "LA_kwDODunzps8AAAABAA_01A",
"url": "https://api.github.com/repos/huggingface/datasets/labels/maintenance",
"name": "maintenance",
"color": "d4c5f9",
"default": false,
"description": "Maintenance tasks"
}
] | closed | false | null | [
{
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
    "followers_... | null | 0 | 1,719 | 1,719 | 1,719 | MEMBER | null | null | Faiss tests on Windows make the CI run indefinitely until the maximum execution time (360 minutes) is reached.
See: https://github.com/huggingface/datasets/actions/runs/9712659783
```
test (integration, windows-latest, deps-minimum)
The job running on runner GitHub Actions 60 has exceeded the maximum execution time of 360 minutes.
test (integration, windows-latest, deps-latest)
The job running on runner GitHub Actions 238 has exceeded the maximum execution time of 360 minutes.
```
```
____________________________ tests/test_search.py _____________________________
[gw1] win32 -- Python 3.8.10 C:\hostedtoolcache\windows\Python\3.8.10\x64\python.exe
worker 'gw1' crashed while running 'tests/test_search.py::IndexableDatasetTest::test_add_faiss_index'
____________________________ tests/test_search.py _____________________________
[gw2] win32 -- Python 3.8.10 C:\hostedtoolcache\windows\Python\3.8.10\x64\python.exe
worker 'gw2' crashed while running 'tests/test_search.py::IndexableDatasetTest::test_add_faiss_index'
```
```
tests/test_search.py::IndexableDatasetTest::test_add_faiss_index
[gw0] node down: Not properly terminated
[gw0] FAILED tests/test_search.py::IndexableDatasetTest::test_add_faiss_index
replacing crashed worker gw0
tests/test_search.py::IndexableDatasetTest::test_add_faiss_index
[gw1] node down: Not properly terminated
[gw1] FAILED tests/test_search.py::IndexableDatasetTest::test_add_faiss_index
replacing crashed worker gw1
tests/test_search.py::IndexableDatasetTest::test_add_faiss_index
[gw2] node down: Not properly terminated
[gw2] FAILED tests/test_search.py::IndexableDatasetTest::test_add_faiss_index
replacing crashed worker gw2
``` | null | https://api.github.com/repos/huggingface/datasets/issues/7013/timeline | null | completed | albertvillanova | 8,515,462 | MDQ6VXNlcjg1MTU0NjI= | https://avatars.githubusercontent.com/u/8515462?v=4 | https://api.github.com/users/albertvillanova | https://github.com/albertvillanova | https://api.github.com/users/albertvillanova/followers | https://api.github.com/users/albertvillanova/following{/other_user} | https://api.github.com/users/albertvillanova/gists{/gist_id} | https://api.github.com/users/albertvillanova/starred{/owner}{/repo} | https://api.github.com/users/albertvillanova/subscriptions | https://api.github.com/users/albertvillanova/orgs | https://api.github.com/users/albertvillanova/repos | https://api.github.com/users/albertvillanova/events{/privacy} | https://api.github.com/users/albertvillanova/received_events | User | public | false | 0 | 0 | 0 | 0 | 0 | 0 | 0 | https://api.github.com/repos/huggingface/datasets/issues/7013/reactions | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | null | null | null | null | albertvillanova | 8,515,462 | MDQ6VXNlcjg1MTU0NjI= | https://avatars.githubusercontent.com/u/8515462?v=4 | https://api.github.com/users/albertvillanova | https://github.com/albertvillanova | https://api.github.com/users/albertvillanova/followers | https://api.github.com/users/albertvillanova/following{/other_user} | https://api.github.com/users/albertvillanova/gists{/gist_id} | https://api.github.com/users/albertvillanova/starred{/owner}{/repo} | https://api.github.com/users/albertvillanova/subscriptions | https://api.github.com/users/albertvillanova/orgs | https://api.github.com/users/albertvillanova/repos | https://api.github.com/users/albertvillanova/events{/privacy} | https://api.github.com/users/albertvillanova/received_events | User | public | false | albertvillanova | 8,515,462 | MDQ6VXNlcjg1MTU0NjI= | https://avatars.githubusercontent.com/u/8515462?v=4 | https://api.github.com/users/albertvillanova | 
https://github.com/albertvillanova | https://api.github.com/users/albertvillanova/followers | https://api.github.com/users/albertvillanova/following{/other_user} | https://api.github.com/users/albertvillanova/gists{/gist_id} | https://api.github.com/users/albertvillanova/starred{/owner}{/repo} | https://api.github.com/users/albertvillanova/subscriptions | https://api.github.com/users/albertvillanova/orgs | https://api.github.com/users/albertvillanova/repos | https://api.github.com/users/albertvillanova/events{/privacy} | https://api.github.com/users/albertvillanova/received_events | User | public | false | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | false | [] | |||
https://api.github.com/repos/huggingface/datasets/issues/7010 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7010/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7010/comments | https://api.github.com/repos/huggingface/datasets/issues/7010/events | https://github.com/huggingface/datasets/issues/7010 | 2,379,777,480 | I_kwDODunzps6N2IXI | 7,010 | Re-enable raising error from huggingface-hub FutureWarning in CI | [
{
"id": 4296013012,
"node_id": "LA_kwDODunzps8AAAABAA_01A",
"url": "https://api.github.com/repos/huggingface/datasets/labels/maintenance",
"name": "maintenance",
"color": "d4c5f9",
"default": false,
"description": "Maintenance tasks"
}
] | closed | false | null | [
{
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
    "followers_... | null | 0 | 1,719 | 1,719 | 1,719 | MEMBER | null | null | Re-enable raising an error on huggingface-hub FutureWarning in CI, which was disabled by PR:
- #6876
Note that this can only be done once transformers releases the fix:
- https://github.com/huggingface/transformers/pull/31007 | null | https://api.github.com/repos/huggingface/datasets/issues/7010/timeline | null | completed | albertvillanova | 8,515,462 | MDQ6VXNlcjg1MTU0NjI= | https://avatars.githubusercontent.com/u/8515462?v=4 | https://api.github.com/users/albertvillanova | https://github.com/albertvillanova | https://api.github.com/users/albertvillanova/followers | https://api.github.com/users/albertvillanova/following{/other_user} | https://api.github.com/users/albertvillanova/gists{/gist_id} | https://api.github.com/users/albertvillanova/starred{/owner}{/repo} | https://api.github.com/users/albertvillanova/subscriptions | https://api.github.com/users/albertvillanova/orgs | https://api.github.com/users/albertvillanova/repos | https://api.github.com/users/albertvillanova/events{/privacy} | https://api.github.com/users/albertvillanova/received_events | User | public | false | 0 | 0 | 0 | 0 | 0 | 0 | 0 | https://api.github.com/repos/huggingface/datasets/issues/7010/reactions | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | null | null | null | null | albertvillanova | 8,515,462 | MDQ6VXNlcjg1MTU0NjI= | https://avatars.githubusercontent.com/u/8515462?v=4 | https://api.github.com/users/albertvillanova | https://github.com/albertvillanova | https://api.github.com/users/albertvillanova/followers | https://api.github.com/users/albertvillanova/following{/other_user} | https://api.github.com/users/albertvillanova/gists{/gist_id} | https://api.github.com/users/albertvillanova/starred{/owner}{/repo} | https://api.github.com/users/albertvillanova/subscriptions | https://api.github.com/users/albertvillanova/orgs | https://api.github.com/users/albertvillanova/repos | https://api.github.com/users/albertvillanova/events{/privacy} | https://api.github.com/users/albertvillanova/received_events | User | public | false | albertvillanova | 8,515,462 | MDQ6VXNlcjg1MTU0NjI= | https://avatars.githubusercontent.com/u/8515462?v=4 | 
https://api.github.com/users/albertvillanova | https://github.com/albertvillanova | https://api.github.com/users/albertvillanova/followers | https://api.github.com/users/albertvillanova/following{/other_user} | https://api.github.com/users/albertvillanova/gists{/gist_id} | https://api.github.com/users/albertvillanova/starred{/owner}{/repo} | https://api.github.com/users/albertvillanova/subscriptions | https://api.github.com/users/albertvillanova/orgs | https://api.github.com/users/albertvillanova/repos | https://api.github.com/users/albertvillanova/events{/privacy} | https://api.github.com/users/albertvillanova/received_events | User | public | false | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | false | [] | |||
https://api.github.com/repos/huggingface/datasets/issues/7008 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7008/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7008/comments | https://api.github.com/repos/huggingface/datasets/issues/7008/events | https://github.com/huggingface/datasets/issues/7008 | 2,379,591,141 | I_kwDODunzps6N1a3l | 7,008 | Support ruff 0.5.0 in CI | [
{
"id": 4296013012,
"node_id": "LA_kwDODunzps8AAAABAA_01A",
"url": "https://api.github.com/repos/huggingface/datasets/labels/maintenance",
"name": "maintenance",
"color": "d4c5f9",
"default": false,
"description": "Maintenance tasks"
}
] | closed | false | null | [
{
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_... | null | 0 | 1,719 | 1,719 | 1,719 | MEMBER | null | null | Support ruff 0.5.0 in CI.
Also revert:
- #7007 | null | https://api.github.com/repos/huggingface/datasets/issues/7008/timeline | null | completed | albertvillanova | 8,515,462 | MDQ6VXNlcjg1MTU0NjI= | https://avatars.githubusercontent.com/u/8515462?v=4 | https://api.github.com/users/albertvillanova | https://github.com/albertvillanova | https://api.github.com/users/albertvillanova/followers | https://api.github.com/users/albertvillanova/following{/other_user} | https://api.github.com/users/albertvillanova/gists{/gist_id} | https://api.github.com/users/albertvillanova/starred{/owner}{/repo} | https://api.github.com/users/albertvillanova/subscriptions | https://api.github.com/users/albertvillanova/orgs | https://api.github.com/users/albertvillanova/repos | https://api.github.com/users/albertvillanova/events{/privacy} | https://api.github.com/users/albertvillanova/received_events | User | public | false | 0 | 0 | 0 | 0 | 0 | 0 | 0 | https://api.github.com/repos/huggingface/datasets/issues/7008/reactions | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | null | null | null | null | albertvillanova | 8,515,462 | MDQ6VXNlcjg1MTU0NjI= | https://avatars.githubusercontent.com/u/8515462?v=4 | https://api.github.com/users/albertvillanova | https://github.com/albertvillanova | https://api.github.com/users/albertvillanova/followers | https://api.github.com/users/albertvillanova/following{/other_user} | https://api.github.com/users/albertvillanova/gists{/gist_id} | https://api.github.com/users/albertvillanova/starred{/owner}{/repo} | https://api.github.com/users/albertvillanova/subscriptions | https://api.github.com/users/albertvillanova/orgs | https://api.github.com/users/albertvillanova/repos | https://api.github.com/users/albertvillanova/events{/privacy} | https://api.github.com/users/albertvillanova/received_events | User | public | false | albertvillanova | 8,515,462 | MDQ6VXNlcjg1MTU0NjI= | https://avatars.githubusercontent.com/u/8515462?v=4 | https://api.github.com/users/albertvillanova | 
https://github.com/albertvillanova | https://api.github.com/users/albertvillanova/followers | https://api.github.com/users/albertvillanova/following{/other_user} | https://api.github.com/users/albertvillanova/gists{/gist_id} | https://api.github.com/users/albertvillanova/starred{/owner}{/repo} | https://api.github.com/users/albertvillanova/subscriptions | https://api.github.com/users/albertvillanova/orgs | https://api.github.com/users/albertvillanova/repos | https://api.github.com/users/albertvillanova/events{/privacy} | https://api.github.com/users/albertvillanova/received_events | User | public | false | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | false | [] | |||
https://api.github.com/repos/huggingface/datasets/issues/7006 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7006/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7006/comments | https://api.github.com/repos/huggingface/datasets/issues/7006/events | https://github.com/huggingface/datasets/issues/7006 | 2,379,581,543 | I_kwDODunzps6N1Yhn | 7,006 | CI is broken after ruff-0.5.0: E721 | [
{
"id": 4296013012,
"node_id": "LA_kwDODunzps8AAAABAA_01A",
"url": "https://api.github.com/repos/huggingface/datasets/labels/maintenance",
"name": "maintenance",
"color": "d4c5f9",
"default": false,
"description": "Maintenance tasks"
}
] | closed | false | null | [
{
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_... | null | 0 | 1,719 | 1,719 | 1,719 | MEMBER | null | null | After ruff-0.5.0 release (https://github.com/astral-sh/ruff/releases/tag/0.5.0), our CI is broken due to E721 rule.
See: https://github.com/huggingface/datasets/actions/runs/9707641618/job/26793170961?pr=6983
> src/datasets/features/features.py:844:12: E721 Use `is` and `is not` for type comparisons, or `isinstance()` for isinstance checks | null | https://api.github.com/repos/huggingface/datasets/issues/7006/timeline | null | completed | albertvillanova | 8,515,462 | MDQ6VXNlcjg1MTU0NjI= | https://avatars.githubusercontent.com/u/8515462?v=4 | https://api.github.com/users/albertvillanova | https://github.com/albertvillanova | https://api.github.com/users/albertvillanova/followers | https://api.github.com/users/albertvillanova/following{/other_user} | https://api.github.com/users/albertvillanova/gists{/gist_id} | https://api.github.com/users/albertvillanova/starred{/owner}{/repo} | https://api.github.com/users/albertvillanova/subscriptions | https://api.github.com/users/albertvillanova/orgs | https://api.github.com/users/albertvillanova/repos | https://api.github.com/users/albertvillanova/events{/privacy} | https://api.github.com/users/albertvillanova/received_events | User | public | false | 0 | 0 | 0 | 0 | 0 | 0 | 0 | https://api.github.com/repos/huggingface/datasets/issues/7006/reactions | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | null | null | null | null | albertvillanova | 8,515,462 | MDQ6VXNlcjg1MTU0NjI= | https://avatars.githubusercontent.com/u/8515462?v=4 | https://api.github.com/users/albertvillanova | https://github.com/albertvillanova | https://api.github.com/users/albertvillanova/followers | https://api.github.com/users/albertvillanova/following{/other_user} | https://api.github.com/users/albertvillanova/gists{/gist_id} | https://api.github.com/users/albertvillanova/starred{/owner}{/repo} | https://api.github.com/users/albertvillanova/subscriptions | https://api.github.com/users/albertvillanova/orgs | https://api.github.com/users/albertvillanova/repos | https://api.github.com/users/albertvillanova/events{/privacy} | https://api.github.com/users/albertvillanova/received_events | User | public | false | albertvillanova | 8,515,462 | 
MDQ6VXNlcjg1MTU0NjI= | https://avatars.githubusercontent.com/u/8515462?v=4 | https://api.github.com/users/albertvillanova | https://github.com/albertvillanova | https://api.github.com/users/albertvillanova/followers | https://api.github.com/users/albertvillanova/following{/other_user} | https://api.github.com/users/albertvillanova/gists{/gist_id} | https://api.github.com/users/albertvillanova/starred{/owner}{/repo} | https://api.github.com/users/albertvillanova/subscriptions | https://api.github.com/users/albertvillanova/orgs | https://api.github.com/users/albertvillanova/repos | https://api.github.com/users/albertvillanova/events{/privacy} | https://api.github.com/users/albertvillanova/received_events | User | public | false | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | false | [] | |||
https://api.github.com/repos/huggingface/datasets/issues/7005 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7005/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7005/comments | https://api.github.com/repos/huggingface/datasets/issues/7005/events | https://github.com/huggingface/datasets/issues/7005 | 2,378,424,349 | I_kwDODunzps6Nw-Ad | 7,005 | EmptyDatasetError: The directory at /metadata.jsonl doesn't contain any data files | [] | closed | false | null | [] | null | 3 | 1,719 | 1,719 | 1,719 | NONE | null | null | ### Describe the bug
While trying to load a custom dataset from a JSONL file, I get the error: "metadata.jsonl doesn't contain any data files".
### Steps to reproduce the bug
This is my [metadata_v2.jsonl](https://github.com/user-attachments/files/16016011/metadata_v2.json) file. I have this file in the same folder as all the images mentioned in that JSONL file.
With the command below I am trying to call `load_dataset` so that I can upload the dataset to the Hub as described on the [official website](https://huggingface.co/docs/datasets/en/image_dataset#upload-dataset-to-the-hub).
````
from datasets import load_dataset
dataset = load_dataset("imagefolder", data_dir="path/to/jsonl/metadata.jsonl")
````
error:
````
EmptyDatasetError Traceback (most recent call last)
Cell In[18], line 3
1 from datasets import load_dataset
----> 3 dataset = load_dataset("imagefolder",
4 data_dir="path/to/jsonl/file/metadata.jsonl")
5 dataset[0]["objects"]
File ~/anaconda3/envs/lvis/lib/python3.11/site-packages/datasets/load.py:2594, in load_dataset(path, name, data_dir, data_files, split, cache_dir, features, download_config, download_mode, verification_mode, ignore_verifications, keep_in_memory, save_infos, revision, token, use_auth_token, task, streaming, num_proc, storage_options, trust_remote_code, **config_kwargs)
2589 verification_mode = VerificationMode(
2590 (verification_mode or VerificationMode.BASIC_CHECKS) if not save_infos else VerificationMode.ALL_CHECKS
2591 )
2593 # Create a dataset builder
-> 2594 builder_instance = load_dataset_builder(
2595 path=path,
2596 name=name,
2597 data_dir=data_dir,
2598 data_files=data_files,
2599 cache_dir=cache_dir,
2600 features=features,
2601 download_config=download_config,
2602 download_mode=download_mode,
2603 revision=revision,
2604 token=token,
2605 storage_options=storage_options,
2606 trust_remote_code=trust_remote_code,
2607 _require_default_config_name=name is None,
2608 **config_kwargs,
2609 )
2611 # Return iterable dataset in case of streaming
2612 if streaming:
File ~/anaconda3/envs/lvis/lib/python3.11/site-packages/datasets/load.py:2266, in load_dataset_builder(path, name, data_dir, data_files, cache_dir, features, download_config, download_mode, revision, token, use_auth_token, storage_options, trust_remote_code, _require_default_config_name, **config_kwargs)
2264 download_config = download_config.copy() if download_config else DownloadConfig()
2265 download_config.storage_options.update(storage_options)
-> 2266 dataset_module = dataset_module_factory(
2267 path,
2268 revision=revision,
2269 download_config=download_config,
2270 download_mode=download_mode,
2271 data_dir=data_dir,
2272 data_files=data_files,
2273 cache_dir=cache_dir,
2274 trust_remote_code=trust_remote_code,
2275 _require_default_config_name=_require_default_config_name,
2276 _require_custom_configs=bool(config_kwargs),
2277 )
2278 # Get dataset builder class from the processing script
2279 builder_kwargs = dataset_module.builder_kwargs
File ~/anaconda3/envs/lvis/lib/python3.11/site-packages/datasets/load.py:1805, in dataset_module_factory(path, revision, download_config, download_mode, dynamic_modules_path, data_dir, data_files, cache_dir, trust_remote_code, _require_default_config_name, _require_custom_configs, **download_kwargs)
1782 # We have several ways to get a dataset builder:
1783 #
1784 # - if path is the name of a packaged dataset module
(...)
1796
1797 # Try packaged
1798 if path in _PACKAGED_DATASETS_MODULES:
1799 return PackagedDatasetModuleFactory(
1800 path,
1801 data_dir=data_dir,
1802 data_files=data_files,
1803 download_config=download_config,
1804 download_mode=download_mode,
-> 1805 ).get_module()
1806 # Try locally
1807 elif path.endswith(filename):
File ~/anaconda3/envs/lvis/lib/python3.11/site-packages/datasets/load.py:1140, in PackagedDatasetModuleFactory.get_module(self)
1135 def get_module(self) -> DatasetModule:
1136 base_path = Path(self.data_dir or "").expanduser().resolve().as_posix()
1137 patterns = (
1138 sanitize_patterns(self.data_files)
1139 if self.data_files is not None
-> 1140 else get_data_patterns(base_path, download_config=self.download_config)
1141 )
1142 data_files = DataFilesDict.from_patterns(
1143 patterns,
1144 download_config=self.download_config,
1145 base_path=base_path,
1146 )
1147 supports_metadata = self.name in _MODULE_SUPPORTS_METADATA
File ~/anaconda3/envs/lvis/lib/python3.11/site-packages/datasets/data_files.py:503, in get_data_patterns(base_path, download_config)
501 return _get_data_files_patterns(resolver)
502 except FileNotFoundError:
--> 503 raise EmptyDatasetError(f"The directory at {base_path} doesn't contain any data files") from None
EmptyDatasetError: The directory at path/to/jsonl/file/metadata.jsonl doesn't contain any data files
````
### Expected behavior
It should load the whole file as a dataset into the `dataset` variable. Instead it raises "The directory at "path/to/jsonl/metadata.jsonl" doesn't contain any data files."
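For reference, a minimal sketch of the likely fix (all paths here are hypothetical stand-ins): `imagefolder` expects `data_dir` to point at the folder that contains `metadata.jsonl` and the images, not at the `metadata.jsonl` file itself.

```python
# Hypothetical layout: data_dir must be the directory holding metadata.jsonl
# plus the images it references, not the metadata.jsonl file itself.
import json
import tempfile
from pathlib import Path

folder = Path(tempfile.mkdtemp())  # stands in for the real image folder
(folder / "metadata.jsonl").write_text(
    json.dumps({"file_name": "0001.png", "objects": []}) + "\n"
)
(folder / "0001.png").touch()  # placeholder for a real image file

# Pass the directory, so the loader can find both files:
# dataset = load_dataset("imagefolder", data_dir=str(folder))
print(sorted(p.name for p in folder.iterdir()))  # ['0001.png', 'metadata.jsonl']
```

The commented-out `load_dataset` call shows the corrected invocation; it is omitted from the runnable part because it needs a real image dataset.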
### Environment info
I am using a conda environment.
null | null | null | false | [
"Hi ! `data_dir=` is for directories, can you try using `data_files=` instead ?",
"If you are trying to load your image dataset from a local folder, you should replace \"data_dir=path/to/jsonl/metadata.jsonl\" with the real folder path in your computer.\r\n\r\nhttps://huggingface.co/docs/datasets/en/image_load#im... | ||
https://api.github.com/repos/huggingface/datasets/issues/7001 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7001/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7001/comments | https://api.github.com/repos/huggingface/datasets/issues/7001/events | https://github.com/huggingface/datasets/issues/7001 | 2,372,930,879 | I_kwDODunzps6NcA0_ | 7,001 | Datasetbuilder Local Download FileNotFoundError | [] | open | false | null | [] | null | 1 | 1,719 | 1,719 | null | NONE | null | null | ### Describe the bug
I was trying to download a dataset and save it as Parquet, following the [tutorial](https://huggingface.co/docs/datasets/filesystems#download-and-prepare-a-dataset-into-a-cloud-storage) from Hugging Face. However, during execution I hit a FileNotFoundError.
I debugged the code and it seems there is a bug there:
it first creates a `.incomplete` folder, and the following code deletes that directory before moving its contents:
[Code](https://github.com/huggingface/datasets/blob/98fdc9e78e6d057ca66e58a37f49d6618aab8130/src/datasets/builder.py#L984)
As a result I get:
```
FileNotFoundError: [Errno 2] No such file or directory: '~/data/Parquet/.incomplete '
```
### Steps to reproduce the bug
```
from datasets import load_dataset_builder
from pathlib import Path
parquet_dir = "~/data/Parquet/"
Path(parquet_dir).mkdir(parents=True, exist_ok=True)
builder = load_dataset_builder(
"rotten_tomatoes",
)
builder.download_and_prepare(parquet_dir, file_format="parquet")
```
### Expected behavior
Downloads the files and saves as parquet
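A workaround, per the reporter's follow-up comment, is to drop the trailing `"/"` from the output directory. A sketch (POSIX paths assumed): `pathlib` normalizes the trailing slash away before the path reaches `download_and_prepare`.

```python
# Workaround sketch: strip the trailing "/" that leads to the broken
# ".incomplete" path. Path() normalization removes it on POSIX systems.
from pathlib import Path

parquet_dir = "~/data/Parquet/"      # path from the report, with trailing slash
normalized = str(Path(parquet_dir))  # trailing slash removed
print(normalized)                    # ~/data/Parquet
# builder.download_and_prepare(normalized, file_format="parquet")
```

The `download_and_prepare` call is left commented out since it requires network access and the builder from the report.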
### Environment info
Ubuntu,
Python 3.10
```
datasets 2.19.1
``` | null | https://api.github.com/repos/huggingface/datasets/issues/7001/timeline | null | null | purefall | 12,601,271 | MDQ6VXNlcjEyNjAxMjcx | https://avatars.githubusercontent.com/u/12601271?v=4 | https://api.github.com/users/purefall | https://github.com/purefall | https://api.github.com/users/purefall/followers | https://api.github.com/users/purefall/following{/other_user} | https://api.github.com/users/purefall/gists{/gist_id} | https://api.github.com/users/purefall/starred{/owner}{/repo} | https://api.github.com/users/purefall/subscriptions | https://api.github.com/users/purefall/orgs | https://api.github.com/users/purefall/repos | https://api.github.com/users/purefall/events{/privacy} | https://api.github.com/users/purefall/received_events | User | public | false | 0 | 0 | 0 | 0 | 0 | 0 | 0 | https://api.github.com/repos/huggingface/datasets/issues/7001/reactions | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | false | [
"Ok it seems the solution is to use the directory string without the trailing \"/\" which in my case as: \r\n\r\n`parquet_dir = \"~/data/Parquet\" `\r\n\r\nStill i think this is a weird behavior... "
] | |
https://api.github.com/repos/huggingface/datasets/issues/7000 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7000/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7000/comments | https://api.github.com/repos/huggingface/datasets/issues/7000/events | https://github.com/huggingface/datasets/issues/7000 | 2,372,887,585 | I_kwDODunzps6Nb2Qh | 7,000 | IterableDataset: Unsupported ScalarType BFloat16 | [] | closed | false | null | [] | null | 3 | 1,719 | 1,719 | 1,719 | NONE | null | null | ### Describe the bug
`IterableDataset.from_generator` crashes when using BFloat16:
```
File "/usr/local/lib/python3.11/site-packages/datasets/utils/_dill.py", line 169, in _save_torchTensor
args = (obj.detach().cpu().numpy(),)
^^^^^^^^^^^^^^^^^^^^^^^^^^
TypeError: Got unsupported ScalarType BFloat16
```
### Steps to reproduce the bug
```python
import torch
from datasets import IterableDataset
def demo(x):
yield {"x": x}
x = torch.tensor([1.], dtype=torch.bfloat16)
dataset = IterableDataset.from_generator(
demo,
gen_kwargs=dict(x=x),
)
example = next(iter(dataset))
print(example)
```
### Expected behavior
Code sample should print:
```python
{'x': tensor([1.], dtype=torch.bfloat16)}
```
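The traceback points at `obj.detach().cpu().numpy()`: plain NumPy (without extensions such as `ml_dtypes`) has no bfloat16 dtype, which is why the pickling helper fails. A quick check of that underlying limitation, with the obvious upcast workaround as a comment:

```python
# Plain NumPy does not know bfloat16, which is what tensor.numpy() relies on.
import numpy as np

try:
    np.dtype("bfloat16")
    has_bf16 = True
except TypeError:
    has_bf16 = False
print(has_bf16)
# Until a fix lands, upcasting before yielding sidesteps the crash:
# x = torch.tensor([1.], dtype=torch.bfloat16).to(torch.float32)
```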
### Environment info
```
datasets==2.20.0
torch==2.2.2
``` | null | https://api.github.com/repos/huggingface/datasets/issues/7000/timeline | null | completed | ghost | 10,137 | MDQ6VXNlcjEwMTM3 | https://avatars.githubusercontent.com/u/10137?v=4 | https://api.github.com/users/ghost | https://github.com/ghost | https://api.github.com/users/ghost/followers | https://api.github.com/users/ghost/following{/other_user} | https://api.github.com/users/ghost/gists{/gist_id} | https://api.github.com/users/ghost/starred{/owner}{/repo} | https://api.github.com/users/ghost/subscriptions | https://api.github.com/users/ghost/orgs | https://api.github.com/users/ghost/repos | https://api.github.com/users/ghost/events{/privacy} | https://api.github.com/users/ghost/received_events | User | public | false | 0 | 0 | 0 | 0 | 0 | 0 | 0 | https://api.github.com/repos/huggingface/datasets/issues/7000/reactions | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | null | null | null | null | lhoestq | 42,851,186 | MDQ6VXNlcjQyODUxMTg2 | https://avatars.githubusercontent.com/u/42851186?v=4 | https://api.github.com/users/lhoestq | https://github.com/lhoestq | https://api.github.com/users/lhoestq/followers | https://api.github.com/users/lhoestq/following{/other_user} | https://api.github.com/users/lhoestq/gists{/gist_id} | https://api.github.com/users/lhoestq/starred{/owner}{/repo} | https://api.github.com/users/lhoestq/subscriptions | https://api.github.com/users/lhoestq/orgs | https://api.github.com/users/lhoestq/repos | https://api.github.com/users/lhoestq/events{/privacy} | https://api.github.com/users/lhoestq/received_events | User | public | false | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | false | [
"@lhoestq Thank you for merging #6607, but unfortunately the issue persists for `IterableDataset` :pensive: ",
"Hi ! I opened https://github.com/huggingface/datasets/pull/7002 to fix this bug",
"Amazing, thank you so much @lhoestq! :pray:"
] | ||
https://api.github.com/repos/huggingface/datasets/issues/6997 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6997/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6997/comments | https://api.github.com/repos/huggingface/datasets/issues/6997/events | https://github.com/huggingface/datasets/issues/6997 | 2,371,966,127 | I_kwDODunzps6NYVSv | 6,997 | CI is broken for tests using hf-internal-testing/librispeech_asr_dummy | [
{
"id": 4296013012,
"node_id": "LA_kwDODunzps8AAAABAA_01A",
"url": "https://api.github.com/repos/huggingface/datasets/labels/maintenance",
"name": "maintenance",
"color": "d4c5f9",
"default": false,
"description": "Maintenance tasks"
}
] | closed | false | null | [
{
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_... | null | 0 | 1,719 | 1,719 | 1,719 | MEMBER | null | null | CI is broken: https://github.com/huggingface/datasets/actions/runs/9657882317/job/26637998686?pr=6996
```
FAILED tests/test_inspect.py::test_get_dataset_config_names[hf-internal-testing/librispeech_asr_dummy-expected4] - AssertionError: assert ['clean'] == ['clean', 'other']
Right contains one more item: 'other'
Full diff:
[
'clean',
- 'other',
]
FAILED tests/test_inspect.py::test_get_dataset_default_config_name[hf-internal-testing/librispeech_asr_dummy-None] - AssertionError: assert 'clean' is None
```
Note that repository was recently converted to Parquet: https://huggingface.co/datasets/hf-internal-testing/librispeech_asr_dummy/commit/5be91486e11a2d616f4ec5db8d3fd248585ac07a | null | https://api.github.com/repos/huggingface/datasets/issues/6997/timeline | null | completed | albertvillanova | 8,515,462 | MDQ6VXNlcjg1MTU0NjI= | https://avatars.githubusercontent.com/u/8515462?v=4 | https://api.github.com/users/albertvillanova | https://github.com/albertvillanova | https://api.github.com/users/albertvillanova/followers | https://api.github.com/users/albertvillanova/following{/other_user} | https://api.github.com/users/albertvillanova/gists{/gist_id} | https://api.github.com/users/albertvillanova/starred{/owner}{/repo} | https://api.github.com/users/albertvillanova/subscriptions | https://api.github.com/users/albertvillanova/orgs | https://api.github.com/users/albertvillanova/repos | https://api.github.com/users/albertvillanova/events{/privacy} | https://api.github.com/users/albertvillanova/received_events | User | public | false | 0 | 0 | 0 | 0 | 0 | 0 | 0 | https://api.github.com/repos/huggingface/datasets/issues/6997/reactions | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | null | null | null | null | albertvillanova | 8,515,462 | MDQ6VXNlcjg1MTU0NjI= | https://avatars.githubusercontent.com/u/8515462?v=4 | https://api.github.com/users/albertvillanova | https://github.com/albertvillanova | https://api.github.com/users/albertvillanova/followers | https://api.github.com/users/albertvillanova/following{/other_user} | https://api.github.com/users/albertvillanova/gists{/gist_id} | https://api.github.com/users/albertvillanova/starred{/owner}{/repo} | https://api.github.com/users/albertvillanova/subscriptions | https://api.github.com/users/albertvillanova/orgs | https://api.github.com/users/albertvillanova/repos | https://api.github.com/users/albertvillanova/events{/privacy} | https://api.github.com/users/albertvillanova/received_events | User | public | false | 
albertvillanova | 8,515,462 | MDQ6VXNlcjg1MTU0NjI= | https://avatars.githubusercontent.com/u/8515462?v=4 | https://api.github.com/users/albertvillanova | https://github.com/albertvillanova | https://api.github.com/users/albertvillanova/followers | https://api.github.com/users/albertvillanova/following{/other_user} | https://api.github.com/users/albertvillanova/gists{/gist_id} | https://api.github.com/users/albertvillanova/starred{/owner}{/repo} | https://api.github.com/users/albertvillanova/subscriptions | https://api.github.com/users/albertvillanova/orgs | https://api.github.com/users/albertvillanova/repos | https://api.github.com/users/albertvillanova/events{/privacy} | https://api.github.com/users/albertvillanova/received_events | User | public | false | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | false | [] | |||
https://api.github.com/repos/huggingface/datasets/issues/6995 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6995/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6995/comments | https://api.github.com/repos/huggingface/datasets/issues/6995/events | https://github.com/huggingface/datasets/issues/6995 | 2,370,713,475 | I_kwDODunzps6NTjeD | 6,995 | ImportError when importing datasets.load_dataset | [] | closed | false | null | [
{
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_... | null | 9 | 1,719 | 1,731 | 1,719 | NONE | null | null | ### Describe the bug
I encountered an ImportError while trying to import `load_dataset` from the `datasets` module in Hugging Face. The error message indicates a problem with importing 'CommitInfo' from 'huggingface_hub'.
### Steps to reproduce the bug
1. pip install git+https://github.com/huggingface/datasets
2. from datasets import load_dataset
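As the follow-up comment suggests, `CommitInfo` only exists in newer `huggingface-hub` releases, so the usual fix is upgrading it. A minimal version gate (the `0.21.2` floor is an assumption based on the `datasets` 2.20 dependency pin, and the installed version string is hypothetical):

```python
# Hypothetical version check: old huggingface-hub installs predate CommitInfo.
def parse(v: str) -> tuple:
    """Turn a dotted version string into a comparable tuple of ints."""
    return tuple(int(p) for p in v.split("."))

installed = "0.10.1"   # stands in for huggingface_hub.__version__
required = "0.21.2"    # assumed floor from datasets' dependency pin
print(parse(installed) >= parse(required))  # False -> pip install -U huggingface-hub
```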
### Expected behavior
```
ImportError                               Traceback (most recent call last)
Cell In[7], line 1
----> 1 from datasets import load_dataset
      3 train_set = load_dataset("mispeech/speechocean762", split="train")
      4 test_set = load_dataset("mispeech/speechocean762", split="test")

File d:\Anaconda3\envs\CS224S\Lib\site-packages\datasets\__init__.py:17
      1 # Copyright 2020 The HuggingFace Datasets Authors and the TensorFlow Datasets Authors.
      2 #
      3 # Licensed under the Apache License, Version 2.0 (the "License");
   (...)
     12 # See the License for the specific language governing permissions and
     13 # limitations under the License.
     15 __version__ = "2.20.1.dev0"
---> 17 from .arrow_dataset import Dataset
     18 from .arrow_reader import ReadInstruction
     19 from .builder import ArrowBasedBuilder, BeamBasedBuilder, BuilderConfig, DatasetBuilder, GeneratorBasedBuilder

File d:\Anaconda3\envs\CS224S\Lib\site-packages\datasets\arrow_dataset.py:63
     61 import pyarrow.compute as pc
     62 from fsspec.core import url_to_fs
---> 63 from huggingface_hub import (
     64     CommitInfo,
     65     CommitOperationAdd,
    ...
     70 )
     71 from huggingface_hub.hf_api import RepoFile
     72 from multiprocess import Pool

ImportError: cannot import name 'CommitInfo' from 'huggingface_hub' (d:\Anaconda3\envs\CS224S\Lib\site-packages\huggingface_hub\__init__.py)
```
### Environment info
```
Leo@DESKTOP-9NHUAMI MSYS /d/Anaconda3/envs/CS224S/Lib/site-packages/huggingface_hub
$ datasets-cli env
Traceback (most recent call last):
  File "<frozen runpy>", line 198, in _run_module_as_main
  File "<frozen runpy>", line 88, in _run_code
  File "D:\Anaconda3\envs\CS224S\Scripts\datasets-cli.exe\__main__.py", line 4, in <module>
  File "D:\Anaconda3\envs\CS224S\Lib\site-packages\datasets\__init__.py", line 17, in <module>
    from .arrow_dataset import Dataset
  File "D:\Anaconda3\envs\CS224S\Lib\site-packages\datasets\arrow_dataset.py", line 63, in <module>
    from huggingface_hub import (
ImportError: cannot import name 'CommitInfo' from 'huggingface_hub' (D:\Anaconda3\envs\CS224S\Lib\site-packages\huggingface_hub\__init__.py)
(CS224S)
```
https://api.github.com/users/albertvillanova/following{/other_user} | https://api.github.com/users/albertvillanova/gists{/gist_id} | https://api.github.com/users/albertvillanova/starred{/owner}{/repo} | https://api.github.com/users/albertvillanova/subscriptions | https://api.github.com/users/albertvillanova/orgs | https://api.github.com/users/albertvillanova/repos | https://api.github.com/users/albertvillanova/events{/privacy} | https://api.github.com/users/albertvillanova/received_events | User | public | false | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | false | [
"What is the version of your installed `huggingface-hub`:\r\n```python\r\nimport huggingface_hub\r\nprint(huggingface_hub.__version__)\r\n```\r\n\r\nIt seems you have a very old version of `huggingface-hub`, where `CommitInfo` was not still implemented. You need to update it:\r\n```\r\npip install -U huggingface-hu... |