id (int64) | url (string) | html_url (string) | number (int64) | title (string) | state (string, 2 classes) | comments (int64) | created_at (timestamp[s]) | updated_at (timestamp[s]) | closed_at (timestamp[s], nullable) | user_login (string) | labels (list) | body (string, nullable) | is_pull_request (bool) |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
2,566,788,225 | https://api.github.com/repos/huggingface/datasets/issues/7199 | https://github.com/huggingface/datasets/pull/7199 | 7,199 | Add with_rank to Dataset.from_generator | open | 0 | 2024-10-04T16:51:53 | 2024-10-04T16:51:53 | null | muthissar | [] | Adds `with_rank` to `Dataset.from_generator`. As for `Dataset.map` and `Dataset.filter`, this is useful when creating cache files using multi-GPU. | true |
2,566,064,849 | https://api.github.com/repos/huggingface/datasets/issues/7198 | https://github.com/huggingface/datasets/pull/7198 | 7,198 | Add repeat method to datasets | closed | 4 | 2024-10-04T10:45:16 | 2025-02-05T16:32:31 | 2025-02-05T16:32:31 | alex-hh | [] | Following up on discussion in #6623 and #7198 I thought this would be pretty useful for my case so had a go at implementing.
My main motivation is to be able to call iterable_dataset.repeat(None).take(samples_per_epoch) to safely avoid timeout issues in a distributed training setting. This would provide a straightfo... | true |
2,565,924,788 | https://api.github.com/repos/huggingface/datasets/issues/7197 | https://github.com/huggingface/datasets/issues/7197 | 7,197 | ConnectionError: Couldn't reach 'allenai/c4' on the Hub (ConnectionError): the dataset won't download, what is going on? | open | 2 | 2024-10-04T09:33:25 | 2025-02-26T02:26:16 | null | Mrgengli | [] | ### Describe the bug
from datasets import load_dataset
print("11")
traindata = load_dataset('ptb_text_only', 'penn_treebank', split='train')
print("22")
valdata = load_dataset('ptb_text_only',
'penn_treebank',
split='validation')
### Steps to reproduce the b... | false |
2,564,218,566 | https://api.github.com/repos/huggingface/datasets/issues/7196 | https://github.com/huggingface/datasets/issues/7196 | 7,196 | concatenate_datasets does not preserve shuffling state | open | 1 | 2024-10-03T14:30:38 | 2025-03-18T10:56:47 | null | alex-hh | [] | ### Describe the bug
After calling concatenate_datasets on iterable datasets, the shuffling state is destroyed, similar to #7156
This means concatenation can't be used for resolving uneven numbers of samples across devices when using iterable datasets in a distributed setting, as discussed in #6623
I also noticed th... | false |
2,564,070,809 | https://api.github.com/repos/huggingface/datasets/issues/7195 | https://github.com/huggingface/datasets/issues/7195 | 7,195 | Add support for 3D datasets | open | 3 | 2024-10-03T13:27:44 | 2024-10-04T09:23:36 | null | severo | [
"enhancement"
] | See https://huggingface.co/datasets/allenai/objaverse for example | false |
2,563,364,199 | https://api.github.com/repos/huggingface/datasets/issues/7194 | https://github.com/huggingface/datasets/issues/7194 | 7,194 | datasets.exceptions.DatasetNotFoundError for private dataset | closed | 2 | 2024-10-03T07:49:36 | 2024-10-03T10:09:28 | 2024-10-03T10:09:28 | kdutia | [] | ### Describe the bug
The following Python code tries to download a private dataset and fails with the error `datasets.exceptions.DatasetNotFoundError: Dataset 'ClimatePolicyRadar/all-document-text-data-weekly' doesn't exist on the Hub or cannot be accessed.`. Downloading a public dataset doesn't work.
``` py
fro... | false |
2,562,392,887 | https://api.github.com/repos/huggingface/datasets/issues/7193 | https://github.com/huggingface/datasets/issues/7193 | 7,193 | Support of num_workers (multiprocessing) in map for IterableDataset | open | 1 | 2024-10-02T18:34:04 | 2024-10-03T09:54:15 | null | getao | [
"enhancement"
] | ### Feature request
Currently, IterableDataset doesn't support setting num_workers in .map(), which results in slow processing here. Could we add support for it? As .map() can be run in a batched fashion (e.g., batch_size defaults to 1000 in datasets), it seems doable for IterableDataset just as for the regular Dataset.... | false |
2,562,289,642 | https://api.github.com/repos/huggingface/datasets/issues/7192 | https://github.com/huggingface/datasets/issues/7192 | 7,192 | Add repeat() for iterable datasets | closed | 3 | 2024-10-02T17:48:13 | 2025-03-18T10:48:33 | 2025-03-18T10:48:32 | alex-hh | [
"enhancement"
] | ### Feature request
It would be useful to be able to straightforwardly repeat iterable datasets indefinitely, to provide complete control over starting and ending of iteration to the user.
An IterableDataset.repeat(n) function could do this automatically
### Motivation
This feature was discussed in this iss... | false |
2,562,206,949 | https://api.github.com/repos/huggingface/datasets/issues/7191 | https://github.com/huggingface/datasets/pull/7191 | 7,191 | Solution to issue: #7080 Modified load_dataset function, so that it prompts the user to select a dataset when subdatasets or splits (train, test) are available | closed | 1 | 2024-10-02T17:02:45 | 2024-11-10T08:48:21 | 2024-11-10T08:48:21 | negativenagesh | [] | # Feel free to give suggestions please..
### This PR is raised because of issue: https://github.com/huggingface/datasets/issues/7080

### This PR gives solution to https://github.com/huggingface/datasets/issues/7080
1. ... | true |
2,562,162,725 | https://api.github.com/repos/huggingface/datasets/issues/7190 | https://github.com/huggingface/datasets/issues/7190 | 7,190 | Datasets conflicts with fsspec 2024.9 | open | 1 | 2024-10-02T16:43:46 | 2024-10-10T07:33:18 | null | cw-igormorgado | [] | ### Describe the bug
Installing both at their latest versions is not possible
`pip install "datasets==3.0.1" "fsspec==2024.9.0"`
But using older version of datasets is ok
`pip install "datasets==1.24.4" "fsspec==2024.9.0"`
### Steps to reproduce the bug
`pip install "datasets==3.0.1" "fsspec==2024.9.0"`
#... | false |
2,562,152,845 | https://api.github.com/repos/huggingface/datasets/issues/7189 | https://github.com/huggingface/datasets/issues/7189 | 7,189 | Audio preview in dataset viewer for audio array data without a path/filename | open | 0 | 2024-10-02T16:38:38 | 2024-10-02T17:01:40 | null | Lauler | [
"enhancement"
] | ### Feature request
Huggingface has quite a comprehensive set of guides for [audio datasets](https://huggingface.co/docs/datasets/en/audio_dataset). It seems, however, all these guides assume the audio array data to be decoded/inserted into a HF dataset always originates from individual files. The [Audio-dataclass](... | false |
2,560,712,689 | https://api.github.com/repos/huggingface/datasets/issues/7188 | https://github.com/huggingface/datasets/pull/7188 | 7,188 | Pin multiprocess<0.70.17 to align with dill<0.3.9 | closed | 1 | 2024-10-02T05:40:18 | 2024-10-02T06:08:25 | 2024-10-02T06:08:23 | albertvillanova | [] | Pin multiprocess<0.70.17 to align with dill<0.3.9.
Note that multiprocess-0.70.17 requires dill>=0.3.9: https://github.com/uqfoundation/multiprocess/releases/tag/0.70.17
Fix #7186. | true |
2,560,501,308 | https://api.github.com/repos/huggingface/datasets/issues/7187 | https://github.com/huggingface/datasets/issues/7187 | 7,187 | shard_data_sources() got an unexpected keyword argument 'worker_id' | open | 0 | 2024-10-02T01:26:35 | 2024-10-02T01:26:35 | null | Qinghao-Hu | [] | ### Describe the bug
```
[rank0]: File "/home/qinghao/miniconda3/envs/doremi/lib/python3.10/site-packages/datasets/iterable_dataset.py", line 238, in __iter__
[rank0]: for key_example in islice(self.generate_examples_fn(**gen_kwags), shard_example_idx_start, None):
[rank0]: File "/home/qinghao/miniconda3/en... | false |
2,560,323,917 | https://api.github.com/repos/huggingface/datasets/issues/7186 | https://github.com/huggingface/datasets/issues/7186 | 7,186 | pinning `dill<0.3.9` without pinning `multiprocess` | closed | 0 | 2024-10-01T22:29:32 | 2024-10-02T06:08:24 | 2024-10-02T06:08:24 | shubhbapna | [] | ### Describe the bug
The [latest `multiprocess` release](https://github.com/uqfoundation/multiprocess/releases/tag/0.70.17) requires `dill>=0.3.9` which causes issues when installing `datasets` without backtracking during package version resolution. Is it possible to add a pin for multiprocess so something like `multi... | false |
2,558,508,748 | https://api.github.com/repos/huggingface/datasets/issues/7185 | https://github.com/huggingface/datasets/issues/7185 | 7,185 | CI benchmarks are broken | closed | 1 | 2024-10-01T08:16:08 | 2024-10-09T16:07:48 | 2024-10-09T16:07:48 | albertvillanova | [
"maintenance"
] | Since Aug 30, 2024, CI benchmarks are broken: https://github.com/huggingface/datasets/actions/runs/11108421214/job/30861323975
```
{"level":"error","message":"Resource not accessible by integration","name":"HttpError","request":{"body":"{\"body\":\"<details>\\n<summary>Show benchmarks</summary>\\n\\nPyArrow==8.0.0\\n... | false |
2,556,855,150 | https://api.github.com/repos/huggingface/datasets/issues/7184 | https://github.com/huggingface/datasets/pull/7184 | 7,184 | Pin dill<0.3.9 to fix CI | closed | 1 | 2024-09-30T14:26:25 | 2024-09-30T14:38:59 | 2024-09-30T14:38:57 | albertvillanova | [] | Pin dill<0.3.9 to fix CI for deps-latest.
Note that dill-0.3.9 was released yesterday Sep 29, 2024:
- https://pypi.org/project/dill/0.3.9/
- https://github.com/uqfoundation/dill/releases/tag/0.3.9
Fix #7183. | true |
2,556,789,055 | https://api.github.com/repos/huggingface/datasets/issues/7183 | https://github.com/huggingface/datasets/issues/7183 | 7,183 | CI is broken for deps-latest | closed | 0 | 2024-09-30T14:02:07 | 2024-09-30T14:38:58 | 2024-09-30T14:38:58 | albertvillanova | [] | See: https://github.com/huggingface/datasets/actions/runs/11106149906/job/30853879890
```
=========================== short test summary info ============================
FAILED tests/test_arrow_dataset.py::BaseDatasetTest::test_filter_caching_on_disk - AssertionError: Lists differ: [{'fi[44 chars] {'filename': '/... | false |
2,556,333,671 | https://api.github.com/repos/huggingface/datasets/issues/7182 | https://github.com/huggingface/datasets/pull/7182 | 7,182 | Support features in metadata configs | closed | 2 | 2024-09-30T11:14:53 | 2024-10-09T16:03:57 | 2024-10-09T16:03:54 | albertvillanova | [] | Support features in metadata configs, like:
```
configs:
- config_name: default
  features:
  - name: id
    dtype: int64
  - name: name
    dtype: string
  - name: score
    dtype: float64
```
This will make it possible to avoid inferring the data types.
Currently, we allow passing th... | true |
2,554,917,019 | https://api.github.com/repos/huggingface/datasets/issues/7181 | https://github.com/huggingface/datasets/pull/7181 | 7,181 | Fix datasets export to JSON | closed | 8 | 2024-09-29T12:45:20 | 2024-11-01T11:55:36 | 2024-11-01T11:55:36 | varadhbhatnagar | [] | null | true |
2,554,244,750 | https://api.github.com/repos/huggingface/datasets/issues/7180 | https://github.com/huggingface/datasets/issues/7180 | 7,180 | Memory leak when wrapping datasets into PyTorch Dataset without explicit deletion | closed | 1 | 2024-09-28T14:00:47 | 2024-09-30T12:07:56 | 2024-09-30T12:07:56 | iamwangyabin | [] | ### Describe the bug
I've encountered a memory leak when wrapping the HuggingFace dataset into a PyTorch Dataset. The RAM usage constantly increases during iteration if items are not explicitly deleted after use.
### Steps to reproduce the bug
Steps to reproduce:
Create a PyTorch Dataset wrapper f... | false |
2,552,387,980 | https://api.github.com/repos/huggingface/datasets/issues/7179 | https://github.com/huggingface/datasets/pull/7179 | 7,179 | Support Python 3.11 | closed | 1 | 2024-09-27T08:55:44 | 2024-10-08T16:21:06 | 2024-10-08T16:21:03 | albertvillanova | [] | Support Python 3.11.
Fix #7178. | true |
2,552,378,330 | https://api.github.com/repos/huggingface/datasets/issues/7178 | https://github.com/huggingface/datasets/issues/7178 | 7,178 | Support Python 3.11 | closed | 0 | 2024-09-27T08:50:47 | 2024-10-08T16:21:04 | 2024-10-08T16:21:04 | albertvillanova | [
"enhancement"
] | Support Python 3.11: https://peps.python.org/pep-0664/ | false |
2,552,371,082 | https://api.github.com/repos/huggingface/datasets/issues/7177 | https://github.com/huggingface/datasets/pull/7177 | 7,177 | Fix release instructions | closed | 1 | 2024-09-27T08:47:01 | 2024-09-27T08:57:35 | 2024-09-27T08:57:32 | albertvillanova | [] | Fix release instructions.
During last release, I had to make this additional update. | true |
2,551,025,564 | https://api.github.com/repos/huggingface/datasets/issues/7176 | https://github.com/huggingface/datasets/pull/7176 | 7,176 | fix grammar in fingerprint.py | open | 0 | 2024-09-26T16:13:42 | 2024-09-26T16:13:42 | null | jxmorris12 | [] | I see this error all the time and it was starting to get to me. | true |
2,550,957,337 | https://api.github.com/repos/huggingface/datasets/issues/7175 | https://github.com/huggingface/datasets/issues/7175 | 7,175 | [FSTimeoutError] load_dataset | closed | 7 | 2024-09-26T15:42:29 | 2025-02-01T09:09:35 | 2024-09-30T17:28:35 | cosmo3769 | [] | ### Describe the bug
When using `load_dataset`to load [HuggingFaceM4/VQAv2](https://huggingface.co/datasets/HuggingFaceM4/VQAv2), I am getting `FSTimeoutError`.
### Error
```
TimeoutError:
The above exception was the direct cause of the following exception:
FSTimeoutError Trac... | false |
2,549,892,315 | https://api.github.com/repos/huggingface/datasets/issues/7174 | https://github.com/huggingface/datasets/pull/7174 | 7,174 | Set dev version | closed | 1 | 2024-09-26T08:30:11 | 2024-09-26T08:32:39 | 2024-09-26T08:30:21 | albertvillanova | [] | null | true |
2,549,882,529 | https://api.github.com/repos/huggingface/datasets/issues/7173 | https://github.com/huggingface/datasets/pull/7173 | 7,173 | Release: 3.0.1 | closed | 1 | 2024-09-26T08:25:54 | 2024-09-26T08:28:29 | 2024-09-26T08:26:03 | albertvillanova | [] | null | true |
2,549,781,691 | https://api.github.com/repos/huggingface/datasets/issues/7172 | https://github.com/huggingface/datasets/pull/7172 | 7,172 | Add torchdata as a regular test dependency | closed | 1 | 2024-09-26T07:45:55 | 2024-09-26T08:12:12 | 2024-09-26T08:05:40 | albertvillanova | [] | Add `torchdata` as a regular test dependency.
Note that previously, `torchdata` was installed from their repo and current main branch (0.10.0.dev) requires Python>=3.9.
Also note they made a recent release: 0.8.0 on Jul 31, 2024.
Fix #7171. | true |
2,549,738,919 | https://api.github.com/repos/huggingface/datasets/issues/7171 | https://github.com/huggingface/datasets/issues/7171 | 7,171 | CI is broken: No solution found when resolving dependencies | closed | 0 | 2024-09-26T07:24:58 | 2024-09-26T08:05:41 | 2024-09-26T08:05:41 | albertvillanova | [
"bug"
] | See: https://github.com/huggingface/datasets/actions/runs/11046967444/job/30687294297
```
Run uv pip install --system -r additional-tests-requirements.txt --no-deps
× No solution found when resolving dependencies:
╰─▶ Because the current Python version (3.8.18) does not satisfy Python>=3.9
and torchdata=... | false |
2,546,944,016 | https://api.github.com/repos/huggingface/datasets/issues/7170 | https://github.com/huggingface/datasets/pull/7170 | 7,170 | Support JSON lines with missing columns | closed | 1 | 2024-09-25T05:08:15 | 2024-09-26T06:42:09 | 2024-09-26T06:42:07 | albertvillanova | [] | Support JSON lines with missing columns.
Fix #7169.
The implemented test raised:
```
datasets.table.CastError: Couldn't cast
age: int64
to
{'age': Value(dtype='int32', id=None), 'name': Value(dtype='string', id=None)}
because column names don't match
```
Related to:
- #7160
- #7162 | true |
2,546,894,076 | https://api.github.com/repos/huggingface/datasets/issues/7169 | https://github.com/huggingface/datasets/issues/7169 | 7,169 | JSON lines with missing columns raise CastError | closed | 0 | 2024-09-25T04:43:28 | 2024-09-26T06:42:08 | 2024-09-26T06:42:08 | albertvillanova | [
"bug"
] | JSON lines with missing columns raise CastError:
> CastError: Couldn't cast ... to ... because column names don't match
Related to:
- #7159
- #7161 | false |
2,546,710,631 | https://api.github.com/repos/huggingface/datasets/issues/7168 | https://github.com/huggingface/datasets/issues/7168 | 7,168 | sd1.5 diffusers controlnet training script gives new error | closed | 3 | 2024-09-25T01:42:49 | 2024-09-30T05:24:03 | 2024-09-30T05:24:02 | Night1099 | [] | ### Describe the bug
This will randomly pop up during training now
```
Traceback (most recent call last):
File "/workspace/diffusers/examples/controlnet/train_controlnet.py", line 1192, in <module>
main(args)
File "/workspace/diffusers/examples/controlnet/train_controlnet.py", line 1041, in main
... | false |
2,546,708,014 | https://api.github.com/repos/huggingface/datasets/issues/7167 | https://github.com/huggingface/datasets/issues/7167 | 7,167 | Error Mapping on sd3, sdxl and upcoming flux controlnet training scripts in diffusers | closed | 1 | 2024-09-25T01:39:51 | 2024-09-30T05:28:15 | 2024-09-30T05:28:04 | Night1099 | [] | ### Describe the bug
```
Map: 6%|██████ | 8000/138120 [19:27<5:16:36, 6.85 examples/s]
Traceback (most recent call last):
File "/workspace/diffusers/examples/controlnet/train_controlnet_sd3.py", line 1416, in <mod... | false |
2,545,608,736 | https://api.github.com/repos/huggingface/datasets/issues/7166 | https://github.com/huggingface/datasets/pull/7166 | 7,166 | fix docstring code example for distributed shuffle | closed | 1 | 2024-09-24T14:39:54 | 2024-09-24T14:42:41 | 2024-09-24T14:40:14 | lhoestq | [] | close https://github.com/huggingface/datasets/issues/7163 | true |
2,544,972,541 | https://api.github.com/repos/huggingface/datasets/issues/7165 | https://github.com/huggingface/datasets/pull/7165 | 7,165 | fix increase_load_count | closed | 3 | 2024-09-24T10:14:40 | 2024-09-24T17:31:07 | 2024-09-24T13:48:00 | lhoestq | [] | it was failing since 3.0 and therefore not updating download counts on HF or in our dashboard | true |
2,544,757,297 | https://api.github.com/repos/huggingface/datasets/issues/7164 | https://github.com/huggingface/datasets/issues/7164 | 7,164 | fsspec.exceptions.FSTimeoutError when downloading dataset | closed | 7 | 2024-09-24T08:45:05 | 2025-07-28T14:58:49 | 2025-07-28T14:58:49 | timonmerk | [] | ### Describe the bug
I am trying to download the `librispeech_asr` `clean` dataset, which results in a `FSTimeoutError` exception after downloading around 61% of the data.
### Steps to reproduce the bug
```
import datasets
datasets.load_dataset("librispeech_asr", "clean")
```
The output is as follows:
> Dow... | false |
2,542,361,234 | https://api.github.com/repos/huggingface/datasets/issues/7163 | https://github.com/huggingface/datasets/issues/7163 | 7,163 | Set explicit seed in iterable dataset ddp shuffling example | closed | 1 | 2024-09-23T11:34:06 | 2024-09-24T14:40:15 | 2024-09-24T14:40:15 | alex-hh | [] | ### Describe the bug
In the examples section of the iterable dataset docs https://huggingface.co/docs/datasets/en/package_reference/main_classes#datasets.IterableDataset
the ddp example shuffles without seeding
```python
from datasets.distributed import split_dataset_by_node
ids = ds.to_iterable_dataset(num_sh... | false |
2,542,323,382 | https://api.github.com/repos/huggingface/datasets/issues/7162 | https://github.com/huggingface/datasets/pull/7162 | 7,162 | Support JSON lines with empty struct | closed | 1 | 2024-09-23T11:16:12 | 2024-09-23T11:30:08 | 2024-09-23T11:30:06 | albertvillanova | [] | Support JSON lines with empty struct.
Fix #7161.
Related to:
- #7160 | true |
2,541,971,931 | https://api.github.com/repos/huggingface/datasets/issues/7161 | https://github.com/huggingface/datasets/issues/7161 | 7,161 | JSON lines with empty struct raise ArrowTypeError | closed | 0 | 2024-09-23T08:48:56 | 2024-09-25T04:43:44 | 2024-09-23T11:30:07 | albertvillanova | [
"bug"
] | JSON lines with empty struct raise ArrowTypeError: struct fields don't match or are in the wrong order
See example: https://huggingface.co/datasets/wikimedia/structured-wikipedia/discussions/5
> ArrowTypeError: struct fields don't match or are in the wrong order: Input fields: struct<> output fields: struct<pov_c... | false |
2,541,877,813 | https://api.github.com/repos/huggingface/datasets/issues/7160 | https://github.com/huggingface/datasets/pull/7160 | 7,160 | Support JSON lines with missing struct fields | closed | 1 | 2024-09-23T08:04:09 | 2024-09-23T11:09:19 | 2024-09-23T11:09:17 | albertvillanova | [] | Support JSON lines with missing struct fields.
Fix #7159.
The implemented test raised:
```
TypeError: Couldn't cast array of type
struct<age: int64>
to
{'age': Value(dtype='int32', id=None), 'name': Value(dtype='string', id=None)}
``` | true |
2,541,865,613 | https://api.github.com/repos/huggingface/datasets/issues/7159 | https://github.com/huggingface/datasets/issues/7159 | 7,159 | JSON lines with missing struct fields raise TypeError: Couldn't cast array | closed | 1 | 2024-09-23T07:57:58 | 2024-10-21T08:07:07 | 2024-09-23T11:09:18 | albertvillanova | [
"bug"
] | JSON lines with missing struct fields raise TypeError: Couldn't cast array of type.
See example: https://huggingface.co/datasets/wikimedia/structured-wikipedia/discussions/5
One would expect that the struct missing fields are added with null values. | false |
2,541,494,765 | https://api.github.com/repos/huggingface/datasets/issues/7158 | https://github.com/huggingface/datasets/pull/7158 | 7,158 | google colab ex | closed | 0 | 2024-09-23T03:29:50 | 2024-12-20T16:41:07 | 2024-12-20T16:41:07 | docfhsp | [] | null | true |
2,540,354,890 | https://api.github.com/repos/huggingface/datasets/issues/7157 | https://github.com/huggingface/datasets/pull/7157 | 7,157 | Fix zero proba interleave datasets | closed | 1 | 2024-09-21T15:19:14 | 2024-09-24T14:33:54 | 2024-09-24T14:33:54 | lhoestq | [] | fix https://github.com/huggingface/datasets/issues/7147 | true |
2,539,360,617 | https://api.github.com/repos/huggingface/datasets/issues/7156 | https://github.com/huggingface/datasets/issues/7156 | 7,156 | interleave_datasets resets shuffle state | open | 1 | 2024-09-20T17:57:54 | 2025-03-18T10:56:25 | null | jonathanasdf | [] | ### Describe the bug
```
import datasets
import torch.utils.data
def gen(shards):
    yield {"shards": shards}

def main():
    dataset = datasets.IterableDataset.from_generator(
        gen,
        gen_kwargs={'shards': list(range(25))}
    )
    dataset = dataset.shuffle(buffer_size=1)
    dataset... | false |
2,533,641,870 | https://api.github.com/repos/huggingface/datasets/issues/7155 | https://github.com/huggingface/datasets/issues/7155 | 7,155 | Dataset viewer not working! Failure due to more than 32 splits. | closed | 1 | 2024-09-18T12:43:21 | 2024-09-18T13:20:03 | 2024-09-18T13:20:03 | sleepingcat4 | [] | Hello guys,
I have a dataset and I didn't know I couldn't upload more than 32 splits. Now, my dataset viewer is not working. I don't have the dataset locally on my node anymore and recreating would take a week. And I have to publish the dataset coming Monday. I read about the practice, how I can resolve it and avoi... | false |
2,532,812,323 | https://api.github.com/repos/huggingface/datasets/issues/7154 | https://github.com/huggingface/datasets/pull/7154 | 7,154 | Support ndjson data files | closed | 2 | 2024-09-18T06:10:10 | 2024-09-19T11:25:17 | 2024-09-19T11:25:14 | albertvillanova | [] | Support `ndjson` (Newline Delimited JSON) data files.
Fix #7153. | true |
2,532,788,555 | https://api.github.com/repos/huggingface/datasets/issues/7153 | https://github.com/huggingface/datasets/issues/7153 | 7,153 | Support data files with .ndjson extension | closed | 0 | 2024-09-18T05:54:45 | 2024-09-19T11:25:15 | 2024-09-19T11:25:15 | albertvillanova | [
"enhancement"
] | ### Feature request
Support data files with `.ndjson` extension.
### Motivation
We already support data files with `.jsonl` extension.
### Your contribution
I am opening a PR. | false |
2,527,577,048 | https://api.github.com/repos/huggingface/datasets/issues/7151 | https://github.com/huggingface/datasets/pull/7151 | 7,151 | Align filename prefix splitting with WebDataset library | closed | 0 | 2024-09-16T06:07:39 | 2024-09-16T15:26:36 | 2024-09-16T15:26:34 | albertvillanova | [] | Align filename prefix splitting with WebDataset library.
This PR uses the same `base_plus_ext` function as the one used by the `webdataset` library.
Fix #7150.
Related to #7144. | true |
2,527,571,175 | https://api.github.com/repos/huggingface/datasets/issues/7150 | https://github.com/huggingface/datasets/issues/7150 | 7,150 | WebDataset loader splits keys differently than WebDataset library | closed | 0 | 2024-09-16T06:02:47 | 2024-09-16T15:26:35 | 2024-09-16T15:26:35 | albertvillanova | [
"bug"
] | As reported by @ragavsachdeva (see discussion here: https://github.com/huggingface/datasets/pull/7144#issuecomment-2348307792), our webdataset loader is not aligned with the `webdataset` library when splitting keys from filenames.
For example, we get a different key splitting for filename `/some/path/22.0/1.1.png`:
... | false |
2,524,497,448 | https://api.github.com/repos/huggingface/datasets/issues/7149 | https://github.com/huggingface/datasets/issues/7149 | 7,149 | Datasets Unknown Keyword Argument Error - task_templates | closed | 3 | 2024-09-13T10:30:57 | 2025-03-06T07:11:55 | 2024-09-13T14:10:48 | varungupta31 | [] | ### Describe the bug
Issue
```python
from datasets import load_dataset
examples = load_dataset('facebook/winoground', use_auth_token=<YOUR USER ACCESS TOKEN>)
```
Gives error
```
TypeError: DatasetInfo.__init__() got an unexpected keyword argument 'task_templates'
```
A simple downgrade to lower `data... | false |
2,523,833,413 | https://api.github.com/repos/huggingface/datasets/issues/7148 | https://github.com/huggingface/datasets/issues/7148 | 7,148 | Bug: Error when downloading mteb/mtop_domain | closed | 4 | 2024-09-13T04:09:39 | 2024-09-14T15:11:35 | 2024-09-14T15:11:35 | ZiyiXia | [] | ### Describe the bug
When downloading the dataset "mteb/mtop_domain", ran into the following error:
```
Traceback (most recent call last):
File "/share/project/xzy/test/test_download.py", line 3, in <module>
data = load_dataset("mteb/mtop_domain", "en", trust_remote_code=True)
File "/opt/conda/lib/pytho... | false |
2,523,129,465 | https://api.github.com/repos/huggingface/datasets/issues/7147 | https://github.com/huggingface/datasets/issues/7147 | 7,147 | IterableDataset strange deadlock | closed | 6 | 2024-09-12T18:59:33 | 2024-09-23T09:32:27 | 2024-09-21T17:37:34 | jonathanasdf | [] | ### Describe the bug
```
import datasets
import torch.utils.data
num_shards = 1024
def gen(shards):
    for shard in shards:
        if shard < 25:
            yield {"shard": shard}

def main():
    dataset = datasets.IterableDataset.from_generator(
        gen,
        gen_kwargs={"shards": lis... | false |
2,519,820,162 | https://api.github.com/repos/huggingface/datasets/issues/7146 | https://github.com/huggingface/datasets/pull/7146 | 7,146 | Set dev version | closed | 1 | 2024-09-11T13:53:27 | 2024-09-12T04:34:08 | 2024-09-12T04:34:06 | albertvillanova | [] | null | true |
2,519,789,724 | https://api.github.com/repos/huggingface/datasets/issues/7145 | https://github.com/huggingface/datasets/pull/7145 | 7,145 | Release: 3.0.0 | closed | 1 | 2024-09-11T13:41:47 | 2024-09-11T13:48:42 | 2024-09-11T13:48:41 | albertvillanova | [] | null | true |
2,519,393,560 | https://api.github.com/repos/huggingface/datasets/issues/7144 | https://github.com/huggingface/datasets/pull/7144 | 7,144 | Fix key error in webdataset | closed | 8 | 2024-09-11T10:50:17 | 2025-01-15T10:32:43 | 2024-09-13T04:31:37 | ragavsachdeva | [] | I was running into
```
example[field_name] = {"path": example["__key__"] + "." + field_name, "bytes": example[field_name]}
KeyError: 'png'
```
The issue is that a filename may have multiple "." e.g. `22.05.png`. Changing `split` to `rsplit` fixes it.
Related https://github.com/huggingface/datasets/issues/68... | true |
2,512,327,211 | https://api.github.com/repos/huggingface/datasets/issues/7143 | https://github.com/huggingface/datasets/pull/7143 | 7,143 | Modify add_column() to optionally accept a FeatureType as param | closed | 6 | 2024-09-08T10:56:57 | 2024-09-17T06:01:23 | 2024-09-16T15:11:01 | varadhbhatnagar | [] | Fix #7142.
**Before (Add + Cast)**:
```
from datasets import load_dataset, Value
ds = load_dataset("rotten_tomatoes", split="test")
lst = [i for i in range(len(ds))]
ds = ds.add_column("new_col", lst)
# Assigns int64 to new_col by default
print(ds.features)
ds = ds.cast_column("new_col", Value(dtype="u... | true |
2,512,244,938 | https://api.github.com/repos/huggingface/datasets/issues/7142 | https://github.com/huggingface/datasets/issues/7142 | 7,142 | Specifying datatype when adding a column to a dataset. | closed | 1 | 2024-09-08T07:34:24 | 2024-09-17T03:46:32 | 2024-09-17T03:46:32 | varadhbhatnagar | [
"enhancement"
] | ### Feature request
There should be a way to specify the datatype of a column in `datasets.add_column()`.
### Motivation
To specify a custom datatype, we have to use `datasets.add_column()` followed by `datasets.cast_column()` which is slow for large datasets. Another workaround is to pass a `numpy.array()` of desi... | false |
2,510,797,653 | https://api.github.com/repos/huggingface/datasets/issues/7141 | https://github.com/huggingface/datasets/issues/7141 | 7,141 | Older datasets throwing safety errors with 2.21.0 | closed | 17 | 2024-09-06T16:26:30 | 2024-09-06T21:14:14 | 2024-09-06T19:09:29 | alvations | [] | ### Describe the bug
The dataset loading was throwing some safety errors for this popular dataset `wmt14`.
[in]:
```
import datasets
# train_data = datasets.load_dataset("wmt14", "de-en", split="train")
train_data = datasets.load_dataset("wmt14", "de-en", split="train")
val_data = datasets.load_dataset(... | false |
2,508,078,858 | https://api.github.com/repos/huggingface/datasets/issues/7139 | https://github.com/huggingface/datasets/issues/7139 | 7,139 | Use load_dataset to load imagenet-1K But find a empty dataset | open | 2 | 2024-09-05T15:12:22 | 2024-10-09T04:02:41 | null | fscdc | [] | ### Describe the bug
```python
def get_dataset(data_path, train_folder="train", val_folder="val"):
    traindir = os.path.join(data_path, train_folder)
    valdir = os.path.join(data_path, val_folder)

    def transform_val_examples(examples):
        transform = Compose([
            Resize(256),
            ... | false |
2,507,738,308 | https://api.github.com/repos/huggingface/datasets/issues/7138 | https://github.com/huggingface/datasets/issues/7138 | 7,138 | Cache only changed columns? | open | 2 | 2024-09-05T12:56:47 | 2024-09-20T13:27:20 | null | Modexus | [
"enhancement"
] | ### Feature request
Cache only the actual changes to the dataset i.e. changed columns.
### Motivation
I realized that caching actually saves the complete dataset again.
This is especially problematic for image datasets if one wants to only change another column e.g. some metadata and then has to save 5 TB again.
#... | false |
2,506,851,048 | https://api.github.com/repos/huggingface/datasets/issues/7137 | https://github.com/huggingface/datasets/issues/7137 | 7,137 | [BUG] dataset_info sequence unexpected behavior in README.md YAML | closed | 3 | 2024-09-05T06:06:06 | 2025-07-07T09:20:29 | 2025-07-04T19:50:59 | ain-soph | [] | ### Describe the bug
When working on the `dataset_info` YAML, I find that my data column with format `list[dict[str, str]]` cannot be encoded correctly.
My data looks like
```
{"answers":[{"text": "ADDRESS", "label": "abc"}]}
```
My `dataset_info` in README.md is:
```
dataset_info:
- config_name: default
feature... | false |
2,506,115,857 | https://api.github.com/repos/huggingface/datasets/issues/7136 | https://github.com/huggingface/datasets/pull/7136 | 7,136 | Do not consume unnecessary memory during sharding | open | 0 | 2024-09-04T19:26:06 | 2024-09-04T19:28:23 | null | janEbert | [] | When sharding `IterableDataset`s, a temporary list is created that is then indexed. With standard `islice` functionality there is no need to create a temporary list of a potentially very large step/world size, so we avoid it.
```shell
pytest tests/test_distributed.py -k iterable
```
Runs successfully. | true |
2,503,318,328 | https://api.github.com/repos/huggingface/datasets/issues/7135 | https://github.com/huggingface/datasets/issues/7135 | 7,135 | Bug: Type Mismatch in Dataset Mapping | open | 3 | 2024-09-03T16:37:01 | 2024-09-05T14:09:05 | null | marko1616 | [] | # Issue: Type Mismatch in Dataset Mapping
## Description
There is an issue with the `map` function in the `datasets` library where the mapped output does not reflect the expected type change. After applying a mapping function to convert an integer label to a string, the resulting type remains an integer instead of ... | false |
2,499,484,041 | https://api.github.com/repos/huggingface/datasets/issues/7134 | https://github.com/huggingface/datasets/issues/7134 | 7,134 | Attempting to return a rank 3 grayscale image from dataset.map results in extreme slowdown | open | 0 | 2024-09-01T13:55:41 | 2024-09-02T10:34:53 | null | navidmafi | [] | ### Describe the bug
Background: Digital images are often represented as a (Height, Width, Channel) tensor. This is the same for huggingface datasets that contain images. These images are loaded in Pillow containers which offer, for example, the `.convert` method.
I can convert an image from a (H,W,3) shape to a... | false |
2,496,474,495 | https://api.github.com/repos/huggingface/datasets/issues/7133 | https://github.com/huggingface/datasets/pull/7133 | 7,133 | remove filecheck to enable symlinks | closed | 6 | 2024-08-30T07:36:56 | 2024-12-24T14:25:22 | 2024-12-24T14:25:22 | fschlatt | [] | Enables streaming from local symlinks #7083
@lhoestq | true |
2,494,510,464 | https://api.github.com/repos/huggingface/datasets/issues/7132 | https://github.com/huggingface/datasets/pull/7132 | 7,132 | Fix data file module inference | open | 3 | 2024-08-29T13:48:16 | 2024-09-02T19:52:13 | null | HennerM | [] | I saved a dataset with two splits to disk with `DatasetDict.save_to_disk`. The train split is bigger and ended up in 10 shards, whereas the test split only resulted in 1 shard.
Now when trying to load the dataset, an error is raised that not all splits have the same data format:
> ValueError: Couldn't infer the same da... | true |
2,491,942,650 | https://api.github.com/repos/huggingface/datasets/issues/7129 | https://github.com/huggingface/datasets/issues/7129 | 7,129 | Inconsistent output in documentation example: `num_classes` not displayed in `ClassLabel` output | closed | 0 | 2024-08-28T12:27:48 | 2024-12-06T11:32:02 | 2024-12-06T11:32:02 | sergiopaniego | [] | In the documentation for [ClassLabel](https://huggingface.co/docs/datasets/v2.21.0/en/package_reference/main_classes#datasets.ClassLabel), there is an example of usage with the following code:
````
from datasets import Features
features = Features({'label': ClassLabel(num_classes=3, names=['bad', 'ok', 'good'])})
... | false |
2,490,274,775 | https://api.github.com/repos/huggingface/datasets/issues/7128 | https://github.com/huggingface/datasets/issues/7128 | 7,128 | Filter Large Dataset Entry by Entry | open | 4 | 2024-08-27T20:31:09 | 2024-10-07T23:37:44 | null | QiyaoWei | [
"enhancement"
] | ### Feature request
I am not sure if this is a new feature, but I wanted to post this problem here, and hear if others have ways of optimizing and speeding up this process.
Let's say I have a really large dataset that I cannot load into memory. At this point, I am only aware of `streaming=True` to load the dataset.... | false |
2,486,524,966 | https://api.github.com/repos/huggingface/datasets/issues/7127 | https://github.com/huggingface/datasets/issues/7127 | 7,127 | Caching shuffles by np.random.Generator results in unintuitive behavior | open | 2 | 2024-08-26T10:29:48 | 2025-07-28T11:00:00 | null | el-hult | [] | ### Describe the bug
Create a dataset. Save it to disk. Load from disk. Shuffle, using a `np.random.Generator`. Iterate. Shuffle again. Iterate. The iterates are different since the supplied np.random.Generator has progressed between the shuffles.
Load dataset from disk again. Shuffle and Iterate. See same result ... | false |
2,485,939,495 | https://api.github.com/repos/huggingface/datasets/issues/7126 | https://github.com/huggingface/datasets/pull/7126 | 7,126 | Disable implicit token in CI | closed | 2 | 2024-08-26T05:29:46 | 2024-08-26T06:05:01 | 2024-08-26T05:59:15 | albertvillanova | [] | Disable implicit token in CI.
This PR allows running CI tests locally without implicitly using the local user HF token. For example, run locally the tests in:
- #7124 | true |
2,485,912,246 | https://api.github.com/repos/huggingface/datasets/issues/7125 | https://github.com/huggingface/datasets/pull/7125 | 7,125 | Fix wrong SHA in CI tests of HubDatasetModuleFactoryWithParquetExport | closed | 2 | 2024-08-26T05:09:35 | 2024-08-26T05:33:15 | 2024-08-26T05:27:09 | albertvillanova | [] | Fix wrong SHA in CI tests of HubDatasetModuleFactoryWithParquetExport. | true |
2,485,890,442 | https://api.github.com/repos/huggingface/datasets/issues/7124 | https://github.com/huggingface/datasets/pull/7124 | 7,124 | Test get_dataset_config_info with non-existing/gated/private dataset | closed | 2 | 2024-08-26T04:53:59 | 2024-08-26T06:15:33 | 2024-08-26T06:09:42 | albertvillanova | [] | Test get_dataset_config_info with non-existing/gated/private dataset.
Related to:
- #7109
See also:
- https://github.com/huggingface/dataset-viewer/pull/3037: https://github.com/huggingface/dataset-viewer/pull/3037/commits/bb1a7e00c53c242088597cab6572e4fd57797ecb | true |
2,484,003,937 | https://api.github.com/repos/huggingface/datasets/issues/7123 | https://github.com/huggingface/datasets/issues/7123 | 7,123 | Make dataset viewer more flexible in displaying metadata alongside images | open | 3 | 2024-08-23T22:56:01 | 2024-10-17T09:13:47 | null | egrace479 | [
"enhancement"
] | ### Feature request
To display images with their associated metadata in the dataset viewer, a `metadata.csv` file is required. In the case of a dataset with multiple subsets, this would require the CSVs to be contained in the same folder as the images since they all need to be named `metadata.csv`. The request is th... | false |
2,482,491,258 | https://api.github.com/repos/huggingface/datasets/issues/7122 | https://github.com/huggingface/datasets/issues/7122 | 7,122 | [interleave_dataset] sample batches from a single source at a time | open | 0 | 2024-08-23T07:21:15 | 2024-08-23T07:21:15 | null | memray | [
"enhancement"
] | ### Feature request
interleave_dataset and [RandomlyCyclingMultiSourcesExamplesIterable](https://github.com/huggingface/datasets/blob/3813ce846e52824b38e53895810682f0a496a2e3/src/datasets/iterable_dataset.py#L816) enable us to sample data examples from different sources. But can we also sample batches in a similar man... | false |
2,480,978,483 | https://api.github.com/repos/huggingface/datasets/issues/7121 | https://github.com/huggingface/datasets/pull/7121 | 7,121 | Fix typed examples iterable state dict | closed | 2 | 2024-08-22T14:45:03 | 2024-08-22T14:54:56 | 2024-08-22T14:49:06 | lhoestq | [] | fix https://github.com/huggingface/datasets/issues/7085 as noted by @VeryLazyBoy and reported by @AjayP13 | true |
2,480,674,237 | https://api.github.com/repos/huggingface/datasets/issues/7120 | https://github.com/huggingface/datasets/pull/7120 | 7,120 | don't mention the script if trust_remote_code=False | closed | 3 | 2024-08-22T12:32:32 | 2024-08-22T14:39:52 | 2024-08-22T14:33:52 | severo | [] | See https://huggingface.co/datasets/Omega02gdfdd/bioclip-demo-zero-shot-mistakes for example. The error is:
```
FileNotFoundError: Couldn't find a dataset script at /src/services/worker/Omega02gdfdd/bioclip-demo-zero-shot-mistakes/bioclip-demo-zero-shot-mistakes.py or any data file in the same directory. Couldn't f... | true |
2,477,766,493 | https://api.github.com/repos/huggingface/datasets/issues/7119 | https://github.com/huggingface/datasets/pull/7119 | 7,119 | Install transformers with numpy-2 CI | closed | 2 | 2024-08-21T11:14:59 | 2024-08-21T11:42:35 | 2024-08-21T11:36:50 | albertvillanova | [] | Install transformers with numpy-2 CI.
Note that transformers no longer pins numpy < 2 since transformers-4.43.0:
- https://github.com/huggingface/transformers/pull/32018
- https://github.com/huggingface/transformers/releases/tag/v4.43.0 | true |
2,477,676,893 | https://api.github.com/repos/huggingface/datasets/issues/7118 | https://github.com/huggingface/datasets/pull/7118 | 7,118 | Allow numpy-2.1 and test it without audio extra | closed | 2 | 2024-08-21T10:29:35 | 2024-08-21T11:05:03 | 2024-08-21T10:58:15 | albertvillanova | [] | Allow numpy-2.1 and test it without audio extra.
This PR reverts:
- #7114
Note that audio extra tests can be included again with numpy-2.1 once next numba-0.61.0 version is released. | true |
2,476,555,659 | https://api.github.com/repos/huggingface/datasets/issues/7117 | https://github.com/huggingface/datasets/issues/7117 | 7,117 | Audio dataset loads everything in RAM and is very slow | open | 3 | 2024-08-20T21:18:12 | 2024-08-26T13:11:55 | null | Jourdelune | [] | Hello, I'm working with an audio dataset. I want to transcribe the audio that the dataset contains, and for that I use Whisper. My issue is that the dataset loads everything into RAM when I map it; obviously, when RAM usage is too high, the program crashes.
To fix this issue, I'm using writer_batch_size tha... | false |
2,475,522,721 | https://api.github.com/repos/huggingface/datasets/issues/7116 | https://github.com/huggingface/datasets/issues/7116 | 7,116 | datasets cannot handle nested json if features is given. | closed | 3 | 2024-08-20T12:27:49 | 2024-09-03T10:18:23 | 2024-09-03T10:18:07 | ljw20180420 | [] | ### Describe the bug
I have a JSON file named temp.json.
```json
{"ref1": "ABC", "ref2": "DEF", "cuts":[{"cut1": 3, "cut2": 5}]}
```
I want to load it.
```python
ds = datasets.load_dataset('json', data_files="./temp.json", features=datasets.Features({
    'ref1': datasets.Value('string'),
    'ref2': datasets.Value... | false |
2,475,363,142 | https://api.github.com/repos/huggingface/datasets/issues/7115 | https://github.com/huggingface/datasets/issues/7115 | 7,115 | module 'pyarrow.lib' has no attribute 'ListViewType' | closed | 1 | 2024-08-20T11:05:44 | 2024-09-10T06:51:08 | 2024-09-10T06:51:08 | neurafusionai | [] | ### Describe the bug
Code:
`!pipuninstall -y pyarrow
!pip install --no-cache-dir pyarrow
!pip uninstall -y pyarrow
!pip install pyarrow --no-cache-dir
!pip install --upgrade datasets transformers pyarrow
!pip install pyarrow.parquet
! pip install pyarrow-core libparquet
!pip install pyarrow --no-cache-di... | false |
2,475,062,252 | https://api.github.com/repos/huggingface/datasets/issues/7114 | https://github.com/huggingface/datasets/pull/7114 | 7,114 | Temporarily pin numpy<2.1 to fix CI | closed | 2 | 2024-08-20T08:42:57 | 2024-08-20T09:09:27 | 2024-08-20T09:02:35 | albertvillanova | [] | Temporarily pin numpy<2.1 to fix CI.
Fix #7111. | true |
2,475,029,640 | https://api.github.com/repos/huggingface/datasets/issues/7113 | https://github.com/huggingface/datasets/issues/7113 | 7,113 | Stream dataset does not iterate if the batch size is larger than the dataset size (related to drop_last_batch) | closed | 1 | 2024-08-20T08:26:40 | 2024-08-26T04:24:11 | 2024-08-26T04:24:10 | memray | [] | ### Describe the bug
Hi there,
I use streaming and interleaving to combine multiple datasets saved in jsonl files. The size of dataset can vary (from 100ish to 100k-ish). I use dataset.map() and a big batch size to reduce the IO cost. It was working fine with datasets-2.16.1 but this problem shows up after I upgr... | false |
2,475,004,644 | https://api.github.com/repos/huggingface/datasets/issues/7112 | https://github.com/huggingface/datasets/issues/7112 | 7,112 | cudf-cu12 24.4.1, ibis-framework 8.0.0 requires pyarrow<15.0.0a0,>=14.0.1,pyarrow<16,>=2 and datasets 2.21.0 requires pyarrow>=15.0.0 | open | 2 | 2024-08-20T08:13:55 | 2024-09-20T15:30:03 | null | SoumyaMB10 | [] | ### Describe the bug
!pip install accelerate>=0.16.0 torchvision transformers>=4.25.1 datasets>=2.19.1 ftfy tensorboard Jinja2 peft==0.7.0
ERROR: pip's dependency resolver does not currently take into account all the packages that are installed. This behaviour is the source of the following dependency conflicts.
c... | false |
2,474,915,845 | https://api.github.com/repos/huggingface/datasets/issues/7111 | https://github.com/huggingface/datasets/issues/7111 | 7,111 | CI is broken for numpy-2: Failed to fetch wheel: llvmlite==0.34.0 | closed | 2 | 2024-08-20T07:27:28 | 2024-08-21T05:05:36 | 2024-08-20T09:02:36 | albertvillanova | [] | Ci is broken with error `Failed to fetch wheel: llvmlite==0.34.0`: https://github.com/huggingface/datasets/actions/runs/10466825281/job/28984414269
```
Run uv pip install --system "datasets[tests_numpy2] @ ."
Resolved 150 packages in 4.42s
error: Failed to prepare distributions
Caused by: Failed to fetch wheel: ... | false |
2,474,747,695 | https://api.github.com/repos/huggingface/datasets/issues/7110 | https://github.com/huggingface/datasets/pull/7110 | 7,110 | Fix ConnectionError for gated datasets and unauthenticated users | closed | 4 | 2024-08-20T05:26:54 | 2024-08-20T15:11:35 | 2024-08-20T09:14:35 | albertvillanova | [] | Fix `ConnectionError` for gated datasets and unauthenticated users. See:
- https://github.com/huggingface/dataset-viewer/issues/3025
Note that a recent change in the Hub returns dataset info for gated datasets and unauthenticated users, instead of raising a `GatedRepoError` as before. See:
- https://github.com/hug... | true |
2,473,367,848 | https://api.github.com/repos/huggingface/datasets/issues/7109 | https://github.com/huggingface/datasets/issues/7109 | 7,109 | ConnectionError for gated datasets and unauthenticated users | closed | 0 | 2024-08-19T13:27:45 | 2024-08-20T09:14:36 | 2024-08-20T09:14:35 | albertvillanova | [] | Since the Hub returns dataset info for gated datasets and unauthenticated users, there is dead code: https://github.com/huggingface/datasets/blob/98fdc9e78e6d057ca66e58a37f49d6618aab8130/src/datasets/load.py#L1846-L1852
We should remove the dead code and properly handle this case: currently we are raising a `Connect... | false |
2,470,665,327 | https://api.github.com/repos/huggingface/datasets/issues/7108 | https://github.com/huggingface/datasets/issues/7108 | 7,108 | website broken: Create a new dataset repository, doesn't create a new repo in Firefox | closed | 4 | 2024-08-16T17:23:00 | 2024-08-19T13:21:12 | 2024-08-19T06:52:48 | neoneye | [] | ### Describe the bug
This issue is also reported here:
https://discuss.huggingface.co/t/create-a-new-dataset-repository-broken-page/102644
This page is broken.
https://huggingface.co/new-dataset
I fill in the form with my text, and click `Create Dataset`.
... | false |
(fragments of truncated rows) used to work till 2.20.0 but doesn't work in 2.21.0 ... In 2.20.0: see `Value.dtype`. However, the `LargeList` attribute (like `Sequence.feature`) expects a `FeatureType` instead. With this renaming we avoid confusion about the expected type and... | true |
2,468,207,039 | https://api.github.com/repos/huggingface/datasets/issues/7105 | https://github.com/huggingface/datasets/pull/7105 | 7,105 | Use `huggingface_hub` cache | closed | 7 | 2024-08-15T14:45:22 | 2024-09-12T04:36:08 | 2024-08-21T15:47:16 | lhoestq | [] | - use `hf_hub_download()` from `huggingface_hub` for HF files
- `datasets` cache_dir is still used for:
  - caching datasets as Arrow files (that back `Dataset` objects)
  - extracted archives, uncompressed files
  - files downloaded via http (datasets with scripts)
- I removed code that was made for http files (... | true |
2,467,788,212 | https://api.github.com/repos/huggingface/datasets/issues/7104 | https://github.com/huggingface/datasets/pull/7104 | 7,104 | remove more script docs | closed | 2 | 2024-08-15T10:13:26 | 2024-08-15T10:24:13 | 2024-08-15T10:18:25 | lhoestq | [] | null | true |
2,467,664,581 | https://api.github.com/repos/huggingface/datasets/issues/7103 | https://github.com/huggingface/datasets/pull/7103 | 7,103 | Fix args of feature docstrings | closed | 2 | 2024-08-15T08:46:08 | 2024-08-16T09:18:29 | 2024-08-15T10:33:30 | albertvillanova | [] | Fix Args section of feature docstrings.
Currently, some args do not appear in the docs because they are not properly parsed due to the lack of their type (between parentheses). | true |
2,466,893,106 | https://api.github.com/repos/huggingface/datasets/issues/7102 | https://github.com/huggingface/datasets/issues/7102 | 7,102 | Slow iteration speeds when using IterableDataset.shuffle with load_dataset(data_files=..., streaming=True) | open | 2 | 2024-08-14T21:44:44 | 2024-08-15T16:17:31 | null | lajd | [] | ### Describe the bug
When I load a dataset from a number of arrow files, as in:
```
random_dataset = load_dataset(
    "arrow",
    data_files={split: shard_filepaths},
    streaming=True,
    split=split,
)
```
I'm able to get fast iteration speeds when iterating over the dataset without shuffling.
... | false |
2,466,510,783 | https://api.github.com/repos/huggingface/datasets/issues/7101 | https://github.com/huggingface/datasets/issues/7101 | 7,101 | `load_dataset` from Hub with `name` to specify `config` using incorrect builder type when multiple data formats are present | open | 1 | 2024-08-14T18:12:25 | 2024-08-18T10:33:38 | null | hlky | [] | Following [documentation](https://huggingface.co/docs/datasets/repository_structure#define-your-splits-and-subsets-in-yaml) I had defined different configs for [`Dataception`](https://huggingface.co/datasets/bigdata-pw/Dataception), a dataset of datasets:
```yaml
configs:
- config_name: dataception
data_files:
... | false |
2,465,529,414 | https://api.github.com/repos/huggingface/datasets/issues/7100 | https://github.com/huggingface/datasets/issues/7100 | 7,100 | IterableDataset: cannot resolve features from list of numpy arrays | open | 1 | 2024-08-14T11:01:51 | 2024-10-03T05:47:23 | null | VeryLazyBoy | [] | ### Describe the bug
when resolve features of `IterableDataset`, got `pyarrow.lib.ArrowInvalid: Can only convert 1-dimensional array values` error.
```
Traceback (most recent call last):
File "test.py", line 6
iter_ds = iter_ds._resolve_features()
File "lib/python3.10/site-packages/datasets/iterable_dat... | false |
2,465,221,827 | https://api.github.com/repos/huggingface/datasets/issues/7099 | https://github.com/huggingface/datasets/pull/7099 | 7,099 | Set dev version | closed | 2 | 2024-08-14T08:31:17 | 2024-08-14T08:45:17 | 2024-08-14T08:39:25 | albertvillanova | [] | null | true |
2,465,016,562 | https://api.github.com/repos/huggingface/datasets/issues/7098 | https://github.com/huggingface/datasets/pull/7098 | 7,098 | Release: 2.21.0 | closed | 1 | 2024-08-14T06:35:13 | 2024-08-14T06:41:07 | 2024-08-14T06:41:06 | albertvillanova | [] | null | true |
2,458,455,489 | https://api.github.com/repos/huggingface/datasets/issues/7097 | https://github.com/huggingface/datasets/issues/7097 | 7,097 | Some of DownloadConfig's properties are always being overridden in load.py | open | 0 | 2024-08-09T18:26:37 | 2024-08-09T18:26:37 | null | ductai199x | [] | ### Describe the bug
The `extract_compressed_file` and `force_extract` properties of DownloadConfig are always being set to True in the function `dataset_module_factory` in the `load.py` file. This behavior is very annoying because data extracted will just be ignored the next time the dataset is loaded.
See this im... | false |
2,456,929,173 | https://api.github.com/repos/huggingface/datasets/issues/7096 | https://github.com/huggingface/datasets/pull/7096 | 7,096 | Automatically create `cache_dir` from `cache_file_name` | closed | 3 | 2024-08-09T01:34:06 | 2024-08-15T17:25:26 | 2024-08-15T10:13:22 | ringohoffman | [] | You get a pretty unhelpful error message when specifying a `cache_file_name` in a directory that doesn't exist, e.g. `cache_file_name="./cache/data.map"`
```python
import datasets
cache_file_name="./cache/train.map"
dataset = datasets.load_dataset("ylecun/mnist")
dataset["train"].map(lambda x: x, cache_file_na... | true |