Column           Type           Values / lengths
id               int64          599M – 3.29B
url              string         lengths 58 – 61
html_url         string         lengths 46 – 51
number           int64          1 – 7.72k
title            string         lengths 1 – 290
state            string         2 values
comments         int64          0 – 70
created_at       timestamp[s]   2020-04-14 10:18:02 to 2025-08-05 09:28:51
updated_at       timestamp[s]   2020-04-27 16:04:17 to 2025-08-05 11:39:56
closed_at        timestamp[s]   2020-04-14 12:01:40 to 2025-08-01 05:15:45
user_login       string         lengths 3 – 26
labels           list           lengths 0 – 4
body             string         lengths 0 – 228k
is_pull_request  bool           2 classes
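Each row below carries the 14 fields of this schema, flattened one field per line. As a hypothetical sketch (the three sample rows are hand-copied from the table below, with most fields omitted), such records can be filtered and ranked with plain Python dicts:

```python
# Minimal sketch: each preview row as a dict keyed by the schema's field names.
rows = [
    {"number": 6166, "state": "closed", "is_pull_request": True, "comments": 3},
    {"number": 6163, "state": "open", "is_pull_request": False, "comments": 2},
    {"number": 6150, "state": "open", "is_pull_request": False, "comments": 4},
]

# Select open issues (excluding pull requests), most-commented first.
open_issues = sorted(
    (r for r in rows if r["state"] == "open" and not r["is_pull_request"]),
    key=lambda r: r["comments"],
    reverse=True,
)
print([r["number"] for r in open_issues])  # [6150, 6163]
```

The same record layout is what `datasets.load_dataset` would yield per example if this preview were loaded from the Hub.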
#6166 · Document BUILDER_CONFIG_CLASS
  id 1,861,259,055 · pull request · closed · by lhoestq · 3 comments · labels: []
  created 2023-08-22T11:27:41 · updated 2023-08-23T14:01:25 · closed 2023-08-23T13:52:36
  url: https://api.github.com/repos/huggingface/datasets/issues/6166
  html_url: https://github.com/huggingface/datasets/pull/6166
  body: Related to https://github.com/huggingface/datasets/issues/6130

#6165 · Fix multiprocessing with spawn in iterable datasets
  id 1,861,124,284 · pull request · closed · by bruno-hays · 5 comments · labels: []
  created 2023-08-22T10:07:23 · updated 2023-08-29T13:27:14 · closed 2023-08-29T13:18:11
  url: https://api.github.com/repos/huggingface/datasets/issues/6165
  html_url: https://github.com/huggingface/datasets/pull/6165
  body: The "Spawn" method is preferred when multiprocessing on macOS or Windows systems, instead of the "Fork" method on linux systems. This causes some methods of Iterable Datasets to break when using a dataloader with more than 0 workers. I fixed the issue by replacing lambda and local methods which are not pickle-abl...

#6164 · Fix: Missing a MetadataConfigs init when the repo has a `datasets_info.json` but no README
  id 1,859,560,007 · pull request · closed · by clefourrier · 3 comments · labels: []
  created 2023-08-21T14:57:54 · updated 2023-08-21T16:27:05 · closed 2023-08-21T16:18:26
  url: https://api.github.com/repos/huggingface/datasets/issues/6164
  html_url: https://github.com/huggingface/datasets/pull/6164
  body: When I try to push to an arrow repo (can provide the link on Slack), it uploads the files but fails to update the metadata, with ``` File "app.py", line 123, in add_new_eval eval_results[level].push_to_hub(my_repo, token=TOKEN, split=SPLIT) File "blabla_my_env_path/lib/python3.10/site-packages/datasets/arro...

#6163 · Error type: ArrowInvalid Details: Failed to parse string: '[254,254]' as a scalar of type int32
  id 1,857,682,241 · issue · open · by shishirCTC · 2 comments · labels: []
  created 2023-08-19T11:34:40 · updated 2025-07-22T12:04:46 · closed null
  url: https://api.github.com/repos/huggingface/datasets/issues/6163
  html_url: https://github.com/huggingface/datasets/issues/6163
  body: ### Describe the bug I am getting the following error while I am trying to upload the CSV sheet to train a model. My CSV sheet content is exactly same as shown in the example CSV file in the Auto Train page. Attaching screenshot of error for reference. I have also tried converting the index of the answer that are inte...

#6162 · load_dataset('json',...) from togethercomputer/RedPajama-Data-1T errors when jsonl rows contains different data fields
  id 1,856,198,342 · issue · open · by rbrugaro · 4 comments · labels: []
  created 2023-08-18T07:19:39 · updated 2023-08-18T17:00:35 · closed null
  url: https://api.github.com/repos/huggingface/datasets/issues/6162
  html_url: https://github.com/huggingface/datasets/issues/6162
  body: ### Describe the bug When loading some jsonl from redpajama-data-1T github source [togethercomputer/RedPajama-Data-1T](https://huggingface.co/datasets/togethercomputer/RedPajama-Data-1T) fails due to one row of the file containing an extra field called **symlink_target: string>**. When deleting that line the loading...
#6161 · Fix protocol prefix for Beam
  id 1,855,794,354 · pull request · closed · by mariosasko · 5 comments · labels: []
  created 2023-08-17T22:40:37 · updated 2024-03-18T17:01:21 · closed 2024-03-18T17:01:21
  url: https://api.github.com/repos/huggingface/datasets/issues/6161
  html_url: https://github.com/huggingface/datasets/pull/6161
  body: Fix #6147

#6160 · Fix Parquet loading with `columns`
  id 1,855,760,543 · pull request · closed · by mariosasko · 4 comments · labels: []
  created 2023-08-17T21:58:24 · updated 2023-08-17T22:44:59 · closed 2023-08-17T22:36:04
  url: https://api.github.com/repos/huggingface/datasets/issues/6160
  html_url: https://github.com/huggingface/datasets/pull/6160
  body: Fix #6149

#6159 · Add `BoundingBox` feature
  id 1,855,691,512 · issue · open · by mariosasko · 1 comment · labels: ["enhancement"]
  created 2023-08-17T20:49:51 · updated 2024-11-18T17:58:43 · closed null
  url: https://api.github.com/repos/huggingface/datasets/issues/6159
  html_url: https://github.com/huggingface/datasets/issues/6159
  body: ... to make working with object detection datasets easier. Currently, `Sequence(int_or_float, length=4)` can be used to represent this feature optimally (in the storage backend), so I only see this feature being useful if we make it work with the viewer. Also, bounding boxes usually come in 4 different formats (explain...

#6158 · [docs] Complete `to_iterable_dataset`
  id 1,855,374,220 · pull request · closed · by stevhliu · 2 comments · labels: []
  created 2023-08-17T17:02:11 · updated 2023-08-17T19:24:20 · closed 2023-08-17T19:13:15
  url: https://api.github.com/repos/huggingface/datasets/issues/6158
  html_url: https://github.com/huggingface/datasets/pull/6158
  body: Finishes the `to_iterable_dataset` documentation by adding it to the relevant sections in the tutorial and guide.

#6157 · DatasetInfo.__init__() got an unexpected keyword argument '_column_requires_decoding'
  id 1,855,265,663 · issue · closed · by aihao2000 · 13 comments · labels: []
  created 2023-08-17T15:48:11 · updated 2023-09-27T17:36:14 · closed 2023-09-27T17:36:14
  url: https://api.github.com/repos/huggingface/datasets/issues/6157
  html_url: https://github.com/huggingface/datasets/issues/6157
  body: ### Describe the bug When I was in load_dataset, it said "DatasetInfo.__init__() got an unexpected keyword argument '_column_requires_decoding'". The second time I ran it, there was no error and the dataset object worked ```python --------------------------------------------------------------------------- TypeErr...
#6156 · Why not use self._epoch as seed to shuffle in distributed training with IterableDataset
  id 1,854,768,618 · issue · closed · by npuichigo · 3 comments · labels: []
  created 2023-08-17T10:58:20 · updated 2023-08-17T14:33:15 · closed 2023-08-17T14:33:14
  url: https://api.github.com/repos/huggingface/datasets/issues/6156
  html_url: https://github.com/huggingface/datasets/issues/6156
  body: ### Describe the bug Currently, distributed training with `IterableDataset` needs to pass fixed seed to shuffle to keep each node use the same seed to avoid overlapping. https://github.com/huggingface/datasets/blob/a7f8d9019e7cb104eac4106bdc6ec0292f0dc61a/src/datasets/iterable_dataset.py#L1174-L1177 My question ...

#6155 · Raise FileNotFoundError when passing data_files that don't exist
  id 1,854,661,682 · pull request · closed · by lhoestq · 5 comments · labels: []
  created 2023-08-17T09:49:48 · updated 2023-08-18T13:45:58 · closed 2023-08-18T13:35:13
  url: https://api.github.com/repos/huggingface/datasets/issues/6155
  html_url: https://github.com/huggingface/datasets/pull/6155
  body: e.g. when running `load_dataset("parquet", data_files="doesnt_exist.parquet")`

#6154 · Use yaml instead of get data patterns when possible
  id 1,854,595,943 · pull request · closed · by lhoestq · 6 comments · labels: []
  created 2023-08-17T09:17:05 · updated 2023-08-17T20:46:25 · closed 2023-08-17T20:37:19
  url: https://api.github.com/repos/huggingface/datasets/issues/6154
  html_url: https://github.com/huggingface/datasets/pull/6154
  body: This would make the data files resolution faster: no need to list all the data files to infer the dataset builder to use. fix https://github.com/huggingface/datasets/issues/6140

#6152 · FolderBase Dataset automatically resolves under current directory when data_dir is not specified
  id 1,852,494,646 · issue · closed · by npuichigo · 19 comments · labels: ["good first issue"]
  created 2023-08-16T04:38:09 · updated 2025-06-18T14:18:42 · closed 2025-06-18T14:18:42
  url: https://api.github.com/repos/huggingface/datasets/issues/6152
  html_url: https://github.com/huggingface/datasets/issues/6152
  body: ### Describe the bug FolderBase Dataset automatically resolves under current directory when data_dir is not specified. For example: ``` load_dataset("audiofolder") ``` takes long time to resolve and collect data_files from current directory. But I think it should reach out to this line for error handling https:...

#6151 · Faster sorting for single key items
  id 1,851,497,818 · issue · closed · by jackapbutler · 2 comments · labels: ["enhancement"]
  created 2023-08-15T14:02:31 · updated 2023-08-21T14:38:26 · closed 2023-08-21T14:38:25
  url: https://api.github.com/repos/huggingface/datasets/issues/6151
  html_url: https://github.com/huggingface/datasets/issues/6151
  body: ### Feature request A faster way to sort a dataset which contains a large number of rows. ### Motivation The current sorting implementations took significantly longer than expected when I was running on a dataset trying to sort by timestamps. **Code snippet:** ```python ds = datasets.load_dataset( "json"...
#6150 · Allow dataset implement .take
  id 1,850,740,456 · issue · open · by brando90 · 4 comments · labels: ["enhancement"]
  created 2023-08-15T00:17:51 · updated 2023-08-17T13:49:37 · closed null
  url: https://api.github.com/repos/huggingface/datasets/issues/6150
  html_url: https://github.com/huggingface/datasets/issues/6150
  body: ### Feature request I want to do: ``` dataset.take(512) ``` but it only works with streaming = True ### Motivation uniform interface to data sets. Really surprising the above only works with streaming = True. ### Your contribution Should be trivial to copy paste the IterableDataset .take to use the local pa...

#6149 · Dataset.from_parquet cannot load subset of columns
  id 1,850,700,624 · issue · closed · by dwyatte · 1 comment · labels: []
  created 2023-08-14T23:28:22 · updated 2023-08-17T22:36:05 · closed 2023-08-17T22:36:05
  url: https://api.github.com/repos/huggingface/datasets/issues/6149
  html_url: https://github.com/huggingface/datasets/issues/6149
  body: ### Describe the bug When using `Dataset.from_parquet(path_or_paths, columns=[...])` and a subset of columns, loading fails with a variant of the following ``` ValueError: Couldn't cast a: int64 -- schema metadata -- pandas: '{"index_columns": [], "column_indexes": [], "columns": [{"name":' + 273 to {'a': V...

#6148 · Ignore parallel warning in map_nested
  id 1,849,524,683 · pull request · closed · by lhoestq · 3 comments · labels: []
  created 2023-08-14T10:43:41 · updated 2023-08-17T08:54:06 · closed 2023-08-17T08:43:58
  url: https://api.github.com/repos/huggingface/datasets/issues/6148
  html_url: https://github.com/huggingface/datasets/pull/6148
  body: This warning message was shown every time you pass num_proc to `load_dataset` because of `map_nested` ``` parallel_map is experimental and might be subject to breaking changes in the future ``` This PR removes it for `map_nested`. If someone uses another parallel backend they're already warned when `parallel_ba...

#6147 · ValueError when running BeamBasedBuilder with GCS path in cache_dir
  id 1,848,914,830 · issue · closed · by ktrk115 · 2 comments · labels: []
  created 2023-08-14T03:11:34 · updated 2024-03-18T16:59:15 · closed 2024-03-18T16:59:14
  url: https://api.github.com/repos/huggingface/datasets/issues/6147
  html_url: https://github.com/huggingface/datasets/issues/6147
  body: ### Describe the bug When running the BeamBasedBuilder with a GCS path specified in the cache_dir, the following ValueError occurs: ``` ValueError: Unable to get filesystem from specified path, please use the correct path or ensure the required dependency is installed, e.g., pip install apache-beam[gcp]. Path spec...

#6146 · DatasetGenerationError when load glue benchmark datasets from `load_dataset`
  id 1,848,417,366 · issue · closed · by yusx-swapp · 4 comments · labels: []
  created 2023-08-13T05:17:56 · updated 2023-08-26T22:09:09 · closed 2023-08-26T22:09:09
  url: https://api.github.com/repos/huggingface/datasets/issues/6146
  html_url: https://github.com/huggingface/datasets/issues/6146
  body: ### Describe the bug Package version: datasets-2.14.4 When I run the codes: ``` from datasets import load_dataset dataset = load_dataset("glue", "ax") ``` I got the following errors: --------------------------------------------------------------------------- SchemaInferenceError ...
#6153 · custom load dataset to hub
  id 1,852,630,074 · issue · closed · by andysingal · 5 comments · labels: []
  created 2023-08-13T04:42:22 · updated 2023-11-21T11:50:28 · closed 2023-10-08T17:04:16
  url: https://api.github.com/repos/huggingface/datasets/issues/6153
  html_url: https://github.com/huggingface/datasets/issues/6153
  body: ### System Info kaggle notebook i transformed dataset: ``` dataset = load_dataset("Dahoas/first-instruct-human-assistant-prompt") ``` to formatted_dataset: ``` Dataset({ features: ['message_tree_id', 'message_tree_text'], num_rows: 33143 }) ``` but would like to know how to upload to hub ### ...

#6145 · Export to_iterable_dataset to document
  id 1,847,811,310 · pull request · closed · by npuichigo · 2 comments · labels: []
  created 2023-08-12T07:00:14 · updated 2023-08-15T17:04:01 · closed 2023-08-15T16:55:24
  url: https://api.github.com/repos/huggingface/datasets/issues/6145
  html_url: https://github.com/huggingface/datasets/pull/6145
  body: Fix the export of a missing method of `Dataset`

#6144 · NIH exporter file not found
  id 1,847,296,711 · issue · open · by brando90 · 6 comments · labels: []
  created 2023-08-11T19:05:25 · updated 2023-08-14T23:28:38 · closed null
  url: https://api.github.com/repos/huggingface/datasets/issues/6144
  html_url: https://github.com/huggingface/datasets/issues/6144
  body: ### Describe the bug can't use or download the nih exporter pile data. ``` 15 experiment_compute_diveristy_coeff_single_dataset_then_combined_datasets_with_domain_weights() 16 File "/lfs/ampere1/0/brando9/beyond-scale-language-data-diversity/src/diversity/div_coeff.py", line 474, in experiment_compute_diveri...

#6142 · the-stack-dedup fails to generate
  id 1,846,205,216 · issue · closed · by michaelroyzen · 4 comments · labels: []
  created 2023-08-11T05:10:49 · updated 2023-08-17T09:26:13 · closed 2023-08-17T09:26:13
  url: https://api.github.com/repos/huggingface/datasets/issues/6142
  html_url: https://github.com/huggingface/datasets/issues/6142
  body: ### Describe the bug I'm getting an error generating the-stack-dedup with datasets 2.13.1, and with 2.14.4 nothing happens. ### Steps to reproduce the bug My code: ``` import os import datasets as ds MY_CACHE_DIR = "/home/ubuntu/the-stack-dedup-local" MY_TOKEN="my-token" the_stack_ds = ds.load_dataset("...

#6141 · TypeError: ClientSession._request() got an unexpected keyword argument 'https'
  id 1,846,117,729 · issue · closed · by q935970314 · 1 comment · labels: []
  created 2023-08-11T02:40:32 · updated 2023-08-30T13:51:33 · closed 2023-08-30T13:51:33
  url: https://api.github.com/repos/huggingface/datasets/issues/6141
  html_url: https://github.com/huggingface/datasets/issues/6141
  body: ### Describe the bug Hello, when I ran the [code snippet](https://huggingface.co/docs/datasets/v2.14.4/en/loading#json) on the document, I encountered the following problem: ``` Python 3.10.9 (main, Mar 1 2023, 18:23:06) [GCC 11.2.0] on linux Type "help", "copyright", "credits" or "license" for more informatio...
#6140 · Misalignment between file format specified in configs metadata YAML and the inferred builder
  id 1,845,384,712 · issue · closed · by albertvillanova · 0 comments · labels: ["bug"]
  created 2023-08-10T15:07:34 · updated 2023-08-17T20:37:20 · closed 2023-08-17T20:37:20
  url: https://api.github.com/repos/huggingface/datasets/issues/6140
  html_url: https://github.com/huggingface/datasets/issues/6140
  body: There is a misalignment between the format of the `data_files` specified in the configs metadata YAML (CSV): ```yaml configs: - config_name: default data_files: - split: train path: data.csv ``` and the inferred builder (JSON). Note there are multiple JSON files in the repo, but they do not...

#6139 · Offline dataset viewer
  id 1,844,991,583 · issue · closed · by yuvalkirstain · 7 comments · labels: ["enhancement", "dataset-viewer"]
  created 2023-08-10T11:30:00 · updated 2024-09-24T18:36:35 · closed 2023-09-29T13:10:22
  url: https://api.github.com/repos/huggingface/datasets/issues/6139
  html_url: https://github.com/huggingface/datasets/issues/6139
  body: ### Feature request The dataset viewer feature is very nice. It enables to the user to easily view the dataset. However, when working for private companies we cannot always upload the dataset to the hub. Is there a way to create dataset viewer offline? I.e. to run a code that will open some kind of html or something t...

#6138 · Ignore CI lint rule violation in Pickler.memoize
  id 1,844,952,496 · pull request · closed · by albertvillanova · 3 comments · labels: []
  created 2023-08-10T11:03:15 · updated 2023-08-10T11:31:45 · closed 2023-08-10T11:22:56
  url: https://api.github.com/repos/huggingface/datasets/issues/6138
  html_url: https://github.com/huggingface/datasets/pull/6138
  body: This PR ignores the violation of the lint rule E721 in `Pickler.memoize`. The lint rule violation was introduced in this PR: - #3182 @lhoestq is there a reason you did not use `isinstance` instead? As a hotfix, we just ignore the violation of the lint rule. Fix #6136.

#6137 · (`from_spark()`) Unable to connect HDFS in pyspark YARN setting
  id 1,844,952,312 · issue · open · by kyoungrok0517 · 0 comments · labels: []
  created 2023-08-10T11:03:08 · updated 2023-08-10T11:03:08 · closed null
  url: https://api.github.com/repos/huggingface/datasets/issues/6137
  html_url: https://github.com/huggingface/datasets/issues/6137
  body: ### Describe the bug related issue: https://github.com/apache/arrow/issues/37057#issue-1841013613 --- Hello. I'm trying to interact with HDFS storage from a driver and workers of pyspark YARN cluster. Precisely I'm using **huggingface's `datasets`** ([link](https://github.com/huggingface/datasets)) library tha...

#6136 · CI check_code_quality error: E721 Do not compare types, use `isinstance()`
  id 1,844,887,866 · issue · closed · by albertvillanova · 0 comments · labels: ["maintenance"]
  created 2023-08-10T10:19:50 · updated 2023-08-10T11:22:58 · closed 2023-08-10T11:22:58
  url: https://api.github.com/repos/huggingface/datasets/issues/6136
  html_url: https://github.com/huggingface/datasets/issues/6136
  body: After latest release of `ruff` (https://pypi.org/project/ruff/0.0.284/), we get the following CI error: ``` src/datasets/utils/py_utils.py:689:12: E721 Do not compare types, use `isinstance()` ```
#6135 · Remove unused allowed_extensions param
  id 1,844,870,943 · pull request · closed · by albertvillanova · 4 comments · labels: []
  created 2023-08-10T10:09:54 · updated 2023-08-10T12:08:38 · closed 2023-08-10T12:00:02
  url: https://api.github.com/repos/huggingface/datasets/issues/6135
  html_url: https://github.com/huggingface/datasets/pull/6135
  body: This PR removes unused `allowed_extensions` parameter from `create_builder_configs_from_metadata_configs`.

#6134 · `datasets` cannot be installed alongside `apache-beam`
  id 1,844,535,142 · issue · closed · by boyleconnor · 1 comment · labels: []
  created 2023-08-10T06:54:32 · updated 2023-09-01T03:19:49 · closed 2023-08-10T15:22:10
  url: https://api.github.com/repos/huggingface/datasets/issues/6134
  html_url: https://github.com/huggingface/datasets/issues/6134
  body: ### Describe the bug If one installs `apache-beam` alongside `datasets` (which is required for the [wikipedia](https://huggingface.co/datasets/wikipedia#dataset-summary) dataset) in certain environments (such as a Google Colab notebook), they appear to install successfully, however, actually trying to do something s...

#6133 · Dataset is slower after calling `to_iterable_dataset`
  id 1,844,511,519 · issue · open · by npuichigo · 2 comments · labels: []
  created 2023-08-10T06:36:23 · updated 2023-08-16T09:18:54 · closed null
  url: https://api.github.com/repos/huggingface/datasets/issues/6133
  html_url: https://github.com/huggingface/datasets/issues/6133
  body: ### Describe the bug Can anyone explain why looping over a dataset becomes slower after calling `to_iterable_dataset` to convert to `IterableDataset` ### Steps to reproduce the bug Any dataset after converting to `IterableDataset` ### Expected behavior Maybe it should be faster on big dataset? I only test on small...

#6132 · to_iterable_dataset is missing in document
  id 1,843,491,020 · issue · closed · by npuichigo · 1 comment · labels: []
  created 2023-08-09T15:15:03 · updated 2023-08-16T04:43:36 · closed 2023-08-16T04:43:29
  url: https://api.github.com/repos/huggingface/datasets/issues/6132
  html_url: https://github.com/huggingface/datasets/issues/6132
  body: ### Describe the bug to_iterable_dataset is missing in document ### Steps to reproduce the bug to_iterable_dataset is missing in document ### Expected behavior document enhancement ### Environment info unrelated

#6130 · default config name doesn't work when config kwargs are specified.
  id 1,843,158,846 · issue · closed · by npuichigo · 15 comments · labels: []
  created 2023-08-09T12:43:15 · updated 2023-11-22T11:50:49 · closed 2023-11-22T11:50:48
  url: https://api.github.com/repos/huggingface/datasets/issues/6130
  html_url: https://github.com/huggingface/datasets/issues/6130
  body: ### Describe the bug https://github.com/huggingface/datasets/blob/12cfc1196e62847e2e8239fbd727a02cbc86ddec/src/datasets/builder.py#L518-L522 If `config_name` is `None`, `DEFAULT_CONFIG_NAME` should be select. But once users pass `config_kwargs` to their customized `BuilderConfig`, the logic is ignored, and dataset ...
#6129 · Release 2.14.4
  id 1,841,563,517 · pull request · closed · by albertvillanova · 5 comments · labels: []
  created 2023-08-08T15:43:56 · updated 2023-08-08T16:08:22 · closed 2023-08-08T15:49:06
  url: https://api.github.com/repos/huggingface/datasets/issues/6129
  html_url: https://github.com/huggingface/datasets/pull/6129
  body: null

#6128 · IndexError: Invalid key: 88 is out of bounds for size 0
  id 1,841,545,493 · issue · closed · by TomasAndersonFang · 5 comments · labels: []
  created 2023-08-08T15:32:08 · updated 2023-12-26T07:51:57 · closed 2023-08-11T13:35:09
  url: https://api.github.com/repos/huggingface/datasets/issues/6128
  html_url: https://github.com/huggingface/datasets/issues/6128
  body: ### Describe the bug This bug generates when I use torch.compile(model) in my code, which seems to raise an error in datasets lib. ### Steps to reproduce the bug I use the following code to fine-tune Falcon on my private dataset. ```python import transformers from transformers import ( AutoModelForCausalLM...

#6127 · Fix authentication issues
  id 1,839,746,721 · pull request · closed · by albertvillanova · 8 comments · labels: []
  created 2023-08-07T15:41:25 · updated 2023-08-08T15:24:59 · closed 2023-08-08T15:16:22
  url: https://api.github.com/repos/huggingface/datasets/issues/6127
  html_url: https://github.com/huggingface/datasets/pull/6127
  body: This PR fixes 3 authentication issues: - Fix authentication when passing `token`. - Fix authentication in `Audio.decode_example` and `Image.decode_example`. - Fix authentication to resolve `data_files` in repositories without script. This PR also fixes our CI so that we properly test when passing `token` and we d...

#6126 · Private datasets do not load when passing token
  id 1,839,675,320 · issue · closed · by albertvillanova · 4 comments · labels: ["bug"]
  created 2023-08-07T15:06:47 · updated 2023-08-08T15:16:23 · closed 2023-08-08T15:16:23
  url: https://api.github.com/repos/huggingface/datasets/issues/6126
  html_url: https://github.com/huggingface/datasets/issues/6126
  body: ### Describe the bug Since the release of `datasets` 2.14, private/gated datasets do not load when passing `token`: they raise `EmptyDatasetError`. This is a non-planned backward incompatible breaking change. Note that private datasets do load if instead `download_config` is passed: ```python from datasets i...

#6125 · Reinforcement Learning and Robotics are not task categories in HF datasets metadata
  id 1,837,980,986 · issue · closed · by StoneT2000 · 0 comments · labels: []
  created 2023-08-05T23:59:42 · updated 2023-08-18T12:28:42 · closed 2023-08-18T12:28:42
  url: https://api.github.com/repos/huggingface/datasets/issues/6125
  html_url: https://github.com/huggingface/datasets/issues/6125
  body: ### Describe the bug In https://huggingface.co/models there are task categories for RL and robotics but none in https://huggingface.co/datasets Our lab is currently moving our datasets over to hugging face and would like to be able to add those 2 tags Moreover we see some older datasets that do have that tag, bu...
#6124 · Datasets crashing runs due to KeyError
  id 1,837,868,112 · issue · closed · by conceptofmind · 7 comments · labels: []
  created 2023-08-05T17:48:56 · updated 2023-11-30T16:28:57 · closed 2023-11-30T16:28:57
  url: https://api.github.com/repos/huggingface/datasets/issues/6124
  html_url: https://github.com/huggingface/datasets/issues/6124
  body: ### Describe the bug Hi all, I have been running into a pretty persistent issue recently when trying to load datasets. ```python train_dataset = load_dataset( 'llama-2-7b-tokenized', split = 'train' ) ``` I receive a KeyError which crashes the runs. ``` Traceback (most recent call...

#6123 · Inaccurate Bounding Boxes in "wildreceipt" Dataset
  id 1,837,789,294 · issue · closed · by HamzaGbada · 1 comment · labels: []
  created 2023-08-05T14:34:13 · updated 2023-08-17T14:25:27 · closed 2023-08-17T14:25:26
  url: https://api.github.com/repos/huggingface/datasets/issues/6123
  html_url: https://github.com/huggingface/datasets/issues/6123
  body: ### Describe the bug I would like to bring to your attention an issue related to the accuracy of bounding boxes within the "wildreceipt" dataset, which is made available through the Hugging Face API. Specifically, I have identified a discrepancy between the bounding boxes generated by the dataset loading commands, n...

#6122 · Upload README via `push_to_hub`
  id 1,837,335,721 · issue · closed · by liyucheng09 · 1 comment · labels: ["enhancement"]
  created 2023-08-04T21:00:27 · updated 2023-08-21T18:18:54 · closed 2023-08-21T18:18:54
  url: https://api.github.com/repos/huggingface/datasets/issues/6122
  html_url: https://github.com/huggingface/datasets/issues/6122
  body: ### Feature request `push_to_hub` now allows users to upload datasets programmatically. However, based on the latest doc, we still need to open the dataset page to add readme file manually. However, I do discover snippets to intialize a README for every `push_to_hub`: ``` dataset_card = ( DatasetCard( ...

#6121 · Small typo in the code example of create imagefolder dataset
  id 1,836,761,712 · pull request · closed · by WangXin93 · 1 comment · labels: []
  created 2023-08-04T13:36:59 · updated 2023-08-04T13:45:32 · closed 2023-08-04T13:41:43
  url: https://api.github.com/repos/huggingface/datasets/issues/6121
  html_url: https://github.com/huggingface/datasets/pull/6121
  body: Fix type of code example of load imagefolder dataset

#6120 · Lookahead streaming support?
  id 1,836,026,938 · issue · open · by PicoCreator · 1 comment · labels: ["enhancement"]
  created 2023-08-04T04:01:52 · updated 2023-08-17T17:48:42 · closed null
  url: https://api.github.com/repos/huggingface/datasets/issues/6120
  html_url: https://github.com/huggingface/datasets/issues/6120
  body: ### Feature request From what I understand, streaming dataset currently pulls the data, and process the data as it is requested. This can introduce significant latency delays when data is loaded into the training process, needing to wait for each segment. While the delays might be dataset specific (or even mappi...
#6119 · [Docs] Add description of `select_columns` to guide
  id 1,835,996,350 · pull request · closed · by unifyh · 2 comments · labels: []
  created 2023-08-04T03:13:30 · updated 2023-08-16T10:13:02 · closed 2023-08-16T10:02:52
  url: https://api.github.com/repos/huggingface/datasets/issues/6119
  html_url: https://github.com/huggingface/datasets/pull/6119
  body: Closes #6116

#6118 · IterableDataset.from_generator() fails with pickle error when provided a generator or iterator
  id 1,835,940,417 · issue · open · by finkga · 3 comments · labels: []
  created 2023-08-04T01:45:04 · updated 2024-12-18T18:30:57 · closed null
  url: https://api.github.com/repos/huggingface/datasets/issues/6118
  html_url: https://github.com/huggingface/datasets/issues/6118
  body: ### Describe the bug **Description** Providing a generator in an instantiation of IterableDataset.from_generator() fails with `TypeError: cannot pickle 'generator' object` when the generator argument is supplied with a generator. **Code example** ``` def line_generator(files: List[Path]): if isinstance(f...

#6117 · Set dev version
  id 1,835,213,848 · pull request · closed · by albertvillanova · 3 comments · labels: []
  created 2023-08-03T14:46:04 · updated 2023-08-03T14:56:59 · closed 2023-08-03T14:46:18
  url: https://api.github.com/repos/huggingface/datasets/issues/6117
  html_url: https://github.com/huggingface/datasets/pull/6117
  body: null

#6116 · [Docs] The "Process" how-to guide lacks description of `select_columns` function
  id 1,835,098,484 · issue · closed · by unifyh · 1 comment · labels: ["enhancement"]
  created 2023-08-03T13:45:10 · updated 2023-08-16T10:02:53 · closed 2023-08-16T10:02:53
  url: https://api.github.com/repos/huggingface/datasets/issues/6116
  html_url: https://github.com/huggingface/datasets/issues/6116
  body: ### Feature request The [how to process dataset guide](https://huggingface.co/docs/datasets/main/en/process) currently does not mention the [`select_columns`](https://huggingface.co/docs/datasets/main/en/package_reference/main_classes#datasets.Dataset.select_columns) function. It would be nice to include it in the gui...

#6115 · Release: 2.14.3
  id 1,834,765,485 · pull request · closed · by albertvillanova · 6 comments · labels: []
  created 2023-08-03T10:18:32 · updated 2023-08-03T15:08:02 · closed 2023-08-03T10:24:57
  url: https://api.github.com/repos/huggingface/datasets/issues/6115
  html_url: https://github.com/huggingface/datasets/pull/6115
  body: null
#6114 · Cache not being used when loading commonvoice 8.0.0
  id 1,834,015,584 · issue · closed · by clabornd · 2 comments · labels: []
  created 2023-08-02T23:18:11 · updated 2023-08-18T23:59:00 · closed 2023-08-18T23:59:00
  url: https://api.github.com/repos/huggingface/datasets/issues/6114
  html_url: https://github.com/huggingface/datasets/issues/6114
  body: ### Describe the bug I have commonvoice 8.0.0 downloaded in `~/.cache/huggingface/datasets/mozilla-foundation___common_voice_8_0/en/8.0.0/b2f8b72f8f30b2e98c41ccf855954d9e35a5fa498c43332df198534ff9797a4a`. The folder contains all the arrow files etc, and was used as the cached version last time I touched the ec2 ins...

#6113 · load_dataset() fails with streamlit caching inside docker
  id 1,833,854,030 · issue · closed · by fierval · 1 comment · labels: []
  created 2023-08-02T20:20:26 · updated 2023-08-21T18:18:27 · closed 2023-08-21T18:18:27
  url: https://api.github.com/repos/huggingface/datasets/issues/6113
  html_url: https://github.com/huggingface/datasets/issues/6113
  body: ### Describe the bug When calling `load_dataset` in a streamlit application running within a docker container, get a failure with the error message: EmptyDatasetError: The directory at hf://datasets/fetch-rewards/inc-rings-2000@bea27cf60842b3641eae418f38864a2ec4cde684 doesn't contain any data files Traceback: Fil...

#6112 · yaml error using push_to_hub with generated README.md
  id 1,833,693,299 · issue · closed · by kevintee · 1 comment · labels: []
  created 2023-08-02T18:21:21 · updated 2023-12-12T15:00:44 · closed 2023-12-12T15:00:44
  url: https://api.github.com/repos/huggingface/datasets/issues/6112
  html_url: https://github.com/huggingface/datasets/issues/6112
  body: ### Describe the bug When I construct a dataset with the following features: ``` features = Features( { "pixel_values": Array3D(dtype="float64", shape=(3, 224, 224)), "input_ids": Sequence(feature=Value(dtype="int64")), "attention_mask": Sequence(Value(dtype="int64")), "token...

#6111 · raise FileNotFoundError("Directory {dataset_path} is neither a `Dataset` directory nor a `DatasetDict` directory." )
  id 1,832,781,654 · issue · closed · by 2catycm · 3 comments · labels: []
  created 2023-08-02T09:17:29 · updated 2023-08-29T02:00:28 · closed 2023-08-29T02:00:28
  url: https://api.github.com/repos/huggingface/datasets/issues/6111
  html_url: https://github.com/huggingface/datasets/issues/6111
  body: ### Describe the bug For researchers in some countries or regions, it is usually the case that the download ability of `load_dataset` is disabled due to the complex network environment. People in these regions often prefer to use git clone or other programming tricks to manually download the files to the disk (for exa...

#6110 · [BUG] Dataset initialized from in-memory data does not create cache.
  id 1,831,110,633 · issue · closed · by MattYoon · 1 comment · labels: []
  created 2023-08-01T11:58:58 · updated 2023-08-17T14:03:01 · closed 2023-08-17T14:03:00
  url: https://api.github.com/repos/huggingface/datasets/issues/6110
  html_url: https://github.com/huggingface/datasets/issues/6110
  body: ### Describe the bug `Dataset` initialized from in-memory data (dictionary in my case, haven't tested with other types) does not create cache when processed with the `map` method, unlike `Dataset` initialized by other methods such as `load_dataset`. ### Steps to reproduce the bug ```python # below code was ru...
#6109 · Problems in downloading Amazon reviews from HF
  id 1,830,753,793 · issue · closed · by 610v4nn1 · 3 comments · labels: []
  created 2023-08-01T08:38:29 · updated 2025-07-18T17:47:30 · closed 2023-08-02T07:12:07
  url: https://api.github.com/repos/huggingface/datasets/issues/6109
  html_url: https://github.com/huggingface/datasets/issues/6109
  body: ### Describe the bug I have a script downloading `amazon_reviews_multi`. When the download starts, I get ``` Downloading data files: 0%| | 0/1 [00:00<?, ?it/s] Downloading data: 243B [00:00, 1.43MB/s] Downloading data files: 100%|██████████| 1/1 [00:01<00:00, 1.54s/it] Extracting data files: 100%...

#6108 · Loading local datasets got strangely stuck
  id 1,830,347,187 · issue · open · by LoveCatc · 7 comments · labels: []
  created 2023-08-01T02:28:06 · updated 2024-12-31T16:01:00 · closed null
  url: https://api.github.com/repos/huggingface/datasets/issues/6108
  html_url: https://github.com/huggingface/datasets/issues/6108
  body: ### Describe the bug I try to use `load_dataset()` to load several local `.jsonl` files as a dataset. Every line of these files is a json structure only containing one key `text` (yeah it is a dataset for NLP model). The code snippet is as: ```python ds = load_dataset("json", data_files=LIST_OF_FILE_PATHS, num_proc=...

#6107 · Fix deprecation of use_auth_token in file_utils
  id 1,829,625,320 · pull request · closed · by albertvillanova · 3 comments · labels: []
  created 2023-07-31T16:32:01 · updated 2023-08-03T10:13:32 · closed 2023-08-03T10:04:18
  url: https://api.github.com/repos/huggingface/datasets/issues/6107
  html_url: https://github.com/huggingface/datasets/pull/6107
  body: Fix issues with the deprecation of `use_auth_token` introduced by: - #5996 in functions: - `get_authentication_headers_for_url` - `request_etag` - `get_from_cache` Currently, `TypeError` is raised: https://github.com/huggingface/datasets-server/actions/runs/5711650666/job/15484685570?pr=1588 ``` FAILED tes...

#6106 · load local json_file as dataset
  id 1,829,131,223 · issue · closed · by CiaoHe · 2 comments · labels: []
  created 2023-07-31T12:53:49 · updated 2023-08-18T01:46:35 · closed 2023-08-18T01:46:35
  url: https://api.github.com/repos/huggingface/datasets/issues/6106
  html_url: https://github.com/huggingface/datasets/issues/6106
  body: ### Describe the bug I tried to load local json file as dataset but failed to parsing json file because some columns are 'float' type. ### Steps to reproduce the bug 1. load json file with certain columns are 'float' type. For example `data = load_data("json", data_files=JSON_PATH)` 2. Then, the error will be trigg...

#6105 · Fix error when loading from GCP bucket
  id 1,829,008,430 · pull request · closed · by albertvillanova · 5 comments · labels: []
  created 2023-07-31T11:44:46 · updated 2023-08-01T10:48:52 · closed 2023-08-01T10:38:54
  url: https://api.github.com/repos/huggingface/datasets/issues/6105
  html_url: https://github.com/huggingface/datasets/pull/6105
  body: Fix `resolve_pattern` for filesystems with tuple protocol. Fix #6100. The bug code lines were introduced by: - #6028
1,828,959,107
https://api.github.com/repos/huggingface/datasets/issues/6104
https://github.com/huggingface/datasets/issues/6104
6,104
HF Datasets data access is extremely slow even when in memory
open
1
2023-07-31T11:12:19
2023-08-01T11:22:43
null
NightMachinery
[]
### Describe the bug Doing a simple `some_dataset[:10]` can take more than a minute. Profiling it: <img width="1280" alt="image" src="https://github.com/huggingface/datasets/assets/36224762/e641fb95-ff02-4072-9016-5416a65f75ab"> `some_dataset` is completely in memory with no disk cache. This is proving fat...
false
1,828,515,165
https://api.github.com/repos/huggingface/datasets/issues/6103
https://github.com/huggingface/datasets/pull/6103
6,103
Set dev version
closed
3
2023-07-31T06:44:05
2023-07-31T06:55:58
2023-07-31T06:45:41
albertvillanova
[]
null
true
1,828,494,896
https://api.github.com/repos/huggingface/datasets/issues/6102
https://github.com/huggingface/datasets/pull/6102
6,102
Release 2.14.2
closed
4
2023-07-31T06:27:47
2023-07-31T06:48:09
2023-07-31T06:32:58
albertvillanova
[]
null
true
1,828,469,648
https://api.github.com/repos/huggingface/datasets/issues/6101
https://github.com/huggingface/datasets/pull/6101
6,101
Release 2.14.2
closed
3
2023-07-31T06:05:36
2023-07-31T06:33:00
2023-07-31T06:18:17
albertvillanova
[]
null
true
1,828,118,930
https://api.github.com/repos/huggingface/datasets/issues/6100
https://github.com/huggingface/datasets/issues/6100
6,100
TypeError when loading from GCP bucket
closed
2
2023-07-30T23:03:00
2023-08-03T10:00:48
2023-08-01T10:38:55
bilelomrani1
[]
### Describe the bug Loading a dataset from a GCP bucket raises a type error. This bug was introduced recently (either in 2.14 or 2.14.1), and appeared during a migration from 2.13.1. ### Steps to reproduce the bug Load any file from a GCP bucket: ```python import datasets datasets.load_dataset("json", data_f...
false
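The `TypeError` above (fixed by #6105) stems from fsspec filesystems whose `protocol` attribute is a tuple of aliases rather than a single string, as with GCS. A minimal sketch of the normalization involved, using a hypothetical helper name rather than the actual `resolve_pattern` internals:

```python
def prepend_protocol(path: str, protocol) -> str:
    """Prefix `path` with an fsspec protocol that may be a str or a tuple.

    fsspec filesystems expose `protocol` as either a string ("s3") or a
    tuple of aliases (e.g. ("gs", "gcs")); taking the first alias avoids
    the TypeError raised when a tuple is concatenated into a URL.
    """
    if isinstance(protocol, tuple):
        protocol = protocol[0]
    if "://" in path:
        return path  # already fully qualified
    return f"{protocol}://{path}"
```

For example, `prepend_protocol("bucket/data.json", ("gs", "gcs"))` yields `"gs://bucket/data.json"`, while an already-qualified URL is left untouched.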
1,827,893,576
https://api.github.com/repos/huggingface/datasets/issues/6099
https://github.com/huggingface/datasets/issues/6099
6,099
How do I get "amazon_us_reviews"
closed
10
2023-07-30T11:02:17
2023-08-21T05:08:08
2023-08-10T05:02:35
IqraBaluch
[ "enhancement" ]
### Feature request I have been trying to load 'amazon_us_reviews' but am unable to do so. `amazon_us_reviews = load_dataset('amazon_us_reviews')` `print(amazon_us_reviews)` > [ValueError: Config name is missing. Please pick one among the available configs: ['Wireless_v1_00', 'Watches_v1_00', 'Video_Games_v1...
false
1,827,655,071
https://api.github.com/repos/huggingface/datasets/issues/6098
https://github.com/huggingface/datasets/pull/6098
6,098
Expanduser in save_to_disk()
closed
3
2023-07-29T20:50:45
2023-10-27T14:14:11
2023-10-27T14:04:36
Unknown3141592
[]
Fixes #5651. The same problem occurs when loading from disk so I fixed it there too. I am not sure why the case distinction between local and remote filesystems is even necessary for `DatasetDict` when saving to disk. Imo this could be removed (leaving only `fs.makedirs(dataset_dict_path, exist_ok=True)`).
true
1,827,054,143
https://api.github.com/repos/huggingface/datasets/issues/6097
https://github.com/huggingface/datasets/issues/6097
6,097
Dataset.get_nearest_examples does not return all feature values for the k most similar datapoints - side effect of Dataset.set_format
closed
1
2023-07-28T20:31:59
2023-07-28T20:49:58
2023-07-28T20:49:58
aschoenauer-sebag
[]
### Describe the bug Hi team! I observe that there seems to be a side effect of `Dataset.set_format`: after setting a format and creating a FAISS index, the method `get_nearest_examples` from the `Dataset` class, fails to retrieve anything else but the embeddings themselves - not super useful. This is not the case ...
false
1,826,731,091
https://api.github.com/repos/huggingface/datasets/issues/6096
https://github.com/huggingface/datasets/pull/6096
6,096
Add `fsspec` support for `to_json`, `to_csv`, and `to_parquet`
closed
5
2023-07-28T16:36:59
2024-05-28T07:40:30
2024-03-06T11:12:42
alvarobartt
[]
Hi to whoever is reading this! 🤗 (Most likely @mariosasko) ## What's in this PR? This PR replaces the `open` from Python with `fsspec.open` and adds the argument `storage_options` for the methods `to_json`, `to_csv`, and `to_parquet`, to allow users to export any 🤗`Dataset` into a file in a file-system as reque...
true
1,826,496,967
https://api.github.com/repos/huggingface/datasets/issues/6095
https://github.com/huggingface/datasets/pull/6095
6,095
Fix deprecation of errors in TextConfig
closed
3
2023-07-28T14:08:37
2023-07-31T05:26:32
2023-07-31T05:17:38
albertvillanova
[]
This PR fixes an issue with the deprecation of `errors` in `TextConfig` introduced by: - #5974 ```python In [1]: ds = load_dataset("text", data_files="test.txt", errors="strict") --------------------------------------------------------------------------- TypeError Traceback (most ...
true
1,826,293,414
https://api.github.com/repos/huggingface/datasets/issues/6094
https://github.com/huggingface/datasets/pull/6094
6,094
Fix deprecation of use_auth_token in DownloadConfig
closed
3
2023-07-28T11:52:21
2023-07-31T05:08:41
2023-07-31T04:59:50
albertvillanova
[]
This PR fixes an issue with the deprecation of `use_auth_token` in `DownloadConfig` introduced by: - #5996 ```python In [1]: from datasets import DownloadConfig In [2]: DownloadConfig(use_auth_token=False) --------------------------------------------------------------------------- TypeError ...
true
1,826,210,490
https://api.github.com/repos/huggingface/datasets/issues/6093
https://github.com/huggingface/datasets/pull/6093
6,093
Deprecate `download_custom`
closed
6
2023-07-28T10:49:06
2023-08-21T17:51:34
2023-07-28T11:30:02
mariosasko
[]
Deprecate `DownloadManager.download_custom`. Users should use `fsspec` URLs (cacheable) or make direct requests with `fsspec`/`requests` (not cacheable) instead. We should deprecate this method as it's not compatible with streaming, and implementing the streaming version of it is hard/impossible. There have been req...
true
1,826,111,806
https://api.github.com/repos/huggingface/datasets/issues/6092
https://github.com/huggingface/datasets/pull/6092
6,092
Minor fix in `iter_files` for hidden files
closed
3
2023-07-28T09:50:12
2023-07-28T10:59:28
2023-07-28T10:50:10
mariosasko
[]
Fix #6090
true
1,826,086,487
https://api.github.com/repos/huggingface/datasets/issues/6091
https://github.com/huggingface/datasets/pull/6091
6,091
Bump fsspec from 2021.11.1 to 2022.3.0
closed
3
2023-07-28T09:37:15
2023-07-28T10:16:11
2023-07-28T10:07:02
mariosasko
[]
Fix https://github.com/huggingface/datasets/issues/6087 (Colab installs 2023.6.0, so we should be good)
true
1,825,865,043
https://api.github.com/repos/huggingface/datasets/issues/6090
https://github.com/huggingface/datasets/issues/6090
6,090
FilesIterable skips all the files after a hidden file
closed
1
2023-07-28T07:25:57
2023-07-28T10:51:14
2023-07-28T10:50:11
dkrivosic
[]
### Describe the bug When initializing `FilesIterable` with a list of file paths using `FilesIterable.from_paths`, it will discard all the files after a hidden file. The problem is in [this line](https://github.com/huggingface/datasets/blob/88896a7b28610ace95e444b94f9a4bc332cc1ee3/src/datasets/download/download_manag...
false
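The bug fixed in #6092 came from ending iteration at the first hidden file instead of skipping it. A simplified sketch of the corrected behaviour, assuming a plain list of paths rather than the real `FilesIterable`:

```python
import os

def iter_visible_files(paths):
    """Yield paths whose basename is not hidden (no "." or "__" prefix).

    Using `continue` rather than `return` on a hidden entry is the heart
    of the fix: one hidden file must not end the whole iteration.
    """
    for path in paths:
        if os.path.basename(path).startswith((".", "__")):
            continue  # skip this file only, keep iterating
        yield path
```

With the buggy `return` in place of `continue`, `["a.txt", ".hidden", "b.txt"]` would yield only `"a.txt"`; the corrected version also yields `"b.txt"`.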
1,825,761,476
https://api.github.com/repos/huggingface/datasets/issues/6089
https://github.com/huggingface/datasets/issues/6089
6,089
AssertionError: daemonic processes are not allowed to have children
open
2
2023-07-28T06:04:00
2023-07-31T02:34:02
null
codingl2k1
[]
### Describe the bug When I load_dataset with num_proc > 0 in a daemon process, I get an error: ```python File "/Users/codingl2k1/Work/datasets/src/datasets/download/download_manager.py", line 564, in download_and_extract return self.extract(self.download(url_or_urls)) ^^^^^^^^^^^^^^^^^ File "/Users...
false
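The `AssertionError` above is a Python `multiprocessing` restriction: a daemonic process may not spawn children, so `num_proc > 0` cannot work inside one. A sketch of a guard a caller could apply before invoking `load_dataset` (`safe_num_proc` is a hypothetical helper, not part of the `datasets` API):

```python
import multiprocessing as mp

def safe_num_proc(requested):
    """Return a num_proc value that is safe in the current process.

    Daemonic processes are not allowed to have children, so fall back to
    single-process loading when running inside one.
    """
    if mp.current_process().daemon:
        return None  # load_dataset(..., num_proc=None) disables multiprocessing
    return requested
```

In the main (non-daemonic) process this passes the requested value through unchanged.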
1,825,665,235
https://api.github.com/repos/huggingface/datasets/issues/6088
https://github.com/huggingface/datasets/issues/6088
6,088
Loading local data files initiates web requests
closed
0
2023-07-28T04:06:26
2023-07-28T05:02:22
2023-07-28T05:02:22
lytning98
[]
As documented in the [official docs](https://huggingface.co/docs/datasets/v2.14.0/en/package_reference/loading_methods#datasets.load_dataset.example-2), I tried to load datasets from local files by ```python # Load a JSON file from datasets import load_dataset ds = load_dataset('json', data_files='path/to/local/my_...
false
1,825,133,741
https://api.github.com/repos/huggingface/datasets/issues/6087
https://github.com/huggingface/datasets/issues/6087
6,087
fsspec dependency is set too low
closed
1
2023-07-27T20:08:22
2023-07-28T10:07:56
2023-07-28T10:07:03
iXce
[]
### Describe the bug fsspec.callbacks.TqdmCallback (used in https://github.com/huggingface/datasets/blob/73bed12ecda17d1573fd3bf73ed5db24d3622f86/src/datasets/utils/file_utils.py#L338) was first released in fsspec [2022.3.0](https://github.com/fsspec/filesystem_spec/releases/tag/2022.3.0, commit where it was added: ht...
false
1,825,009,268
https://api.github.com/repos/huggingface/datasets/issues/6086
https://github.com/huggingface/datasets/issues/6086
6,086
Support `fsspec` in `Dataset.to_<format>` methods
closed
5
2023-07-27T19:08:37
2024-03-07T07:22:43
2024-03-07T07:22:42
mariosasko
[ "enhancement" ]
Supporting this should be fairly easy. Requested on the forum [here](https://discuss.huggingface.co/t/how-can-i-convert-a-loaded-dataset-in-to-a-parquet-file-and-save-it-to-the-s3/48353).
false
1,824,985,188
https://api.github.com/repos/huggingface/datasets/issues/6085
https://github.com/huggingface/datasets/pull/6085
6,085
Fix `fsspec` download
open
3
2023-07-27T18:54:47
2023-07-27T19:06:13
null
mariosasko
[]
Testing `ds = load_dataset("audiofolder", data_files="s3://datasets.huggingface.co/SpeechCommands/v0.01/v0.01_test.tar.gz", storage_options={"anon": True})` and trying to fix the issues raised by `fsspec` ... TODO: fix ``` self.session = aiobotocore.session.AioSession(**self.kwargs) TypeError: __init__() got ...
true
1,824,896,761
https://api.github.com/repos/huggingface/datasets/issues/6084
https://github.com/huggingface/datasets/issues/6084
6,084
Changing pixel values of images in the Winoground dataset
open
0
2023-07-27T17:55:35
2023-07-27T17:55:35
null
ZitengWangNYU
[]
Hi, as I followed the instructions, with the latest "datasets" version: " from datasets import load_dataset examples = load_dataset('facebook/winoground', use_auth_token=<YOUR USER ACCESS TOKEN>) " I got slightly different datasets in Colab and in my HPC environment. Specifically, the pixel values of images are slight...
false
1,824,832,348
https://api.github.com/repos/huggingface/datasets/issues/6083
https://github.com/huggingface/datasets/pull/6083
6,083
set dev version
closed
3
2023-07-27T17:10:41
2023-07-27T17:22:05
2023-07-27T17:11:01
lhoestq
[]
null
true
1,824,819,672
https://api.github.com/repos/huggingface/datasets/issues/6082
https://github.com/huggingface/datasets/pull/6082
6,082
Release: 2.14.1
closed
6
2023-07-27T17:05:54
2023-07-31T06:32:16
2023-07-27T17:08:38
lhoestq
[]
null
true
1,824,486,278
https://api.github.com/repos/huggingface/datasets/issues/6081
https://github.com/huggingface/datasets/pull/6081
6,081
Deprecate `Dataset.export`
closed
2
2023-07-27T14:22:18
2023-07-28T11:09:54
2023-07-28T11:01:04
mariosasko
[]
Deprecate `Dataset.export` that generates a TFRecord file from a dataset as this method is undocumented, and the usage seems low. Users should use [TFRecordWriter](https://www.tensorflow.org/api_docs/python/tf/io/TFRecordWriter#write) or the official [TFRecord](https://www.tensorflow.org/tutorials/load_data/tfrecord) t...
true
1,822,667,554
https://api.github.com/repos/huggingface/datasets/issues/6080
https://github.com/huggingface/datasets/pull/6080
6,080
Remove README link to deprecated Colab notebook
closed
3
2023-07-26T15:27:49
2023-07-26T16:24:43
2023-07-26T16:14:34
mariosasko
[]
null
true
1,822,597,471
https://api.github.com/repos/huggingface/datasets/issues/6079
https://github.com/huggingface/datasets/issues/6079
6,079
Iterating over DataLoader based on HF datasets is stuck forever
closed
15
2023-07-26T14:52:37
2024-02-07T17:46:52
2023-07-30T14:09:06
arindamsarkar93
[]
### Describe the bug I am using an Amazon SageMaker notebook (Amazon Linux 2) with a Python 3.10 based Conda environment. I have a dataset in Parquet format locally. When I try to iterate over it, the loader is stuck forever. Note that the same code works seamlessly in a Python 3.6 based Conda environment. What shou...
false
1,822,501,472
https://api.github.com/repos/huggingface/datasets/issues/6078
https://github.com/huggingface/datasets/issues/6078
6,078
resume_download with streaming=True
closed
3
2023-07-26T14:08:22
2023-07-28T11:05:03
2023-07-28T11:05:03
NicolasMICAUX
[]
### Describe the bug I used: ``` dataset = load_dataset( "oscar-corpus/OSCAR-2201", token=True, language="fr", streaming=True, split="train" ) ``` Unfortunately, the server had a problem during the training process. I saved the step my training stopped at. But how can I resume download f...
false
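With `streaming=True` there is no partial-download file to resume, but a streamed dataset can be fast-forwarded past the examples already consumed, given a step count saved by the training loop. A sketch of that pattern (recent `datasets` releases expose the same idea as `IterableDataset.skip`):

```python
from itertools import islice

def resume_stream(iterable_dataset, seen: int):
    """Skip the first `seen` examples of a streamed dataset.

    The skipped examples are still read from the stream and discarded,
    so this trades bandwidth for a consistent resume position.
    """
    return islice(iterable_dataset, seen, None)
```

Usage: if training stopped after 1,000 examples, iterate over `resume_stream(dataset, 1000)` to continue from example 1,001.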
1,822,486,810
https://api.github.com/repos/huggingface/datasets/issues/6077
https://github.com/huggingface/datasets/issues/6077
6,077
Mapping gets stuck at 99%
open
6
2023-07-26T14:00:40
2024-07-22T12:28:06
null
Laurent2916
[]
### Describe the bug Hi ! I'm currently working with a large (~150GB) unnormalized dataset at work. The dataset is available on a read-only filesystem internally, and I use a [loading script](https://huggingface.co/docs/datasets/dataset_script) to retrieve it. I want to normalize the features of the dataset, ...
false
1,822,345,597
https://api.github.com/repos/huggingface/datasets/issues/6076
https://github.com/huggingface/datasets/pull/6076
6,076
No gzip encoding from github
closed
3
2023-07-26T12:46:07
2023-07-27T16:15:11
2023-07-27T16:14:40
lhoestq
[]
Don't accept gzip encoding from github, otherwise some files are not streamable + seekable. fix https://huggingface.co/datasets/code_x_glue_cc_code_to_code_trans/discussions/2#64c0e0c1a04a514ba6303e84 and making sure https://github.com/huggingface/datasets/issues/2918 works as well
true
1,822,341,398
https://api.github.com/repos/huggingface/datasets/issues/6075
https://github.com/huggingface/datasets/issues/6075
6,075
Error loading music files using `load_dataset`
closed
2
2023-07-26T12:44:05
2023-07-26T13:08:08
2023-07-26T13:08:08
susnato
[]
### Describe the bug I tried to load a music file using `datasets.load_dataset()` from the repository - https://huggingface.co/datasets/susnato/pop2piano_real_music_test I got the following error - ``` Traceback (most recent call last): File "<stdin>", line 1, in <module> File "/home/susnato/anaconda3/en...
false
1,822,299,128
https://api.github.com/repos/huggingface/datasets/issues/6074
https://github.com/huggingface/datasets/pull/6074
6,074
Misc doc improvements
closed
3
2023-07-26T12:20:54
2023-07-27T16:16:28
2023-07-27T16:16:02
mariosasko
[]
Removes the warning about requiring to write a dataset loading script to define multiple configurations, as the README YAML can be used instead (for simple cases). Also, deletes the section about using the `BatchSampler` in `torch<=1.12.1` to speed up loading, as `torch 1.12.1` is over a year old (and `torch 2.0` has b...
true
1,822,167,804
https://api.github.com/repos/huggingface/datasets/issues/6073
https://github.com/huggingface/datasets/issues/6073
6,073
version 2.3.2: load_dataset() data_files can't include .xxxx in path
closed
1
2023-07-26T11:09:31
2023-08-29T15:53:59
2023-08-29T15:53:59
BUAAChuanWang
[]
### Describe the bug First, I cd into the workdir. Then, I just use load_dataset("json", data_files={"train":"/a/b/c/.d/train/train.json", "test":"/a/b/c/.d/train/test.json"}), which doesn't work and raises <FileNotFoundError: Unable to find '/a/b/c/.d/train/train.jsonl' at /a/b/c/.d/> After debugging, I found it is fine in version 2.1.2...
false
1,822,123,560
https://api.github.com/repos/huggingface/datasets/issues/6072
https://github.com/huggingface/datasets/pull/6072
6,072
Fix fsspec storage_options from load_dataset
closed
6
2023-07-26T10:44:23
2023-07-27T12:51:51
2023-07-27T12:42:57
lhoestq
[]
close https://github.com/huggingface/datasets/issues/6071
true
1,821,990,749
https://api.github.com/repos/huggingface/datasets/issues/6071
https://github.com/huggingface/datasets/issues/6071
6,071
storage_options provided to load_dataset not fully piping through since datasets 2.14.0
closed
2
2023-07-26T09:37:20
2023-07-27T12:42:58
2023-07-27T12:42:58
exs-avianello
[]
### Describe the bug Since the latest release of `datasets` (`2.14.0`), custom filesystem `storage_options` passed to `load_dataset()` do not seem to propagate through all the way - leading to problems if loading data files that need those options to be set. I think this is because of the new `_prepare_path_and_sto...
false
1,820,836,330
https://api.github.com/repos/huggingface/datasets/issues/6070
https://github.com/huggingface/datasets/pull/6070
6,070
Fix Quickstart notebook link
closed
3
2023-07-25T17:48:37
2023-07-25T18:19:01
2023-07-25T18:10:16
mariosasko
[]
Reported in https://github.com/huggingface/datasets/pull/5902#issuecomment-1649885621 (cc @alvarobartt)
true
1,820,831,535
https://api.github.com/repos/huggingface/datasets/issues/6069
https://github.com/huggingface/datasets/issues/6069
6,069
KeyError: dataset has no key "image"
closed
7
2023-07-25T17:45:50
2024-09-06T08:16:16
2023-07-27T12:42:17
etetteh
[]
### Describe the bug I've loaded a local image dataset with: `ds = load_dataset("imagefolder", data_dir=path-to-data)` And defined a transform to process the data, following the Datasets docs. However, I get a KeyError, indicating there's no "image" key in my dataset. When I printed out the example_batch ...
false
1,820,106,952
https://api.github.com/repos/huggingface/datasets/issues/6068
https://github.com/huggingface/datasets/pull/6068
6,068
fix tqdm lock deletion
closed
5
2023-07-25T11:17:25
2023-07-25T15:29:39
2023-07-25T15:17:50
lhoestq
[]
related to https://github.com/huggingface/datasets/issues/6066
true
1,819,919,025
https://api.github.com/repos/huggingface/datasets/issues/6067
https://github.com/huggingface/datasets/pull/6067
6,067
fix tqdm lock
closed
3
2023-07-25T09:32:16
2023-07-25T10:02:43
2023-07-25T09:54:12
lhoestq
[]
close https://github.com/huggingface/datasets/issues/6066
true
1,819,717,542
https://api.github.com/repos/huggingface/datasets/issues/6066
https://github.com/huggingface/datasets/issues/6066
6,066
AttributeError: '_tqdm_cls' object has no attribute '_lock'
closed
7
2023-07-25T07:24:36
2023-07-26T10:56:25
2023-07-26T10:56:24
codingl2k1
[]
### Describe the bug ```python File "/Users/codingl2k1/.pyenv/versions/3.11.4/lib/python3.11/site-packages/datasets/load.py", line 1034, in get_module data_files = DataFilesDict.from_patterns( ^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/Users/codingl2k1/.pyenv/versions/3.11.4/lib/python3.11/site-p...
false
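The `AttributeError: '_tqdm_cls' object has no attribute '_lock'` above (addressed by #6067 and #6068) is the classic hazard of deleting a lazily-created class attribute. A toy sketch of the guarded-deletion pattern, using a stand-in class rather than the real tqdm wrapper:

```python
import threading

class ProgressBar:
    """Toy stand-in for a tqdm-style wrapper: the shared lock is created
    lazily, so teardown must tolerate it never having been set."""

    @classmethod
    def get_lock(cls):
        if not hasattr(cls, "_lock"):
            cls._lock = threading.RLock()  # create on first use
        return cls._lock

    @classmethod
    def del_lock(cls):
        # Deleting unconditionally raises AttributeError when the lock
        # was never created; guarding with hasattr avoids that.
        if hasattr(cls, "_lock"):
            del cls._lock
```

Calling `ProgressBar.del_lock()` before any `get_lock()` is now a no-op instead of an error.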
1,819,334,932
https://api.github.com/repos/huggingface/datasets/issues/6065
https://github.com/huggingface/datasets/pull/6065
6,065
Add column type guessing from map return function
closed
5
2023-07-25T00:34:17
2023-07-26T15:13:45
2023-07-26T15:13:44
piercefreeman
[]
As discussed [here](https://github.com/huggingface/datasets/issues/5965), there are some cases where datasets is unable to automatically promote columns during mapping. The fix is to explicitly provide a `features` definition so pyarrow can configure itself with the right column types from the outset. This PR provid...
true