Column schema of the export (type and observed value range):

  id               int64         599M to 3.29B
  url              string        length 58 to 61
  html_url         string        length 46 to 51
  number           int64         1 to 7.72k
  title            string        length 1 to 290
  state            string        2 values
  comments         int64         0 to 70
  created_at       timestamp[s]  2020-04-14 10:18:02 to 2025-08-05 09:28:51
  updated_at       timestamp[s]  2020-04-27 16:04:17 to 2025-08-05 11:39:56
  closed_at        timestamp[s]  2020-04-14 12:01:40 to 2025-08-01 05:15:45
  user_login       string        length 3 to 26
  labels           list          length 0 to 4
  body             string        length 0 to 228k
  is_pull_request  bool          2 classes

The records below are instances of this schema.
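The export lists one field value per line, in a fixed 14-column order. A minimal sketch of folding such a flat stream back into one dict per record; the column names mirror the schema block above, but the parsing helper itself is hypothetical, not part of the dataset or the `datasets` library:

```python
# Reconstruct records from a flattened dataset-viewer dump.
# Assumes every record is well-formed: exactly one value per column,
# in the fixed column order below.
COLUMNS = [
    "id", "url", "html_url", "number", "title", "state", "comments",
    "created_at", "updated_at", "closed_at", "user_login", "labels",
    "body", "is_pull_request",
]

def parse_records(lines):
    """Group a flat list of field values into one dict per record."""
    records = []
    for start in range(0, len(lines) - len(COLUMNS) + 1, len(COLUMNS)):
        records.append(dict(zip(COLUMNS, lines[start:start + len(COLUMNS)])))
    return records

# First record of the dump, as it appears in the flattened export.
sample = [
    "1,668,333,316",
    "https://api.github.com/repos/huggingface/datasets/issues/5751",
    "https://github.com/huggingface/datasets/pull/5751",
    "5,751",
    "Consistent ArrayXD Python formatting + better NumPy/Pandas formatting",
    "closed",
    "4",
    "2023-04-14T14:13:59",
    "2023-04-20T14:43:20",
    "2023-04-20T14:40:34",
    "mariosasko",
    "[]",
    "Return a list of lists instead of a list of NumPy arrays ...",
    "true",
]

record = parse_records(sample)[0]
print(record["number"], record["state"])  # → 5,751 closed
```

Incomplete trailing records (such as the truncated final entry of this chunk) are simply dropped by the stepped range, which is the safest default for a cut-off export.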
#5751 (pull request, closed): Consistent ArrayXD Python formatting + better NumPy/Pandas formatting
  id: 1,668,333,316 | user: mariosasko | comments: 4 | labels: []
  api_url: https://api.github.com/repos/huggingface/datasets/issues/5751
  html_url: https://github.com/huggingface/datasets/pull/5751
  created: 2023-04-14T14:13:59 | updated: 2023-04-20T14:43:20 | closed: 2023-04-20T14:40:34
  body: Return a list of lists instead of a list of NumPy arrays when converting the variable-shaped `ArrayXD` to Python. Additionally, improve the NumPy conversion by returning a numeric NumPy array when the offsets are equal or a NumPy object array when they aren't, and allow converting the variable-shaped `ArrayXD` to Panda...

#5750 (issue, closed): Fail to create datasets from a generator when using Google Big Query
  id: 1,668,289,067 | user: ivanprado | comments: 4 | labels: []
  api_url: https://api.github.com/repos/huggingface/datasets/issues/5750
  html_url: https://github.com/huggingface/datasets/issues/5750
  created: 2023-04-14T13:50:59 | updated: 2023-04-17T12:20:43 | closed: 2023-04-17T12:20:43
  body: ### Describe the bug Creating a dataset from a generator using `Dataset.from_generator()` fails if the generator is the [Google Big Query Python client](https://cloud.google.com/python/docs/reference/bigquery/latest). The problem is that the Big Query client is not pickable. And the function `create_config_id` tries t...

#5749 (issue, closed): AttributeError: 'Version' object has no attribute 'match'
  id: 1,668,016,321 | user: gulnaz-zh | comments: 8 | labels: []
  api_url: https://api.github.com/repos/huggingface/datasets/issues/5749
  html_url: https://github.com/huggingface/datasets/issues/5749
  created: 2023-04-14T10:48:06 | updated: 2023-06-30T11:31:17 | closed: 2023-04-18T12:57:08
  body: ### Describe the bug When I run from datasets import load_dataset data = load_dataset("visual_genome", 'region_descriptions_v1.2.0') AttributeError: 'Version' object has no attribute 'match' ### Steps to reproduce the bug from datasets import load_dataset data = load_dataset("visual_genome", 'region_descripti...

#5748 (pull request, open): [BUG FIX] Issue 5739
  id: 1,667,517,024 | user: airlsyn | comments: 0 | labels: []
  api_url: https://api.github.com/repos/huggingface/datasets/issues/5748
  html_url: https://github.com/huggingface/datasets/pull/5748
  created: 2023-04-14T05:07:31 | updated: 2023-04-14T05:07:31 | closed: null
  body: A fix for https://github.com/huggingface/datasets/issues/5739

#5747 (pull request, closed): [WIP] Add Dataset.to_spark
  id: 1,667,270,412 | user: maddiedawson | comments: 0 | labels: []
  api_url: https://api.github.com/repos/huggingface/datasets/issues/5747
  html_url: https://github.com/huggingface/datasets/pull/5747
  created: 2023-04-13T23:20:03 | updated: 2024-01-08T18:31:50 | closed: 2024-01-08T18:31:50
  body: null

#5746 (pull request, closed): Fix link in docs
  id: 1,667,102,459 | user: bbbxyz | comments: 2 | labels: []
  api_url: https://api.github.com/repos/huggingface/datasets/issues/5746
  html_url: https://github.com/huggingface/datasets/pull/5746
  created: 2023-04-13T20:45:19 | updated: 2023-04-14T13:15:38 | closed: 2023-04-14T13:08:42
  body: Fixes a broken link in the use_with_pytorch docs

#5745 (pull request, open): [BUG FIX] Issue 5744
  id: 1,667,086,143 | user: keyboardAnt | comments: 3 | labels: []
  api_url: https://api.github.com/repos/huggingface/datasets/issues/5745
  html_url: https://github.com/huggingface/datasets/pull/5745
  created: 2023-04-13T20:29:55 | updated: 2023-04-21T15:22:43 | closed: null
  body: A temporal fix for https://github.com/huggingface/datasets/issues/5744.

#5744 (issue, closed): [BUG] With Pandas 2.0.0, `load_dataset` raises `TypeError: read_csv() got an unexpected keyword argument 'mangle_dupe_cols'`
  id: 1,667,076,620 | user: keyboardAnt | comments: 6 | labels: []
  api_url: https://api.github.com/repos/huggingface/datasets/issues/5744
  html_url: https://github.com/huggingface/datasets/issues/5744
  created: 2023-04-13T20:21:28 | updated: 2024-04-09T16:13:59 | closed: 2023-07-06T17:01:59
  body: The `load_dataset` function with Pandas `1.5.3` has no issue (just a FutureWarning) but crashes with Pandas `2.0.0`. For your convenience, I opened a draft Pull Request to fix it quickly: https://github.com/huggingface/datasets/pull/5745 --- * The FutureWarning mentioned above: ``` FutureWarning: the 'mangle_...
#5743 (issue, closed): dataclass.py in virtual environment is overriding the stdlib module "dataclasses"
  id: 1,666,843,832 | user: syedabdullahhassan | comments: 1 | labels: []
  api_url: https://api.github.com/repos/huggingface/datasets/issues/5743
  html_url: https://github.com/huggingface/datasets/issues/5743
  created: 2023-04-13T17:28:33 | updated: 2023-04-17T12:23:18 | closed: 2023-04-17T12:23:18
  body: ### Describe the bug "e:\Krish_naik\FSDSRegression\venv\Lib\dataclasses.py" is overriding the stdlib module "dataclasses" ### Steps to reproduce the bug module issue ### Expected behavior overriding the stdlib module "dataclasses" ### Environment info VS code

#5742 (pull request, closed): Warning specifying future change in to_tf_dataset behaviour
  id: 1,666,209,738 | user: amyeroberts | comments: 2 | labels: []
  api_url: https://api.github.com/repos/huggingface/datasets/issues/5742
  html_url: https://github.com/huggingface/datasets/pull/5742
  created: 2023-04-13T11:10:00 | updated: 2023-04-21T13:18:14 | closed: 2023-04-21T13:11:09
  body: Warning specifying future changes happening to `to_tf_dataset` behaviour when #5602 is merged in

#5741 (pull request, closed): Fix CI warnings
  id: 1,665,860,919 | user: albertvillanova | comments: 2 | labels: []
  api_url: https://api.github.com/repos/huggingface/datasets/issues/5741
  html_url: https://github.com/huggingface/datasets/pull/5741
  created: 2023-04-13T07:17:02 | updated: 2023-04-13T09:48:10 | closed: 2023-04-13T09:40:50
  body: Fix warnings in our CI tests.

#5740 (pull request, closed): Fix CI mock filesystem fixtures
  id: 1,664,132,130 | user: albertvillanova | comments: 5 | labels: []
  api_url: https://api.github.com/repos/huggingface/datasets/issues/5740
  html_url: https://github.com/huggingface/datasets/pull/5740
  created: 2023-04-12T08:52:35 | updated: 2023-04-13T11:01:24 | closed: 2023-04-13T10:54:13
  body: This PR fixes the fixtures of our CI mock filesystems. Before, we had to pass `clobber=True` to `fsspec.register_implementation` to overwrite the still present previously added "mock" filesystem. That meant that the mock filesystem fixture was not working properly, because the previously added "mock" filesystem, sho...

#5739 (issue, open): weird result during dataset split when data path starts with `/data`
  id: 1,663,762,901 | user: airlsyn | comments: 4 | labels: []
  api_url: https://api.github.com/repos/huggingface/datasets/issues/5739
  html_url: https://github.com/huggingface/datasets/issues/5739
  created: 2023-04-12T04:51:35 | updated: 2023-04-21T14:20:59 | closed: null
  body: ### Describe the bug The regex defined here https://github.com/huggingface/datasets/blob/f2607935c4e45c70c44fcb698db0363ca7ba83d4/src/datasets/utils/py_utils.py#L158 will cause a weird result during dataset split when data path starts with `/data` ### Steps to reproduce the bug 1. clone dataset into local path ...

#5738 (issue, closed): load_dataset("text","dataset.txt") loads the wrong dataset!
  id: 1,663,477,690 | user: Tylersuard | comments: 1 | labels: []
  api_url: https://api.github.com/repos/huggingface/datasets/issues/5738
  html_url: https://github.com/huggingface/datasets/issues/5738
  created: 2023-04-12T01:07:46 | updated: 2023-04-19T12:08:27 | closed: 2023-04-19T12:08:27
  body: ### Describe the bug I am trying to load my own custom text dataset using the load_dataset function. My dataset is a bunch of ordered text, think along the lines of shakespeare plays. However, after I load the dataset and I inspect it, the dataset is a table with a bunch of latitude and longitude values! What in th...

#5737 (issue, closed): ClassLabel Error
  id: 1,662,919,811 | user: mrcaelumn | comments: 2 | labels: []
  api_url: https://api.github.com/repos/huggingface/datasets/issues/5737
  html_url: https://github.com/huggingface/datasets/issues/5737
  created: 2023-04-11T17:14:13 | updated: 2023-04-13T16:49:57 | closed: 2023-04-13T16:49:57
  body: ### Describe the bug I still getting the error "call() takes 1 positional argument but 2 were given" even after ensuring that the value being passed to the label object is a single value and that the ClassLabel object has been created with the correct number of label classes ### Steps to reproduce the bug from...

#5736 (issue, open): FORCE_REDOWNLOAD raises "Directory not empty" exception on second run
  id: 1,662,286,061 | user: rcasero | comments: 3 | labels: []
  api_url: https://api.github.com/repos/huggingface/datasets/issues/5736
  html_url: https://github.com/huggingface/datasets/issues/5736
  created: 2023-04-11T11:29:15 | updated: 2023-11-30T07:16:58 | closed: null
  body: ### Describe the bug Running `load_dataset(..., download_mode=datasets.DownloadMode.FORCE_REDOWNLOAD)` twice raises a `Directory not empty` exception on the second run. ### Steps to reproduce the bug I cannot test this on datasets v2.11.0 due to #5711, but this happens in v2.10.1. 1. Set up a script `my_dataset.p...
#5735 (pull request, closed): Implement sharding on merged iterable datasets
  id: 1,662,150,903 | user: bruno-hays | comments: 11 | labels: []
  api_url: https://api.github.com/repos/huggingface/datasets/issues/5735
  html_url: https://github.com/huggingface/datasets/pull/5735
  created: 2023-04-11T10:02:25 | updated: 2023-04-27T16:39:04 | closed: 2023-04-27T16:32:09
  body: This PR allows sharding of merged iterable datasets. Merged iterable datasets with for instance the `interleave_datasets` command are comprised of multiple sub-iterable, one for each dataset that has been merged. With this PR, sharding a merged iterable will result in multiple merged datasets each comprised of sh...

#5734 (issue, closed): Remove temporary pin of fsspec
  id: 1,662,058,028 | user: albertvillanova | comments: 0 | labels: ["bug"]
  api_url: https://api.github.com/repos/huggingface/datasets/issues/5734
  html_url: https://github.com/huggingface/datasets/issues/5734
  created: 2023-04-11T09:04:17 | updated: 2023-04-11T11:04:52 | closed: 2023-04-11T11:04:52
  body: Once root cause is found and fixed, remove the temporary pin introduced by: - #5731

#5733 (pull request, closed): Unpin fsspec
  id: 1,662,039,191 | user: albertvillanova | comments: 2 | labels: []
  api_url: https://api.github.com/repos/huggingface/datasets/issues/5733
  html_url: https://github.com/huggingface/datasets/pull/5733
  created: 2023-04-11T08:52:12 | updated: 2023-04-11T11:11:45 | closed: 2023-04-11T11:04:51
  body: In `fsspec--2023.4.0` default value for clobber when registering an implementation was changed from True to False. See: - https://github.com/fsspec/filesystem_spec/pull/1237 This PR recovers previous behavior by passing clobber True when registering mock implementations. This PR also removes the temporary pin in...

#5732 (issue, closed): Enwik8 should support the standard split
  id: 1,662,020,571 | user: lucaslingle | comments: 2 | labels: ["enhancement"]
  api_url: https://api.github.com/repos/huggingface/datasets/issues/5732
  html_url: https://github.com/huggingface/datasets/issues/5732
  created: 2023-04-11T08:38:53 | updated: 2023-04-11T09:28:17 | closed: 2023-04-11T09:28:16
  body: ### Feature request The HuggingFace Datasets library currently supports two BuilderConfigs for Enwik8. One config yields individual lines as examples, while the other config yields the entire dataset as a single example. Both support only a monolithic split: it is all grouped as "train". The HuggingFace Datasets l...

#5731 (pull request, closed): Temporarily pin fsspec
  id: 1,662,012,913 | user: albertvillanova | comments: 2 | labels: []
  api_url: https://api.github.com/repos/huggingface/datasets/issues/5731
  html_url: https://github.com/huggingface/datasets/pull/5731
  created: 2023-04-11T08:33:15 | updated: 2023-04-11T08:57:45 | closed: 2023-04-11T08:47:55
  body: Fix #5730.

#5730 (issue, closed): CI is broken: ValueError: Name (mock) already in the registry and clobber is False
  id: 1,662,007,926 | user: albertvillanova | comments: 0 | labels: ["bug"]
  api_url: https://api.github.com/repos/huggingface/datasets/issues/5730
  html_url: https://github.com/huggingface/datasets/issues/5730
  created: 2023-04-11T08:29:46 | updated: 2023-04-11T08:47:56 | closed: 2023-04-11T08:47:56
  body: CI is broken for `test_py310`. See: https://github.com/huggingface/datasets/actions/runs/4665326892/jobs/8258580948 ``` =========================== short test summary info ============================ ERROR tests/test_builder.py::test_builder_with_filesystem_download_and_prepare - ValueError: Name (mock) already ...

#5729 (pull request, closed): Fix nondeterministic sharded data split order
  id: 1,661,929,923 | user: albertvillanova | comments: 3 | labels: []
  api_url: https://api.github.com/repos/huggingface/datasets/issues/5729
  html_url: https://github.com/huggingface/datasets/pull/5729
  created: 2023-04-11T07:34:20 | updated: 2023-04-26T15:12:25 | closed: 2023-04-26T15:05:12
  body: This PR makes the order of the split names deterministic. Before it was nondeterministic because we were iterating over `set` elements. Fix #5728.

#5728 (issue, closed): The order of data split names is nondeterministic
  id: 1,661,925,932 | user: albertvillanova | comments: 0 | labels: ["bug"]
  api_url: https://api.github.com/repos/huggingface/datasets/issues/5728
  html_url: https://github.com/huggingface/datasets/issues/5728
  created: 2023-04-11T07:31:25 | updated: 2023-04-26T15:05:13 | closed: 2023-04-26T15:05:13
  body: After this CI error: https://github.com/huggingface/datasets/actions/runs/4639528358/jobs/8210492953?pr=5718 ``` FAILED tests/test_data_files.py::test_get_data_files_patterns[data_file_per_split4] - AssertionError: assert ['random', 'train'] == ['train', 'random'] At index 0 diff: 'random' != 'train' Full diff:...
#5727 (issue, closed): load_dataset fails with FileNotFound error on Windows
  id: 1,661,536,363 | user: joelkowalewski | comments: 4 | labels: []
  api_url: https://api.github.com/repos/huggingface/datasets/issues/5727
  html_url: https://github.com/huggingface/datasets/issues/5727
  created: 2023-04-10T23:21:12 | updated: 2023-07-21T14:08:20 | closed: 2023-07-21T14:08:19
  body: ### Describe the bug Although I can import and run the datasets library in a Colab environment, I cannot successfully load any data on my own machine (Windows 10) despite following the install steps: (1) create conda environment (2) activate environment (3) install with: ``conda` install -c huggingface -c conda-...

#5726 (issue, closed): Fallback JSON Dataset loading does not load all values when features specified manually
  id: 1,660,944,807 | user: myluki2000 | comments: 1 | labels: []
  api_url: https://api.github.com/repos/huggingface/datasets/issues/5726
  html_url: https://github.com/huggingface/datasets/issues/5726
  created: 2023-04-10T15:22:14 | updated: 2023-04-21T06:35:28 | closed: 2023-04-21T06:35:28
  body: ### Describe the bug The fallback JSON dataset loader located here: https://github.com/huggingface/datasets/blob/1c4ec00511868bd881e84a6f7e0333648d833b8e/src/datasets/packaged_modules/json/json.py#L130-L153 does not load the values of features correctly when features are specified manually and not all features...

#5725 (issue, closed): How to limit the number of examples in dataset, for testing?
  id: 1,660,455,202 | user: ndvbd | comments: 3 | labels: []
  api_url: https://api.github.com/repos/huggingface/datasets/issues/5725
  html_url: https://github.com/huggingface/datasets/issues/5725
  created: 2023-04-10T08:41:43 | updated: 2023-04-21T06:16:24 | closed: 2023-04-21T06:16:24
  body: ### Describe the bug I am using this command: `data = load_dataset("json", data_files=data_path)` However, I want to add a parameter, to limit the number of loaded examples to be 10, for development purposes, but can't find this simple parameter. ### Steps to reproduce the bug In the description. ### Expected beh...

#5724 (issue, closed): Error after shuffling streaming IterableDatasets with downloaded dataset
  id: 1,659,938,135 | user: szxiangjn | comments: 1 | labels: []
  api_url: https://api.github.com/repos/huggingface/datasets/issues/5724
  html_url: https://github.com/huggingface/datasets/issues/5724
  created: 2023-04-09T16:58:44 | updated: 2023-04-20T20:37:30 | closed: 2023-04-20T20:37:30
  body: ### Describe the bug I downloaded the C4 dataset, and used streaming IterableDatasets to read it. Everything went normal until I used `dataset = dataset.shuffle(seed=42, buffer_size=10_000)` to shuffle the dataset. Shuffled dataset will throw the following error when it is used by `next(iter(dataset))`: ``` File "/d...

#5722 (issue, closed): Distributed Training Error on Customized Dataset
  id: 1,659,837,510 | user: wlhgtc | comments: 1 | labels: []
  api_url: https://api.github.com/repos/huggingface/datasets/issues/5722
  html_url: https://github.com/huggingface/datasets/issues/5722
  created: 2023-04-09T11:04:59 | updated: 2023-07-24T14:50:46 | closed: 2023-07-24T14:50:46
  body: Hi guys, recently I tried to use `datasets` to train a dual encoder. I finish my own datasets according to the nice [tutorial](https://huggingface.co/docs/datasets/v2.11.0/en/dataset_script) Here are my code: ```python class RetrivalDataset(datasets.GeneratorBasedBuilder): """CrossEncoder dataset.""" B...

#5721 (issue, open): Calling datasets.load_dataset("text" ...) results in a wrong split.
  id: 1,659,680,682 | user: cyrilzakka | comments: 0 | labels: []
  api_url: https://api.github.com/repos/huggingface/datasets/issues/5721
  html_url: https://github.com/huggingface/datasets/issues/5721
  created: 2023-04-08T23:55:12 | updated: 2023-04-08T23:55:12 | closed: null
  body: ### Describe the bug When creating a text dataset, the training split should have the bulk of the examples by default. Currently, testing does. ### Steps to reproduce the bug I have a folder with 18K text files in it. Each text file essentially consists in a document or article scraped from online. Calling the follo...

#5720 (issue, open): Streaming IterableDatasets do not work with torch DataLoaders
  id: 1,659,610,705 | user: jlehrer1 | comments: 10 | labels: []
  api_url: https://api.github.com/repos/huggingface/datasets/issues/5720
  html_url: https://github.com/huggingface/datasets/issues/5720
  created: 2023-04-08T18:45:48 | updated: 2025-03-19T14:06:47 | closed: null
  body: ### Describe the bug When using streaming datasets set up with train/val split using `.skip()` and `.take()`, the following error occurs when iterating over a torch dataloader: ``` File "/Users/julian/miniconda3/envs/sims/lib/python3.9/site-packages/torch/utils/data/dataloader.py", line 363, in __iter__ self....

#5719 (issue, closed): Array2D feature creates a list of list instead of a numpy array
  id: 1,659,203,222 | user: offchan42 | comments: 4 | labels: []
  api_url: https://api.github.com/repos/huggingface/datasets/issues/5719
  html_url: https://github.com/huggingface/datasets/issues/5719
  created: 2023-04-07T21:04:08 | updated: 2023-04-20T15:34:41 | closed: 2023-04-20T15:34:41
  body: ### Describe the bug I'm not sure if this is expected behavior or not. When I create a 2D array using `Array2D`, the data has list type instead of numpy array. I think it should not be the expected behavior especially when I feed a numpy array as input to the data creation function. Why is it converting my array int...
#5718 (pull request, closed): Reorder default data splits to have validation before test
  id: 1,658,958,406 | user: albertvillanova | comments: 3 | labels: []
  api_url: https://api.github.com/repos/huggingface/datasets/issues/5718
  html_url: https://github.com/huggingface/datasets/pull/5718
  created: 2023-04-07T16:01:26 | updated: 2023-04-27T14:43:13 | closed: 2023-04-27T14:35:52
  body: This PR reorders data splits, so that by default validation appears before test. The default order becomes: [train, validation, test] instead of [train, test, validation].

#5717 (issue, open): Errror when saving to disk a dataset of images
  id: 1,658,729,866 | user: jplu | comments: 22 | labels: []
  api_url: https://api.github.com/repos/huggingface/datasets/issues/5717
  html_url: https://github.com/huggingface/datasets/issues/5717
  created: 2023-04-07T11:59:17 | updated: 2025-07-13T08:27:47 | closed: null
  body: ### Describe the bug Hello! I have an issue when I try to save on disk my dataset of images. The error I get is: ``` Traceback (most recent call last): File "<stdin>", line 1, in <module> File "/home/jplu/miniconda3/envs/image-xp/lib/python3.10/site-packages/datasets/arrow_dataset.py", line 1442, in save_...

#5716 (issue, closed): Handle empty audio
  id: 1,658,613,092 | user: ben-8543 | comments: 2 | labels: []
  api_url: https://api.github.com/repos/huggingface/datasets/issues/5716
  html_url: https://github.com/huggingface/datasets/issues/5716
  created: 2023-04-07T09:51:40 | updated: 2023-09-27T17:47:08 | closed: 2023-09-27T17:47:08
  body: Some audio paths exist, but they are empty, and an error will be reported when reading the audio path.How to use the filter function to avoid the empty audio path? when a audio is empty, when do resample , it will break: `array, sampling_rate = sf.read(f) array = librosa.resample(array, orig_sr=sampling_rate, target_...

#5715 (issue, closed): Return Numpy Array (fixed length) Mode, in __get_item__, Instead of List
  id: 1,657,479,788 | user: jungbaepark | comments: 1 | labels: ["enhancement"]
  api_url: https://api.github.com/repos/huggingface/datasets/issues/5715
  html_url: https://github.com/huggingface/datasets/issues/5715
  created: 2023-04-06T13:57:48 | updated: 2023-04-20T17:16:26 | closed: 2023-04-20T17:16:26
  body: ### Feature request There are old known issues, but they can be easily forgettable problems in multiprocessing with pytorch-dataloader: Too high usage of RAM or shared-memory in pytorch when we set num workers > 1 and returning type of dataset or dataloader is "List" or "Dict". https://github.com/pytorch/pytorch...

#5714 (pull request, closed): Fix xnumpy_load for .npz files
  id: 1,657,388,033 | user: albertvillanova | comments: 2 | labels: []
  api_url: https://api.github.com/repos/huggingface/datasets/issues/5714
  html_url: https://github.com/huggingface/datasets/pull/5714
  created: 2023-04-06T13:01:45 | updated: 2023-04-07T09:23:54 | closed: 2023-04-07T09:16:57
  body: PR: - #5626 implemented support for streaming `.npy` files by using `numpy.load`. However, it introduced a bug when used with `.npz` files, within a context manager: ``` ValueError: seek of closed file ``` or in streaming mode: ``` ValueError: I/O operation on closed file. ``` This PR fixes the bug an...

#5713 (issue, closed): ArrowNotImplementedError when loading dataset from the hub
  id: 1,657,141,251 | user: jplu | comments: 2 | labels: []
  api_url: https://api.github.com/repos/huggingface/datasets/issues/5713
  html_url: https://github.com/huggingface/datasets/issues/5713
  created: 2023-04-06T10:27:22 | updated: 2023-04-06T13:06:22 | closed: 2023-04-06T13:06:21
  body: ### Describe the bug Hello, I have created a dataset by using the image loader. Once the dataset is created I try to download it and I get the error: ``` Traceback (most recent call last): File "/home/jplu/miniconda3/envs/image-xp/lib/python3.10/site-packages/datasets/builder.py", line 1860, in _prepare_split_...

#5712 (issue, closed): load_dataset in v2.11.0 raises "ValueError: seek of closed file" in np.load()
  id: 1,655,972,106 | user: rcasero | comments: 2 | labels: []
  api_url: https://api.github.com/repos/huggingface/datasets/issues/5712
  html_url: https://github.com/huggingface/datasets/issues/5712
  created: 2023-04-05T16:47:10 | updated: 2023-04-06T08:32:37 | closed: 2023-04-05T17:17:44
  body: ### Describe the bug Hi, I have some `dataset_load()` code of a custom offline dataset that works with datasets v2.10.1. ```python ds = datasets.load_dataset(path=dataset_dir, name=configuration, data_dir=dataset_dir, ...

#5711 (issue, closed): load_dataset in v2.11.0 raises "ValueError: seek of closed file" in np.load()
  id: 1,655,971,647 | user: rcasero | comments: 2 | labels: []
  api_url: https://api.github.com/repos/huggingface/datasets/issues/5711
  html_url: https://github.com/huggingface/datasets/issues/5711
  created: 2023-04-05T16:46:49 | updated: 2023-04-07T09:16:59 | closed: 2023-04-07T09:16:59
  body: ### Describe the bug Hi, I have some `dataset_load()` code of a custom offline dataset that works with datasets v2.10.1. ```python ds = datasets.load_dataset(path=dataset_dir, name=configuration, data_dir=dataset_dir, ...
#5710 (issue, closed): OSError: Memory mapping file failed: Cannot allocate memory
  id: 1,655,703,534 | user: Saibo-creator | comments: 1 | labels: []
  api_url: https://api.github.com/repos/huggingface/datasets/issues/5710
  html_url: https://github.com/huggingface/datasets/issues/5710
  created: 2023-04-05T14:11:26 | updated: 2023-04-20T17:16:40 | closed: 2023-04-20T17:16:40
  body: ### Describe the bug Hello, I have a series of datasets each of 5 GB, 600 datasets in total. So together this makes 3TB. When I trying to load all the 600 datasets into memory, I get the above error message. Is this normal because I'm hitting the max size of memory mapping of the OS? Thank you ```te...

#5709 (issue, closed): Manually dataset info made not taken into account
  id: 1,655,423,503 | user: jplu | comments: 2 | labels: []
  api_url: https://api.github.com/repos/huggingface/datasets/issues/5709
  html_url: https://github.com/huggingface/datasets/issues/5709
  created: 2023-04-05T11:15:17 | updated: 2023-04-06T08:52:20 | closed: 2023-04-06T08:52:19
  body: ### Describe the bug Hello, I'm manually building an image dataset with the `from_dict` approach. I also build the features with the `cast_features` methods. Once the dataset is created I push it on the hub, and a default `dataset_infos.json` file seems to have been automatically added to the repo in same time. Hen...

#5708 (issue, closed): Dataset sizes are in MiB instead of MB in dataset cards
  id: 1,655,023,642 | user: albertvillanova | comments: 12 | labels: ["bug", "dataset-viewer"]
  api_url: https://api.github.com/repos/huggingface/datasets/issues/5708
  html_url: https://github.com/huggingface/datasets/issues/5708
  created: 2023-04-05T06:36:03 | updated: 2023-12-21T10:20:28 | closed: 2023-12-21T10:20:27
  body: As @severo reported in an internal discussion (https://github.com/huggingface/moon-landing/issues/5929): Now we show the dataset size: - from the dataset card (in the side column) - from the datasets-server (in the viewer) But, even if the size is the same, we see a mismatch because the viewer shows MB, while t...

#5706 (issue, closed): Support categorical data types for Parquet
  id: 1,653,545,835 | user: kklemon | comments: 17 | labels: ["enhancement"]
  api_url: https://api.github.com/repos/huggingface/datasets/issues/5706
  html_url: https://github.com/huggingface/datasets/issues/5706
  created: 2023-04-04T09:45:35 | updated: 2024-06-07T12:20:43 | closed: 2024-06-07T12:20:43
  body: ### Feature request Huggingface datasets does not seem to support categorical / dictionary data types for Parquet as of now. There seems to be a `TODO` in the code for this feature but no implementation yet. Below you can find sample code to reproduce the error that is currently thrown when attempting to read a Parq...

#5705 (issue, closed): Getting next item from IterableDataset took forever.
  id: 1,653,500,383 | user: HongtaoYang | comments: 2 | labels: []
  api_url: https://api.github.com/repos/huggingface/datasets/issues/5705
  html_url: https://github.com/huggingface/datasets/issues/5705
  created: 2023-04-04T09:16:17 | updated: 2023-04-05T23:35:41 | closed: 2023-04-05T23:35:41
  body: ### Describe the bug I have a large dataset, about 500GB. The format of the dataset is parquet. I then load the dataset and try to get the first item ```python def get_one_item(): dataset = load_dataset("path/to/datafiles", split="train", cache_dir=".", streaming=True) dataset = dataset.filter(lambda...

#5704 (pull request, open): 5537 speedup load
  id: 1,653,471,356 | user: semajyllek | comments: 4 | labels: []
  api_url: https://api.github.com/repos/huggingface/datasets/issues/5704
  html_url: https://github.com/huggingface/datasets/pull/5704
  created: 2023-04-04T08:58:14 | updated: 2023-04-07T16:10:55 | closed: null
  body: I reimplemented fsspec.spec.glob() in `hffilesystem.py` as `_glob`, used it in `_resolve_single_pattern_in_dataset_repository` only, and saw a 20% speedup in times to load the config, on average. That's not much when usually this step takes only 2-3 seconds for most datasets, but in this particular case, `bigcode...

#5703 (pull request, closed): [WIP][Test, Please ignore] Investigate performance impact of using multiprocessing only
  id: 1,653,158,955 | user: hvaara | comments: 4 | labels: []
  api_url: https://api.github.com/repos/huggingface/datasets/issues/5703
  html_url: https://github.com/huggingface/datasets/pull/5703
  created: 2023-04-04T04:37:49 | updated: 2023-04-20T03:17:37 | closed: 2023-04-20T03:17:32
  body: null

#5702 (issue, closed): Is it possible or how to define a `datasets.Sequence` that could potentially be either a dict, a str, or None?
  id: 1,653,104,720 | user: gitforziio | comments: 4 | labels: ["enhancement"]
  api_url: https://api.github.com/repos/huggingface/datasets/issues/5702
  html_url: https://github.com/huggingface/datasets/issues/5702
  created: 2023-04-04T03:20:43 | updated: 2023-04-05T14:15:18 | closed: 2023-04-05T14:15:17
  body: ### Feature request Hello! Apologies if my question sounds naive: I was wondering if it’s possible, or how one would go about defining a 'datasets.Sequence' element in datasets.Features that could potentially be either a dict, a str, or None? Specifically, I’d like to define a feature for a list that contains 18...
#5701 (pull request, closed): Add Dataset.from_spark
  id: 1,652,931,399 | user: maddiedawson | comments: 21 | labels: []
  api_url: https://api.github.com/repos/huggingface/datasets/issues/5701
  html_url: https://github.com/huggingface/datasets/pull/5701
  created: 2023-04-03T23:51:29 | updated: 2023-06-16T16:39:32 | closed: 2023-04-26T15:43:39
  body: Adds static method Dataset.from_spark to create datasets from Spark DataFrames. This approach alleviates users of the need to materialize their dataframe---a common use case is that the user loads their dataset into a dataframe, uses Spark to apply some transformation to some of the columns, and then wants to train ...

#5700 (pull request, open): fix: fix wrong modification of the 'cache_file_name' -related paramet…
  id: 1,652,527,530 | user: FrancoisNoyez | comments: 7 | labels: []
  api_url: https://api.github.com/repos/huggingface/datasets/issues/5700
  html_url: https://github.com/huggingface/datasets/pull/5700
  created: 2023-04-03T18:05:26 | updated: 2023-04-06T17:17:27 | closed: null
  body: …ers values in 'train_test_split' + fix bad interaction between 'keep_in_memory' and 'cache_file_name' -related parameters (#5699)

#5699 (issue, open): Issue when wanting to split in memory a cached dataset
  id: 1,652,437,419 | user: FrancoisNoyez | comments: 2 | labels: []
  api_url: https://api.github.com/repos/huggingface/datasets/issues/5699
  html_url: https://github.com/huggingface/datasets/issues/5699
  created: 2023-04-03T17:00:07 | updated: 2024-05-15T13:12:18 | closed: null
  body: ### Describe the bug **In the 'train_test_split' method of the Dataset class** (defined datasets/arrow_dataset.py), **if 'self.cache_files' is not empty**, then, **regarding the input parameters 'train_indices_cache_file_name' and 'test_indices_cache_file_name', if they are None**, we modify them to make them not No...

#5698 (issue, open): Add Qdrant as another search index
  id: 1,652,183,611 | user: kacperlukawski | comments: 1 | labels: ["enhancement"]
  api_url: https://api.github.com/repos/huggingface/datasets/issues/5698
  html_url: https://github.com/huggingface/datasets/issues/5698
  created: 2023-04-03T14:25:19 | updated: 2023-04-11T10:28:40 | closed: null
  body: ### Feature request I'd suggest adding Qdrant (https://qdrant.tech) as another search index available, so users can directly build an index from a dataset. Currently, FAISS and ElasticSearch are only supported: https://huggingface.co/docs/datasets/faiss_es ### Motivation ElasticSearch is a keyword-based search syst...

#5697 (pull request, closed): Raise an error on missing distributed seed
  id: 1,651,812,614 | user: lhoestq | comments: 4 | labels: []
  api_url: https://api.github.com/repos/huggingface/datasets/issues/5697
  html_url: https://github.com/huggingface/datasets/pull/5697
  created: 2023-04-03T10:44:58 | updated: 2023-04-04T15:05:24 | closed: 2023-04-04T14:58:16
  body: close https://github.com/huggingface/datasets/issues/5696

#5696 (issue, closed): Shuffle a sharded iterable dataset without seed can lead to duplicate data
  id: 1,651,707,008 | user: lhoestq | comments: 0 | labels: ["bug"]
  api_url: https://api.github.com/repos/huggingface/datasets/issues/5696
  html_url: https://github.com/huggingface/datasets/issues/5696
  created: 2023-04-03T09:40:03 | updated: 2023-04-04T14:58:18 | closed: 2023-04-04T14:58:18
  body: As reported in https://github.com/huggingface/datasets/issues/5360 If `seed=None` in `.shuffle()`, shuffled datasets don't use the same shuffling seed across nodes. Because of that, the lists of shards is not shuffled the same way across nodes, and therefore some shards may be assigned to multiple nodes instead o...

#5695 (issue, closed): Loading big dataset raises pyarrow.lib.ArrowNotImplementedError
  id: 1,650,974,156 | user: amariucaitheodor | comments: 7 | labels: []
  api_url: https://api.github.com/repos/huggingface/datasets/issues/5695
  html_url: https://github.com/huggingface/datasets/issues/5695
  created: 2023-04-02T14:42:44 | updated: 2024-05-15T12:04:47 | closed: 2023-04-10T08:04:04
  body: ### Describe the bug Calling `datasets.load_dataset` to load the (publicly available) dataset `theodor1289/wit` fails with `pyarrow.lib.ArrowNotImplementedError`. ### Steps to reproduce the bug Steps to reproduce this behavior: 1. `!pip install datasets` 2. `!huggingface-cli login` 3. This step will throw the e...

#5694 (issue, open): Dataset configuration
  id: 1,650,467,793 | user: lhoestq | comments: 3 | labels: ["generic discussion"]
  api_url: https://api.github.com/repos/huggingface/datasets/issues/5694
  html_url: https://github.com/huggingface/datasets/issues/5694
  created: 2023-04-01T13:08:05 | updated: 2023-04-04T14:54:37 | closed: null
  body: Following discussions from https://github.com/huggingface/datasets/pull/5331 We could have something like `config.json` to define the configuration of a dataset. ```json { "data_dir": "data" "data_files": { "train": "train-[0-9][0-9][0-9][0-9]-of-[0-9][0-9][0-9][0-9][0-9]*.*" } } ``` ...
1,649,934,749
https://api.github.com/repos/huggingface/datasets/issues/5693
https://github.com/huggingface/datasets/pull/5693
5,693
[docs] Split pattern search order
closed
2
2023-03-31T19:51:38
2023-04-03T18:43:30
2023-04-03T18:29:58
stevhliu
[]
This PR addresses #5681 about the order of split patterns 🤗 Datasets searches for when generating dataset splits.
true
1,649,818,644
https://api.github.com/repos/huggingface/datasets/issues/5692
https://github.com/huggingface/datasets/issues/5692
5,692
pyarrow.lib.ArrowInvalid: Unable to merge: Field <field> has incompatible types
open
6
2023-03-31T18:19:40
2024-01-14T07:24:21
null
cyanic-selkie
[]
### Describe the bug When loading the dataset [wikianc-en](https://huggingface.co/datasets/cyanic-selkie/wikianc-en) which I created using [this](https://github.com/cyanic-selkie/wikianc) code, I get the following error: ``` Traceback (most recent call last): File "/home/sven/code/rector/answer-detection/trai...
false
1,649,737,526
https://api.github.com/repos/huggingface/datasets/issues/5691
https://github.com/huggingface/datasets/pull/5691
5,691
[docs] Compress data files
closed
3
2023-03-31T17:17:26
2023-04-19T13:37:32
2023-04-19T07:25:58
stevhliu
[]
This PR addresses the comments in #5687 about compressing text file extensions before uploading to the Hub. Also clarified what "too large" means based on the GitLFS [docs](https://docs.github.com/en/repositories/working-with-files/managing-large-files/about-git-large-file-storage).
true
1,648,956,349
https://api.github.com/repos/huggingface/datasets/issues/5689
https://github.com/huggingface/datasets/pull/5689
5,689
Support streaming Beam datasets from HF GCS preprocessed data
closed
4
2023-03-31T08:44:24
2023-04-12T05:57:55
2023-04-12T05:50:31
albertvillanova
[]
This PR implements streaming Apache Beam datasets that are already preprocessed by us and stored in the HF Google Cloud Storage: - natural_questions - wiki40b - wikipedia This is done by streaming from the prepared Arrow files in HF Google Cloud Storage. This will fix their corresponding dataset viewers. Relat...
true
1,649,289,883
https://api.github.com/repos/huggingface/datasets/issues/5690
https://github.com/huggingface/datasets/issues/5690
5,690
raise AttributeError(f"No {package_name} attribute {name}") AttributeError: No huggingface_hub attribute hf_api
closed
5
2023-03-31T08:22:22
2023-07-21T14:21:57
2023-07-21T14:21:57
wccccp
[ "bug" ]
### Describe the bug rta.sh Traceback (most recent call last): File "run.py", line 7, in <module> import datasets File "/home/appuser/miniconda3/envs/pt2/lib/python3.8/site-packages/datasets/__init__.py", line 37, in <module> from .builder import ArrowBasedBuilder, BeamBasedBuilder, BuilderConfig, Dat...
false
1,648,463,504
https://api.github.com/repos/huggingface/datasets/issues/5688
https://github.com/huggingface/datasets/issues/5688
5,688
Wikipedia download_and_prepare for GCS
closed
3
2023-03-30T23:43:22
2024-03-15T15:59:18
2024-03-15T15:59:18
adrianfagerland
[]
### Describe the bug I am unable to download the wikipedia dataset onto GCS. When I run the script provided, the memory first gets eaten up, then it crashes. I tried running this on a VM with 128GB RAM and all I got was two empty files: _data_builder.lock_, _data.incomplete/beam-temp-wikipedia-train-1ab2039a...
false
1,647,009,018
https://api.github.com/repos/huggingface/datasets/issues/5687
https://github.com/huggingface/datasets/issues/5687
5,687
Document to compress data files before uploading
closed
3
2023-03-30T06:41:07
2023-04-19T07:25:59
2023-04-19T07:25:59
albertvillanova
[ "documentation" ]
In our docs to [Share a dataset to the Hub](https://huggingface.co/docs/datasets/upload_dataset), we tell users to upload their data files directly, like CSV, JSON, JSON-Lines, text,... However, these extensions are not tracked by Git LFS by default, as they are not in the `.gitattributes` file. Therefore, if they are t...
false
1,646,308,228
https://api.github.com/repos/huggingface/datasets/issues/5686
https://github.com/huggingface/datasets/pull/5686
5,686
set dev version
closed
3
2023-03-29T18:24:13
2023-03-29T18:33:49
2023-03-29T18:24:22
lhoestq
[]
null
true
1,646,048,667
https://api.github.com/repos/huggingface/datasets/issues/5685
https://github.com/huggingface/datasets/issues/5685
5,685
Broken Image render on the hub website
closed
3
2023-03-29T15:25:30
2023-03-30T07:54:25
2023-03-30T07:54:25
FrancescoSaverioZuppichini
[]
### Describe the bug Hi :wave: Not sure if this is the right place to ask, but I am trying to load a huge number of datasets on the hub (:partying_face: ) but I am facing a little issue with the `image` type ![image](https://user-images.githubusercontent.com/15908060/228587875-427a37f1-3a31-4e17-8bbe-0f75900391...
false
1,646,013,226
https://api.github.com/repos/huggingface/datasets/issues/5684
https://github.com/huggingface/datasets/pull/5684
5,684
Release: 2.11.0
closed
5
2023-03-29T15:06:07
2023-03-29T18:30:34
2023-03-29T18:15:54
lhoestq
[]
null
true
1,646,001,197
https://api.github.com/repos/huggingface/datasets/issues/5683
https://github.com/huggingface/datasets/pull/5683
5,683
Fix verification_mode when ignore_verifications is passed
closed
2
2023-03-29T15:00:50
2023-03-29T17:36:06
2023-03-29T17:28:57
albertvillanova
[]
This PR fixes the values assigned to `verification_mode` when passing `ignore_verifications` to `load_dataset`. Related to: - #5303 Fix #5682.
true
1,646,000,571
https://api.github.com/repos/huggingface/datasets/issues/5682
https://github.com/huggingface/datasets/issues/5682
5,682
ValueError when passing ignore_verifications
closed
0
2023-03-29T15:00:30
2023-03-29T17:28:58
2023-03-29T17:28:58
albertvillanova
[ "bug" ]
When passing `ignore_verifications=True` to `load_dataset`, we get a ValueError: ``` ValueError: 'none' is not a valid VerificationMode ```
false
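The fix in #5683 maps the deprecated `ignore_verifications` boolean onto a valid verification mode instead of the invalid string "none". A minimal sketch of that mapping, assuming enum names that mirror `datasets.VerificationMode` (this is an illustration, not the actual implementation):

```python
from enum import Enum


class VerificationMode(Enum):
    # Member names mirror the modes mentioned in the issue; the real enum
    # lives inside the `datasets` library.
    NO_CHECKS = "no_checks"
    BASIC_CHECKS = "basic_checks"
    ALL_CHECKS = "all_checks"


def resolve_verification_mode(ignore_verifications: bool) -> VerificationMode:
    # Passing the literal string "none" raised the ValueError above; the
    # deprecated boolean must resolve to an existing enum member instead.
    return VerificationMode.NO_CHECKS if ignore_verifications else VerificationMode.ALL_CHECKS
```

With this mapping, `ignore_verifications=True` resolves to `NO_CHECKS` rather than an invalid mode string.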
1,645,630,784
https://api.github.com/repos/huggingface/datasets/issues/5681
https://github.com/huggingface/datasets/issues/5681
5,681
Add information about patterns search order to the doc about structuring repo
closed
2
2023-03-29T11:44:49
2023-04-03T18:31:11
2023-04-03T18:31:11
polinaeterna
[ "documentation" ]
Following [this](https://github.com/huggingface/datasets/issues/5650) issue I think we should add a note about the order of patterns that is used to find splits, see [my comment](https://github.com/huggingface/datasets/issues/5650#issuecomment-1488412527). Also we should reference this page in pages about packaged load...
false
1,645,430,103
https://api.github.com/repos/huggingface/datasets/issues/5680
https://github.com/huggingface/datasets/pull/5680
5,680
Fix a description error for interleave_datasets.
closed
3
2023-03-29T09:50:23
2023-03-30T13:14:19
2023-03-30T13:07:18
QizhiPei
[]
There is a description mistake in the docstring of `interleave_datasets` with the "all_exhausted" stopping_strategy. ``` python d1 = Dataset.from_dict({"a": [0, 1, 2]}) d2 = Dataset.from_dict({"a": [10, 11, 12, 13]}) d3 = Dataset.from_dict({"a": [20, 21, 22, 23, 24]}) dataset = interleave_datasets([d1, d2, d3], stopping...
true
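For reference, the "all_exhausted" behavior the docstring describes (alternate over the datasets round-robin, restarting shorter ones until every dataset has been seen in full) can be sketched in plain Python, independent of the `datasets` implementation:

```python
from itertools import cycle


def interleave_all_exhausted(lists):
    # Alternate over the inputs round-robin, restarting exhausted ones
    # (oversampling), and stop once every input has been seen in full.
    iters = [cycle(lst) for lst in lists]
    counts = [0] * len(lists)
    out, i = [], 0
    while not all(c >= len(lst) for c, lst in zip(counts, lists)):
        out.append(next(iters[i]))
        counts[i] += 1
        i = (i + 1) % len(lists)
    return out
```

With the three example datasets above, this sketch yields `[0, 10, 20, 1, 11, 21, 2, 12, 22, 0, 13, 23, 1, 10, 24]`: `d1` and `d2` are cycled again until `d3`, the longest, has been fully consumed.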
1,645,184,622
https://api.github.com/repos/huggingface/datasets/issues/5679
https://github.com/huggingface/datasets/issues/5679
5,679
Allow load_dataset to take a working dir for intermediate data
open
4
2023-03-29T07:21:09
2023-04-12T22:30:25
null
lu-wang-dl
[ "enhancement" ]
### Feature request As a user, I can set a working dir for intermediate data creation. The processed files will be moved to the cache dir, like ``` load_dataset(…, working_dir="/temp/dir", cache_dir="/cloud_dir") ``` ### Motivation This will help the use case for using datasets with cloud storage as cache. It wi...
false
1,645,018,359
https://api.github.com/repos/huggingface/datasets/issues/5678
https://github.com/huggingface/datasets/issues/5678
5,678
Add support to create a Dataset from spark dataframe
closed
5
2023-03-29T04:36:28
2024-08-27T14:43:19
2023-07-21T14:15:38
lu-wang-dl
[ "enhancement" ]
### Feature request Add a new API `Dataset.from_spark` to create a Dataset from a Spark DataFrame. ### Motivation Spark is a distributed computing framework that can handle large datasets. By supporting loading Spark DataFrames directly into Hugging Face Datasets, we can take advantage of Spark to process t...
false
1,644,828,606
https://api.github.com/repos/huggingface/datasets/issues/5677
https://github.com/huggingface/datasets/issues/5677
5,677
Dataset.map() crashes when any column contains more than 1000 empty dictionaries
closed
0
2023-03-29T00:01:31
2023-07-07T14:01:14
2023-07-07T14:01:14
mtoles
[]
### Describe the bug `Dataset.map()` crashes any time any column contains more than `writer_batch_size` (default 1000) empty dictionaries, regardless of whether the column is being operated on. The error does not occur if the dictionaries are non-empty. ### Steps to reproduce the bug Example: ``` import datasets...
false
1,641,763,478
https://api.github.com/repos/huggingface/datasets/issues/5675
https://github.com/huggingface/datasets/issues/5675
5,675
Filter datasets by language code
closed
4
2023-03-27T09:42:28
2023-03-30T08:08:15
2023-03-30T08:08:15
named-entity
[]
Hi! I use the language search field on https://huggingface.co/datasets However, some of the datasets tagged by ISO language code are not accessible by this search form. For example, [myv_ru_2022](https://huggingface.co/datasets/slone/myv_ru_2022) has the `myv` language tag but it is not included in the Languages search fo...
false
1,641,084,105
https://api.github.com/repos/huggingface/datasets/issues/5674
https://github.com/huggingface/datasets/issues/5674
5,674
Stored XSS
closed
1
2023-03-26T20:55:58
2024-04-30T22:56:41
2023-03-27T21:01:55
Fadavvi
[]
x
false
1,641,066,352
https://api.github.com/repos/huggingface/datasets/issues/5673
https://github.com/huggingface/datasets/pull/5673
5,673
Pass down storage options
closed
5
2023-03-26T20:09:37
2023-03-28T15:03:38
2023-03-28T14:54:17
dwyatte
[]
Remove implementation-specific kwargs from `file_utils.fsspec_get` and `file_utils.fsspec_head`, instead allowing them to be passed down via `storage_options`. This fixes an issue where s3fs did not recognize a timeout arg as well as fixes an issue mentioned in https://github.com/huggingface/datasets/issues/5281 by all...
true
1,641,005,322
https://api.github.com/repos/huggingface/datasets/issues/5672
https://github.com/huggingface/datasets/issues/5672
5,672
Pushing dataset to hub crash
closed
3
2023-03-26T17:42:13
2023-03-30T08:11:05
2023-03-30T08:11:05
tzvc
[]
### Describe the bug Uploading a dataset with `push_to_hub()` fails without error description. ### Steps to reproduce the bug Hey there, I've built a image dataset of 100k images + text pair as described here https://huggingface.co/docs/datasets/image_dataset#imagefolder Now I'm trying to push it to the hub b...
false
1,640,840,012
https://api.github.com/repos/huggingface/datasets/issues/5671
https://github.com/huggingface/datasets/issues/5671
5,671
How to use `load_dataset('glue', 'cola')`
closed
2
2023-03-26T09:40:34
2023-03-28T07:43:44
2023-03-28T07:43:43
makinzm
[]
### Describe the bug I'm new to using HuggingFace datasets but I cannot use `load_dataset('glue', 'cola')`. - I was stuck on the following problem: ```python from datasets import load_dataset cola_dataset = load_dataset('glue', 'cola') ------------------------------------------------------------------------...
false
1,640,607,045
https://api.github.com/repos/huggingface/datasets/issues/5670
https://github.com/huggingface/datasets/issues/5670
5,670
Unable to load multi class classification datasets
closed
2
2023-03-25T18:06:15
2023-03-27T22:54:56
2023-03-27T22:54:56
ysahil97
[]
### Describe the bug I've been playing around with the huggingface library, mostly with `datasets`, and wanted to download the multi-class classification datasets to fine-tune BERT on this task ([link](https://huggingface.co/docs/transformers/training#train-with-pytorch-trainer)). While loading the dataset, I'm getting...
false
1,638,070,046
https://api.github.com/repos/huggingface/datasets/issues/5669
https://github.com/huggingface/datasets/issues/5669
5,669
Almost identical datasets, huge performance difference
open
7
2023-03-23T18:20:20
2023-04-09T18:56:23
null
eli-osherovich
[]
### Describe the bug I am struggling to understand the (huge) performance difference between two datasets that are almost identical. ### Steps to reproduce the bug # Fast (normal) dataset speed: ```python import cv2 from datasets import load_dataset from torch.utils.data import DataLoader dataset = load_dataset(...
false
1,638,018,598
https://api.github.com/repos/huggingface/datasets/issues/5668
https://github.com/huggingface/datasets/pull/5668
5,668
Support for downloading only provided split
open
2
2023-03-23T17:53:39
2023-03-24T06:43:14
null
polinaeterna
[]
We can pass split to `_split_generators()`. But I'm not sure if it's possible to solve cache issues, mostly with `dataset_info.json`
true
1,637,789,361
https://api.github.com/repos/huggingface/datasets/issues/5667
https://github.com/huggingface/datasets/pull/5667
5,667
Jax requires jaxlib
closed
6
2023-03-23T15:41:09
2023-03-23T16:23:11
2023-03-23T16:14:52
lhoestq
[]
close https://github.com/huggingface/datasets/issues/5666
true
1,637,675,062
https://api.github.com/repos/huggingface/datasets/issues/5666
https://github.com/huggingface/datasets/issues/5666
5,666
Support tensorflow 2.12.0 in CI
closed
0
2023-03-23T14:37:51
2023-03-23T16:14:54
2023-03-23T16:14:54
albertvillanova
[ "enhancement" ]
Once we find out the root cause of: - #5663 we should revert the temporary pin on tensorflow introduced by: - #5664
false
1,637,193,648
https://api.github.com/repos/huggingface/datasets/issues/5665
https://github.com/huggingface/datasets/issues/5665
5,665
Feature request: IterableDataset.push_to_hub
closed
13
2023-03-23T09:53:04
2025-06-06T16:13:22
2025-06-06T16:12:36
NielsRogge
[ "enhancement" ]
### Feature request It'd be great to have a lazy push to hub, similar to the lazy loading we have with `IterableDataset`. Suppose you'd like to filter [LAION](https://huggingface.co/datasets/laion/laion400m) based on certain conditions, but as LAION doesn't fit into your disk, you'd like to leverage streaming: `...
false
1,637,192,684
https://api.github.com/repos/huggingface/datasets/issues/5664
https://github.com/huggingface/datasets/pull/5664
5,664
Fix CI by temporarily pinning tensorflow < 2.12.0
closed
2
2023-03-23T09:52:26
2023-03-23T10:17:11
2023-03-23T10:09:54
albertvillanova
[]
As a hotfix for our CI, temporarily pin `tensorflow` upper version: - In Python 3.10, tensorflow-2.12.0 also installs `jax` Fix #5663 Until root cause is fixed.
true
1,637,173,248
https://api.github.com/repos/huggingface/datasets/issues/5663
https://github.com/huggingface/datasets/issues/5663
5,663
CI is broken: ModuleNotFoundError: jax requires jaxlib to be installed
closed
0
2023-03-23T09:39:43
2023-03-23T10:09:55
2023-03-23T10:09:55
albertvillanova
[ "bug" ]
CI test_py310 is broken: see https://github.com/huggingface/datasets/actions/runs/4498945505/jobs/7916194236?pr=5662 ``` FAILED tests/test_arrow_dataset.py::BaseDatasetTest::test_map_jax_in_memory - ModuleNotFoundError: jax requires jaxlib to be installed. See https://github.com/google/jax#installation for installati...
false
1,637,140,813
https://api.github.com/repos/huggingface/datasets/issues/5662
https://github.com/huggingface/datasets/pull/5662
5,662
Fix unnecessary dict comprehension
closed
3
2023-03-23T09:18:58
2023-03-23T09:46:59
2023-03-23T09:37:49
albertvillanova
[]
After ruff-0.0.258 release, the C416 rule was updated with unnecessary dict comprehensions. See: - https://github.com/charliermarsh/ruff/releases/tag/v0.0.258 - https://github.com/charliermarsh/ruff/pull/3605 This PR fixes one unnecessary dict comprehension in our code: no need to unpack and re-pack the tuple valu...
true
1,637,129,445
https://api.github.com/repos/huggingface/datasets/issues/5661
https://github.com/huggingface/datasets/issues/5661
5,661
CI is broken: Unnecessary `dict` comprehension
closed
0
2023-03-23T09:13:01
2023-03-23T09:37:51
2023-03-23T09:37:51
albertvillanova
[ "bug" ]
CI check_code_quality is broken: ``` src/datasets/arrow_dataset.py:3267:35: C416 [*] Unnecessary `dict` comprehension (rewrite using `dict()`) Found 1 error. ```
false
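The C416 rule flags comprehensions that unpack a key/value pair only to repack it unchanged; the flagged pattern and its suggested rewrite look like this:

```python
pairs = [("a", 1), ("b", 2)]

# Flagged by ruff C416: the tuple is unpacked and immediately repacked.
redundant = {k: v for k, v in pairs}

# Equivalent rewrite suggested by the rule:
fixed = dict(pairs)

assert redundant == fixed
```

The comprehension form is only needed when the keys or values are actually transformed; a straight repack is what `dict()` already does.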
1,635,543,646
https://api.github.com/repos/huggingface/datasets/issues/5660
https://github.com/huggingface/datasets/issues/5660
5,660
integration with imbalanced-learn
closed
1
2023-03-22T11:05:17
2023-07-06T18:10:15
2023-07-06T18:10:15
tansaku
[ "enhancement", "wontfix" ]
### Feature request Wouldn't it be great if the various class balancing operations from imbalanced-learn were available as part of datasets? ### Motivation I'm trying to use imbalanced-learn to balance a dataset, but it's not clear how to get the two to interoperate - what would be great would be some examples. I'v...
false
1,635,447,540
https://api.github.com/repos/huggingface/datasets/issues/5659
https://github.com/huggingface/datasets/issues/5659
5,659
[Audio] Soundfile/libsndfile requirements too stringent for decoding mp3 files
closed
13
2023-03-22T10:07:33
2024-07-12T01:35:01
2023-04-07T08:51:28
sanchit-gandhi
[]
### Describe the bug I'm encountering several issues trying to load mp3 audio files using `datasets` on a TPU v4. The PR https://github.com/huggingface/datasets/pull/5573 updated the audio loading logic to rely solely on the `soundfile`/`libsndfile` libraries for loading audio samples, regardless of their file t...
false
1,634,867,204
https://api.github.com/repos/huggingface/datasets/issues/5658
https://github.com/huggingface/datasets/pull/5658
5,658
docs: Update num_shards docs to mention num_proc on Dataset and DatasetDict
closed
2
2023-03-22T00:12:18
2023-03-24T16:43:34
2023-03-24T16:36:21
connor-henderson
[]
Closes #5653 @mariosasko
true
1,634,156,563
https://api.github.com/repos/huggingface/datasets/issues/5656
https://github.com/huggingface/datasets/pull/5656
5,656
Fix `fsspec.open` when using an HTTP proxy
closed
2
2023-03-21T15:23:29
2023-03-23T14:14:50
2023-03-23T13:15:46
bryant1410
[]
Most HTTP(S) downloads from this library support proxy automatically by reading the `HTTP_PROXY` environment variable (et al.) because `requests` is widely used. However, in some parts of the code, `fsspec` is used, which in turn uses `aiohttp` for HTTP(S) requests (as opposed to `requests`), which in turn doesn't supp...
true
1,634,030,017
https://api.github.com/repos/huggingface/datasets/issues/5655
https://github.com/huggingface/datasets/pull/5655
5,655
Improve features decoding in to_iterable_dataset
closed
4
2023-03-21T14:18:09
2023-03-23T13:19:27
2023-03-23T13:12:25
lhoestq
[]
Following discussion at https://github.com/huggingface/datasets/pull/5589 Right now `to_iterable_dataset` on images/audio hurts iterable dataset performance a lot (e.g. x4 slower because it encodes+decodes images/audios unnecessarily). I fixed it by providing a generator that yields undecoded examples
true
1,633,523,705
https://api.github.com/repos/huggingface/datasets/issues/5654
https://github.com/huggingface/datasets/issues/5654
5,654
Offset overflow when executing Dataset.map
open
2
2023-03-21T09:33:27
2023-03-21T10:32:07
null
jan-pair
[]
### Describe the bug Hi, I'm trying to use `.map` method to cache multiple random crops from the image to speed up data processing during training, as the image size is too big. The map function executes all iterations, and then returns the following error: ```bash Traceback (most recent call last): ...
false
1,633,254,159
https://api.github.com/repos/huggingface/datasets/issues/5653
https://github.com/huggingface/datasets/issues/5653
5,653
Doc: save_to_disk, `num_proc` will affect `num_shards`, but it's not documented
closed
1
2023-03-21T05:25:35
2023-03-24T16:36:23
2023-03-24T16:36:23
RmZeta2718
[ "documentation", "good first issue" ]
### Describe the bug [`num_proc`](https://huggingface.co/docs/datasets/main/en/package_reference/main_classes#datasets.DatasetDict.save_to_disk.num_proc) will affect `num_shards`, but it's not documented ### Steps to reproduce the bug Nothing to reproduce ### Expected behavior [document of `num_shards`](https://...
false
1,632,546,073
https://api.github.com/repos/huggingface/datasets/issues/5652
https://github.com/huggingface/datasets/pull/5652
5,652
Copy features
closed
7
2023-03-20T17:17:23
2023-03-23T13:19:19
2023-03-23T13:12:08
lhoestq
[]
Some users (even internally at HF) are doing ```python dset_features = dset.features dset_features.pop(col_to_remove) dset = dset.map(..., features=dset_features) ``` Right now this causes issues because it modifies the features dict in place before the map. In this PR I modified `dset.features` to return a ...
true
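The copy-on-access fix described above can be illustrated with a toy stand-in (this `Dataset` class is an illustration of the pattern, not the real `datasets` code):

```python
import copy


class Dataset:
    """Toy stand-in: `features` returns a copy, so callers can pop columns
    from it without mutating the dataset's own schema in place."""

    def __init__(self, features):
        self._features = features

    @property
    def features(self):
        # Returning a deep copy means the user pattern from the PR
        # description no longer corrupts the dataset before the map.
        return copy.deepcopy(self._features)


ds = Dataset({"a": "int64", "b": "string"})
feats = ds.features
feats.pop("b")  # safe: only the caller's copy is modified
```

After the pop, `ds.features` still contains column `"b"`; only `feats` lost it.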
1,631,967,509
https://api.github.com/repos/huggingface/datasets/issues/5651
https://github.com/huggingface/datasets/issues/5651
5,651
expanduser in save_to_disk
closed
5
2023-03-20T12:02:18
2023-10-27T14:04:37
2023-10-27T14:04:37
RmZeta2718
[ "good first issue" ]
### Describe the bug save_to_disk() does not expand `~` 1. `dataset = load_dataset("any dataset")` 2. `dataset.save_to_disk("~/data")` 3. a folder named "~" is created in the current folder 4. FileNotFoundError is raised, because the expanded path does not exist (`/home/<user>/data`) related issue https://github....
false
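A minimal sketch of the requested fix, expanding `~` before any directory is created (`normalize_save_path` is a hypothetical helper name, not the actual `datasets` code):

```python
import os


def normalize_save_path(path: str) -> str:
    # Expand a leading "~" to the user's home directory; without this,
    # a literal folder named "~" ends up in the current working directory.
    return os.path.expanduser(path)
```

Applied at the top of `save_to_disk()`, this turns `"~/data"` into `/home/<user>/data` before the filesystem layer ever sees it.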
1,630,336,919
https://api.github.com/repos/huggingface/datasets/issues/5650
https://github.com/huggingface/datasets/issues/5650
5,650
load_dataset can't work correct with my image data
closed
21
2023-03-18T13:59:13
2023-07-24T14:13:02
2023-07-24T14:13:01
WiNE-iNEFF
[]
I have about 20000 images in my folder, divided into 4 folders with class names. When I use load_dataset("my_folder_name", split="train"), this function creates a dataset in which there are only 4 images; the remaining 19000 images were not added. I do not understand what the problem is. Tried converting imag...
false
1,630,173,460
https://api.github.com/repos/huggingface/datasets/issues/5649
https://github.com/huggingface/datasets/issues/5649
5,649
The index column created with .to_sql() is dependent on the batch_size when writing
closed
2
2023-03-18T05:25:17
2023-06-17T07:01:57
2023-06-17T07:01:57
lsb
[]
### Describe the bug It seems like the "index" column is designed to be unique? The values are only unique per batch. The SQL index is not a unique index. This can be a problem, for instance, when building a faiss index on a dataset and then trying to match up ids with a sql export. ### Steps to reproduce the ...
false
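The restart-per-batch behavior, and the obvious fix of carrying a global offset across batches, can be illustrated without `datasets` at all:

```python
rows = list(range(5))
batch_size = 2
batches = [rows[i:i + batch_size] for i in range(0, len(rows), batch_size)]

# Per-batch enumeration restarts the "index" column at 0 in every batch,
# producing the duplicated values described above:
per_batch = [(i, row) for batch in batches for i, row in enumerate(batch)]

# Carrying a global offset across batches keeps the index unique:
global_index = [(offset + i, row)
                for offset, batch in zip(range(0, len(rows), batch_size), batches)
                for i, row in enumerate(batch)]
```

With `batch_size=2`, `per_batch` repeats indices 0 and 1 in every batch, while `global_index` assigns each row a distinct value, suitable for joining against a faiss index.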
1,629,253,719
https://api.github.com/repos/huggingface/datasets/issues/5648
https://github.com/huggingface/datasets/issues/5648
5,648
flatten_indices doesn't work with pandas format
open
1
2023-03-17T12:44:25
2023-03-21T13:12:03
null
alialamiidrissi
[ "bug" ]
### Describe the bug Hi, I noticed that `flatten_indices` throws an error when the batch format is `pandas`. This is probably due to the fact that flatten_indices uses map internally which doesn't accept dataframes as the transformation function output ### Steps to reproduce the bug tabular_data = pd.DataFrame(np.r...
false