A dump of GitHub issues and pull requests from the `huggingface/datasets` repository, one row per issue. Column schema, with the ranges reported by the dataset viewer:

| column | dtype | observed range / values |
| --- | --- | --- |
| id | int64 | 599M to 3.29B |
| url | string | lengths 58-61 |
| html_url | string | lengths 46-51 |
| number | int64 | 1 to 7.72k |
| title | string | lengths 1-290 |
| state | string | 2 classes (`open`, `closed`) |
| comments | int64 | 0 to 70 |
| created_at | timestamp[s] | 2020-04-14 10:18:02 to 2025-08-05 09:28:51 |
| updated_at | timestamp[s] | 2020-04-27 16:04:17 to 2025-08-05 11:39:56 |
| closed_at | timestamp[s] | 2020-04-14 12:01:40 to 2025-08-01 05:15:45 |
| user_login | string | lengths 3-26 |
| labels | list | lengths 0-4 |
| body | string | lengths 0-228k |
| is_pull_request | bool | 2 classes (`true`, `false`) |
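As a quick illustration (not part of the original dump), a dataset with this exact schema can be loaded and sliced with the `datasets` library. This is a minimal sketch; the repository id below is a hypothetical placeholder, so substitute the actual Hub dataset this dump comes from.

```python
from datasets import load_dataset

# Hypothetical repo id -- substitute the actual Hub dataset behind this dump.
ds = load_dataset("user/datasets-github-issues", split="train")

# Keep only open issues (excluding pull requests), using the columns above.
open_issues = ds.filter(
    lambda row: row["state"] == "open" and not row["is_pull_request"]
)
print(open_issues.num_rows, "open issues")
print(open_issues[0]["title"])
```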
The 100 rows below are listed newest first. The `url` and `html_url` values are not repeated per row because they follow a fixed pattern: `url` is `https://api.github.com/repos/huggingface/datasets/issues/{number}`, and `html_url` is `https://github.com/huggingface/datasets/pull/{number}` for pull requests or `https://github.com/huggingface/datasets/issues/{number}` for plain issues (a small helper to rebuild them appears after the listing). Rows with no "labels" note have an empty label list; open rows have no `closed_at`. Bodies are shown as truncated by the source ("..." marks the cut).

- **#6684** Improve error message for gated datasets on load (PR, closed, 7 comments, by lewtun). id 2,144,092,388; created 2024-02-20T10:51:27; updated 2024-02-20T15:40:52; closed 2024-02-20T15:33:56.
  Body: Internal Slack discussion: https://huggingface.slack.com/archives/C02V51Q3800/p1708424971135029
- **#6683** Fix imagefolder dataset url (PR, closed, 2 comments, by mariosasko). id 2,142,751,955; created 2024-02-19T16:26:51; updated 2024-02-19T17:24:25; closed 2024-02-19T17:18:10.
  Body: null
- **#6682** Update GitHub Actions to Node 20 (PR, closed, 2 comments, by albertvillanova). id 2,142,000,800; created 2024-02-19T10:10:50; updated 2024-02-28T07:02:40; closed 2024-02-28T06:56:34.
  Body: Update GitHub Actions to Node 20. Fix #6679.
- **#6681** Update release instructions (PR, closed, 2 comments, by albertvillanova, labels: maintenance). id 2,141,985,239; created 2024-02-19T10:03:08; updated 2024-02-28T07:23:49; closed 2024-02-28T07:17:22.
  Body: Update release instructions.
- **#6680** Set dev version (PR, closed, 2 comments, by albertvillanova). id 2,141,979,527; created 2024-02-19T10:00:31; updated 2024-02-19T10:06:43; closed 2024-02-19T10:00:40.
  Body: null
- **#6679** Node.js 16 GitHub Actions are deprecated (issue, closed, 0 comments, by albertvillanova, labels: maintenance). id 2,141,953,981; created 2024-02-19T09:47:37; updated 2024-02-28T06:56:35; closed 2024-02-28T06:56:35.
  Body: `Node.js` 16 GitHub Actions are deprecated. See: https://github.blog/changelog/2023-09-22-github-actions-transitioning-from-node-16-to-node-20/ We should update them to Node 20. See warnings in our CI, e.g.: https://github.com/huggingface/datasets/actions/runs/7957295009?pr=6678 > Node.js 16 actions are deprecat...
- **#6678** Release: 2.17.1 (PR, closed, 2 comments, by albertvillanova). id 2,141,902,154; created 2024-02-19T09:24:29; updated 2024-02-19T10:03:00; closed 2024-02-19T09:56:52.
  Body: null
- **#6677** Pass through information about location of cache directory. (PR, closed, 2 comments, by stridge-cruxml). id 2,141,244,167; created 2024-02-18T23:48:57; updated 2024-02-28T18:57:39; closed 2024-02-28T18:51:15.
  Body: If cache directory is set, information is not passed through. Pass download config in as an arg too.
- **#6676** Can't Read List of JSON Files Properly (issue, open, 3 comments, by lordsoffallen). id 2,140,648,619; created 2024-02-17T22:58:15; updated 2024-03-02T20:47:22.
  Body: ### Describe the bug Trying to read a bunch of JSON files into Dataset class but default approach doesn't work. I don't get why it works when I read it one by one but not when I pass as a list :man_shrugging: The code fails with ``` ArrowInvalid: JSON parse error: Invalid value. in row 0 UnicodeDecodeError...
- **#6675** Allow image model (color conversion) to be specified as part of datasets Image() decode (issue, closed, 1 comment, by rwightman, labels: enhancement). id 2,139,640,381; created 2024-02-16T23:43:20; updated 2024-03-18T15:41:34; closed 2024-03-18T15:41:34.
  Body: ### Feature request Typical torchvision / torch Datasets in image applications apply color conversion in the Dataset portion of the code as part of image decode, separately from the image transform stack. This is true for PIL.Image where convert is usually called in dataset, for native torchvision https://pytorch.or...
- **#6674** Depprcated Overview.ipynb Link to new Quickstart Notebook invalid (issue, closed, 1 comment, by Codeblockz). id 2,139,595,576; created 2024-02-16T22:51:35; updated 2024-02-25T18:48:09; closed 2024-02-25T18:48:09.
  Body: ### Describe the bug For the dreprecated notebook found [here](https://github.com/huggingface/datasets/blob/main/notebooks/Overview.ipynb). The link to the new notebook is broken. ### Steps to reproduce the bug Click the [Quickstart notebook](https://github.com/huggingface/notebooks/blob/main/datasets_doc/quicksta...
- **#6673** IterableDataset `set_epoch` is ignored when DataLoader `persistent_workers=True` (issue, closed, 0 comments, by rwightman, labels: bug, streaming). id 2,139,522,827; created 2024-02-16T21:38:12; updated 2024-07-01T17:45:31; closed 2024-07-01T17:45:31.
  Body: ### Describe the bug When persistent workers are enabled, the epoch that's set via the IterableDataset instance held by the training process is ignored by the workers as they are disconnected across processes. PyTorch samplers for non-iterable datasets have a mechanism to sync this, datasets.IterableDataset does ...
- **#6672** Remove deprecated verbose parameter from CSV builder (PR, closed, 3 comments, by albertvillanova). id 2,138,732,288; created 2024-02-16T14:26:21; updated 2024-02-19T09:26:34; closed 2024-02-19T09:20:22.
  Body: Remove deprecated `verbose` parameter from CSV builder. Note that the `verbose` parameter is deprecated since pandas 2.2.0. See: - https://github.com/pandas-dev/pandas/pull/56556 - https://github.com/pandas-dev/pandas/pull/57450 Fix #6671.
- **#6671** CSV builder raises deprecation warning on verbose parameter (issue, closed, 0 comments, by albertvillanova). id 2,138,727,870; created 2024-02-16T14:23:46; updated 2024-02-19T09:20:23; closed 2024-02-19T09:20:23.
  Body: CSV builder raises a deprecation warning on `verbose` parameter: ``` FutureWarning: The 'verbose' keyword in pd.read_csv is deprecated and will be removed in a future version. ``` See: - https://github.com/pandas-dev/pandas/pull/56556 - https://github.com/pandas-dev/pandas/pull/57450
- **#6670** ValueError (issue, closed, 2 comments, by prashanth19bolukonda). id 2,138,372,958; created 2024-02-16T11:05:17; updated 2024-02-17T04:26:34; closed 2024-02-16T14:43:53.
  Body: ### Describe the bug ValueError Traceback (most recent call last) [<ipython-input-11-9b99bc80ec23>](https://localhost:8080/#) in <cell line: 11>() 9 import numpy as np 10 import matplotlib.pyplot as plt ---> 11 from datasets import DatasetDict, Dataset 12 from transf...
- **#6669** attribute error when writing trainer.train() (issue, closed, 2 comments, by prashanth19bolukonda). id 2,138,322,662; created 2024-02-16T10:40:49; updated 2024-03-01T10:58:00; closed 2024-02-29T17:25:17.
  Body: ### Describe the bug AttributeError Traceback (most recent call last) Cell In[39], line 2 1 # Start the training process ----> 2 trainer.train() File /opt/conda/lib/python3.10/site-packages/transformers/trainer.py:1539, in Trainer.train(self, resume_from_checkpoint, trial, ignore...
- **#6668** Chapter 6 - Issue Loading `cnn_dailymail` dataset (issue, open, 0 comments, by hariravichandran). id 2,137,859,935; created 2024-02-16T04:40:56; updated 2024-02-16T04:40:56.
  Body: ### Describe the bug So I am getting this bug when I try to run cell 4 of the Chapter 6 notebook code: `dataset = load_dataset("ccdv/cnn_dailymail", version="3.0.0")` Error Message: ``` --------------------------------------------------------------------------- ValueError Tracebac...
- **#6667** Default config for squad is incorrect (issue, open, 1 comment, by kiddyboots216). id 2,137,769,552; created 2024-02-16T02:36:55; updated 2024-02-23T09:10:00.
  Body: ### Describe the bug If you download Squad, it will download the plain_text version, but the config still specifies "default", so if you set the offline mode the cache will try to look it up according to the config_id which is "default" and this will say; ValueError: Couldn't find cache for squad for config 'default'...
- **#6665** Allow SplitDict setitem to replace existing SplitInfo (PR, closed, 2 comments, by lhoestq). id 2,136,136,425; created 2024-02-15T10:17:08; updated 2024-03-01T16:02:46; closed 2024-03-01T15:56:38.
  Body: Fix this code provided by @clefourrier ```python import datasets import os token = os.getenv("TOKEN") results = datasets.load_dataset("gaia-benchmark/results_public", "2023", token=token, download_mode=datasets.DownloadMode.FORCE_REDOWNLOAD) results["test"] = datasets.Dataset.from_list([row for row in resu...
- **#6664** Revert the changes in `arrow_writer.py` from #6636 (PR, closed, 5 comments, by bryant1410). id 2,135,483,978; created 2024-02-15T01:47:33; updated 2024-02-16T14:02:39; closed 2024-02-16T02:31:11.
  Body: #6636 broke `write_examples_on_file` and `write_batch` from the class `ArrowWriter`. I'm undoing these changes. See #6663. Note the current implementation doesn't keep the order of the columns and the schema, thus setting a wrong schema for each column.
- **#6663** `write_examples_on_file` and `write_batch` are broken in `ArrowWriter` (issue, closed, 3 comments, by bryant1410). id 2,135,480,811; created 2024-02-15T01:43:27; updated 2024-02-16T09:25:00; closed 2024-02-16T09:25:00.
  Body: ### Describe the bug `write_examples_on_file` and `write_batch` are broken in `ArrowWriter` since #6636. The order between the columns and the schema is not preserved anymore. So these functions don't work anymore unless the order happens to align well. ### Steps to reproduce the bug Try to do `write_batch` with any...
- **#6662** fix: show correct package name to install biopython (PR, closed, 2 comments, by BioGeek). id 2,132,425,812; created 2024-02-13T14:15:04; updated 2024-03-01T17:49:48; closed 2024-03-01T17:43:39.
  Body: When you try to download a dataset that uses [biopython](https://github.com/biopython/biopython), like `load_dataset("InstaDeepAI/multi_species_genomes")`, you get the error: ``` >>> from datasets import load_dataset >>> dataset = load_dataset("InstaDeepAI/multi_species_genomes") /home/j.vangoey/.pyenv/versions/m...
- **#6661** Import error on Google Colab (issue, closed, 4 comments, by kithogue). id 2,132,296,267; created 2024-02-13T13:12:40; updated 2024-02-25T16:37:54; closed 2024-02-14T08:04:47.
  Body: ### Describe the bug Cannot be imported on Google Colab, the import throws the following error: ValueError: pyarrow.lib.IpcWriteOptions size changed, may indicate binary incompatibility. Expected 88 from C header, got 72 from PyObject ### Steps to reproduce the bug 1. `! pip install -U datasets` 2. `import dataset...
- **#6660** Automatic Conversion for uint16/uint32 to Compatible PyTorch Dtypes (PR, closed, 2 comments, by mohalisad). id 2,131,977,011; created 2024-02-13T10:24:33; updated 2024-03-01T19:01:57; closed 2024-03-01T18:52:37.
  Body: This PR addresses an issue encountered when utilizing uint16 or uint32 datatypes with datasets, followed by attempting to convert these datasets into PyTorch-compatible formats. Currently, doing so results in a TypeError due to incompatible datatype conversion, as illustrated by the following example: ```python from ...
- **#6659** Change default compression argument for JsonDatasetWriter (PR, closed, 3 comments, by Rexhaif). id 2,129,229,810; created 2024-02-11T23:49:07; updated 2024-03-01T17:51:50; closed 2024-03-01T17:44:55.
  Body: Change default compression type from `None` to "infer", to align with pandas' defaults. Documentation asks the user to supply `to_json_kwargs` with arguments suitable for pandas' `to_json` method. At the same time, while pandas' by default uses ["infer"](https://pandas.pydata.org/docs/reference/api/pandas.DataFrame....
- **#6658** [Resumable IterableDataset] Add IterableDataset state_dict (PR, closed, 20 comments, by lhoestq). id 2,129,158,371; created 2024-02-11T20:35:52; updated 2024-10-01T10:19:38; closed 2024-06-03T19:15:39.
  Body: A simple implementation of a mechanism to resume an IterableDataset. It works by restarting at the latest shard and skip samples. It provides fast resuming (though not instantaneous). Example: ```python from datasets import Dataset, concatenate_datasets ds = Dataset.from_dict({"a": range(5)}).to_iterable_d...
- **#6657** Release not pushed to conda channel (issue, closed, 5 comments, by atulsaurav). id 2,129,147,085; created 2024-02-11T20:05:17; updated 2024-03-06T15:06:22; closed 2024-03-06T15:06:22.
  Body: ### Describe the bug The github actions step to publish the release 2.17.0 to conda channel has failed due to expired token. Can some one please update the anaconda token rerun the failed action? @albertvillanova ? ![image](https://github.com/huggingface/datasets/assets/7138162/1b56ad3d-7643-4778-9cce-4bf531717700...
- **#6656** Error when loading a big local json file (issue, open, 2 comments, by Riccorl). id 2,127,338,377; created 2024-02-09T15:14:21; updated 2024-11-29T10:06:57.
  Body: ### Describe the bug When trying to load big json files from a local directory, `load_dataset` throws the following error ``` Traceback (most recent call last): File "/miniconda3/envs/conda-env/lib/python3.10/site-packages/datasets/builder.py", line 1989, in _prepare_split_single writer.write_table(table) ...
- **#6655** Cannot load the dataset go_emotions (issue, open, 4 comments, by arame). id 2,127,020,042; created 2024-02-09T12:15:39; updated 2024-02-12T09:35:55.
  Body: ### Describe the bug When I run the following code I get an exception; `go_emotions = load_dataset("go_emotions")` > AttributeError Traceback (most recent call last) Cell In[6], [line 1](vscode-notebook-cell:?execution_count=6&line=1) ----> [1](vscode-notebook-cell:?execution_count=6&l...
- **#6654** Batched dataset map throws exception that cannot cast fixed length array to Sequence (issue, closed, 2 comments, by keesjandevries). id 2,126,939,358; created 2024-02-09T11:23:19; updated 2024-02-12T08:26:53; closed 2024-02-12T08:26:53.
  Body: ### Describe the bug I encountered a TypeError when batch processing a dataset with Sequence features in datasets package version 2.16.1. The error arises from a mismatch in handling fixed-size list arrays during the map function execution. Debugging pinpoints the issue to an if-statement in datasets/table.py, line 20...
- **#6653** Set dev version (PR, closed, 2 comments, by albertvillanova). id 2,126,831,929; created 2024-02-09T10:12:02; updated 2024-02-09T10:18:20; closed 2024-02-09T10:12:12.
  Body: null
- **#6652** Release: 2.17.0 (PR, closed, 2 comments, by albertvillanova). id 2,126,760,798; created 2024-02-09T09:25:01; updated 2024-02-09T10:11:48; closed 2024-02-09T10:05:35.
  Body: null
- **#6651** Slice splits support for datasets.load_from_disk (issue, open, 0 comments, by mhorlacher, labels: enhancement). id 2,126,649,626; created 2024-02-09T08:00:21; updated 2024-06-14T14:42:46.
  Body: ### Feature request Support for slice splits in `datasets.load_from_disk`, similar to how it's already supported for `datasets.load_dataset`. ### Motivation Slice splits are convienient in a numer of cases - adding support to `datasets.load_from_disk` would make working with local datasets easier and homogeniz...
- **#6650** AttributeError: 'InMemoryTable' object has no attribute '_batches' (issue, open, 3 comments, by matsuobasho). id 2,125,680,991; created 2024-02-08T17:11:26; updated 2024-02-21T00:34:41.
  Body: ### Describe the bug ``` Traceback (most recent call last): File "finetune.py", line 103, in <module> main(args) File "finetune.py", line 45, in main data_tokenized = data.map(partial(funcs.tokenize_function, tokenizer, File "/opt/conda/envs/ptca/lib/python3.8/site-packages/datasets/dataset_dict....
- **#6649** Minor multi gpu doc improvement (PR, closed, 2 comments, by lhoestq). id 2,124,940,213; created 2024-02-08T11:17:24; updated 2024-02-08T11:23:35; closed 2024-02-08T11:17:35.
  Body: just added torch.no_grad and eval()
- **#6648** Document usage of hfh cli instead of git (PR, closed, 2 comments, by lhoestq). id 2,124,813,589; created 2024-02-08T10:24:56; updated 2024-02-08T13:57:41; closed 2024-02-08T13:51:39.
  Body: (basically the same content as the hfh upload docs, but adapted for datasets)
- **#6647** Update loading.mdx to include "jsonl" file loading. (PR, open, 2 comments, by mosheber). id 2,123,397,569; created 2024-02-07T16:18:08; updated 2024-02-08T15:34:17.
  Body: * A small update to the documentation, noting the ability to load jsonl files.
- **#6646** Better multi-gpu example (PR, closed, 3 comments, by lhoestq). id 2,123,134,128; created 2024-02-07T14:15:01; updated 2024-02-09T17:43:32; closed 2024-02-07T14:59:11.
  Body: Use Qwen1.5-0.5B-Chat as an easy example for multi-GPU the previous example was using a model for translation and the way it was setup was not really the right way to use the model.
- **#6645** Support fsspec 2024.2 (issue, closed, 1 comment, by albertvillanova, labels: enhancement). id 2,122,956,818; created 2024-02-07T12:45:29; updated 2024-02-29T15:12:19; closed 2024-02-29T15:12:19.
  Body: Support fsspec 2024.2. First, we should address: - #6644
- **#6644** Support fsspec 2023.12 (issue, closed, 1 comment, by albertvillanova, labels: enhancement). id 2,122,955,282; created 2024-02-07T12:44:39; updated 2024-02-29T15:12:18; closed 2024-02-29T15:12:18.
  Body: Support fsspec 2023.12 by handling previous and new glob behavior.
- **#6643** Faiss GPU index cannot be serialised when passed to trainer (issue, open, 3 comments, by rubenweitzman). id 2,121,239,039; created 2024-02-06T16:41:00; updated 2024-02-15T10:29:32.
  Body: ### Describe the bug I am working on a retrieval project and encountering I have encountered two issues in the hugging face faiss integration: 1. I am trying to pass in a dataset with a faiss index to the Huggingface trainer. The code works for a cpu faiss index, but doesn't for a gpu one, getting error: ``` ...
- **#6642** Differently dataset object saved than it is loaded. (issue, closed, 2 comments, by MFajcik). id 2,119,085,766; created 2024-02-05T17:28:57; updated 2024-02-06T09:50:19; closed 2024-02-06T09:50:19.
  Body: ### Describe the bug Differently sized object is saved than it is loaded. ### Steps to reproduce the bug Hi, I save dataset in a following way: ``` dataset = load_dataset("json", data_files={ "train": os.path.join(input_folder, f"{task_met...
- **#6641** unicodedecodeerror: 'utf-8' codec can't decode byte 0xac in position 25: invalid start byte (issue, closed, 1 comment, by Hughhuh). id 2,116,963,132; created 2024-02-04T08:49:31; updated 2024-02-06T09:26:07; closed 2024-02-06T09:11:45.
  Body: ### Describe the bug unicodedecodeerror: 'utf-8' codec can't decode byte 0xac in position 25: invalid start byte ### Steps to reproduce the bug ``` import sys sys.getdefaultencoding() 'utf-8' from datasets import load_dataset print(f"Train dataset size: {len(dataset['train'])}") print(f"Test datase...
- **#6640** Sign Language Support (issue, open, 0 comments, by Merterm, labels: enhancement). id 2,115,864,531; created 2024-02-02T21:54:51; updated 2024-02-02T21:54:51.
  Body: ### Feature request Currently, there are only several Sign Language labels, I would like to propose adding all the Signed Languages as new labels which are described in this ISO standard: https://www.evertype.com/standards/iso639/sign-language.html ### Motivation Datasets currently only have labels for several signe...
- **#6639** Run download_and_prepare if missing splits (PR, open, 1 comment, by lhoestq). id 2,114,620,200; created 2024-02-02T10:36:49; updated 2024-02-06T16:54:22.
  Body: A first step towards https://github.com/huggingface/datasets/issues/6529
- **#6638** Cannot download wmt16 dataset (issue, closed, 1 comment, by vidyasiv). id 2,113,329,257; created 2024-02-01T19:41:42; updated 2024-02-01T20:07:29; closed 2024-02-01T20:07:29.
  Body: ### Describe the bug As of this morning (PST) 2/1/2024, seeing the wmt16 dataset is missing from opus , could you suggest an alternative? ``` Downloading data files: 0%| | 0/4 [00:00<?, ?it/s]Tra...
- **#6637** 'with_format' is extremely slow when used together with 'interleave_datasets' or 'shuffle' on IterableDatasets (issue, open, 1 comment, by tobycrisford). id 2,113,025,975; created 2024-02-01T17:16:54; updated 2024-02-05T10:43:47.
  Body: ### Describe the bug If you: 1. Interleave two iterable datasets together with the interleave_datasets function, or shuffle an iterable dataset 2. Set the output format to torch tensors with .with_format('torch') Then iterating through the dataset becomes over 100x slower than it is if you don't apply the torch...
- **#6636** Faster column validation and reordering (PR, closed, 3 comments, by psmyth94). id 2,110,781,097; created 2024-01-31T19:08:28; updated 2024-02-07T19:39:00; closed 2024-02-06T23:03:38.
  Body: I work with bioinformatics data and often these tables have thousands and even tens of thousands of features. These tables are also accompanied by metadata that I do not want to pass in the model. When I perform `set_format('pt', columns=large_column_list)` , it can take several minutes before it finishes. The culprit ...
- **#6635** Fix missing info when loading some datasets from Parquet export (PR, closed, 2 comments, by lhoestq). id 2,110,659,519; created 2024-01-31T17:55:21; updated 2024-02-07T16:48:55; closed 2024-02-07T16:41:04.
  Body: Fix getting the info for script-based datasets with Parquet export with a single config not named "default". E.g. ```python from datasets import load_dataset_builder b = load_dataset_builder("bookcorpus") print(b.info.features) # should print {'text': Value(dtype='string', id=None)} ``` I fixed this by ...
- **#6634** Support data_dir parameter in push_to_hub (PR, closed, 3 comments, by albertvillanova). id 2,110,242,376; created 2024-01-31T14:37:36; updated 2024-02-05T10:32:49; closed 2024-02-05T10:26:40.
  Body: Support `data_dir` parameter in `push_to_hub`. This allows users to organize the data files according to their specific needs. For example, "wikimedia/wikipedia" files could be organized by year and/or date, e.g. "2024/20240101/20240101.en".
- **#6633** dataset viewer requires no-script (PR, closed, 2 comments, by severo). id 2,110,124,475; created 2024-01-31T13:41:54; updated 2024-01-31T14:05:04; closed 2024-01-31T13:59:01.
  Body: null
- **#6632** Fix reload cache with data dir (PR, closed, 2 comments, by lhoestq). id 2,108,541,678; created 2024-01-30T18:52:23; updated 2024-02-06T17:27:35; closed 2024-02-06T17:21:24.
  Body: The cache used to only check for the latest cache directory with a given config_name, but it was wrong (e.g. `default-data_dir=data%2Ffortran-data_dir=data%2Ffortran` instead of `default-data_dir=data%2Ffortran`) I fixed this by not passing the `config_kwargs` to the parent Builder `__init__`, and passing the config...
- **#6631** Fix filelock: use current umask for filelock >= 3.10 (PR, closed, 2 comments, by lhoestq). id 2,107,802,473; created 2024-01-30T12:56:01; updated 2024-01-30T15:34:49; closed 2024-01-30T15:28:37.
  Body: reported in https://github.com/huggingface/evaluate/issues/542 cc @stas00 @williamberrios close https://github.com/huggingface/datasets/issues/6589
- **#6630** Bump max range of dill to 0.3.8 (PR, closed, 4 comments, by ringohoffman). id 2,106,478,275; created 2024-01-29T21:35:55; updated 2024-01-30T16:19:45; closed 2024-01-30T15:12:25.
  Body: Release on Jan 27, 2024: https://pypi.org/project/dill/0.3.8/#history
- **#6629** Support push_to_hub without org/user to default to logged-in user (PR, closed, 3 comments, by albertvillanova). id 2,105,774,482; created 2024-01-29T15:36:52; updated 2024-02-05T12:35:43; closed 2024-02-05T12:29:36.
  Body: This behavior is aligned with: - the behavior of `datasets` before merging #6519 - the behavior described in the corresponding docstring - the behavior of `huggingface_hub.create_repo` Revert "Support push_to_hub canonical datasets (#6519)" - This reverts commit a887ee78835573f5d80f9e414e8443b4caff3541. Fix...
- **#6628** Make CLI test support multi-processing (PR, closed, 3 comments, by albertvillanova). id 2,105,760,502; created 2024-01-29T15:30:09; updated 2024-02-05T10:29:20; closed 2024-02-05T10:23:13.
  Body: Support passing `--num_proc` to CLI test. This was really useful recently to run the command on `pubmed`: https://huggingface.co/datasets/pubmed/discussions/11
- **#6627** Disable `tqdm` bars in non-interactive environments (PR, closed, 2 comments, by mariosasko). id 2,105,735,816; created 2024-01-29T15:18:21; updated 2024-01-29T15:47:34; closed 2024-01-29T15:41:32.
  Body: Replace `disable=False` with `disable=None` in the `tqdm` bars to disable them in non-interactive environments (by default). For more info, see a [similar PR](https://github.com/huggingface/huggingface_hub/pull/2000) in `huggingface_hub`.
- **#6626** Raise error on bad split name (PR, closed, 2 comments, by lhoestq). id 2,105,482,522; created 2024-01-29T13:17:41; updated 2024-01-29T15:18:25; closed 2024-01-29T15:12:18.
  Body: e.g. dashes '-' are not allowed in split names This should add an error message on datasets with unsupported split names like https://huggingface.co/datasets/open-source-metrics/test cc @AndreaFrancis
- **#6624** How to download the laion-coco dataset (issue, closed, 1 comment, by vanpersie32). id 2,103,950,718; created 2024-01-28T03:56:05; updated 2024-02-06T09:43:31; closed 2024-02-06T09:43:31.
  Body: The laion coco dataset is not available now. How to download it https://huggingface.co/datasets/laion/laion-coco
- **#6623** streaming datasets doesn't work properly with multi-node (issue, open, 23 comments, by rohitgr7, labels: enhancement). id 2,103,870,123; created 2024-01-27T23:46:13; updated 2024-10-16T00:55:19.
  Body: ### Feature request Let’s say I have a dataset with 5 samples with values [1, 2, 3, 4, 5], with 2 GPUs (for DDP) and batch size of 2. This dataset is an `IterableDataset` since I am streaming it. Now I split the dataset using `split_dataset_by_node` to ensure it doesn’t get repeated. And since it’s already splitt...
- **#6622** multi-GPU map does not work (issue, closed, 1 comment, by kopyl). id 2,103,780,697; created 2024-01-27T20:06:08; updated 2024-02-08T11:18:21; closed 2024-02-08T11:18:21.
  Body: ### Describe the bug Here is the code for single-GPU processing: https://pastebin.com/bfmEeK2y Here is the code for multi-GPU processing: https://pastebin.com/gQ7i5AQy Here is the video showing that the multi-GPU mapping does not work as expected (there are so many things wrong here, it's better to watch the 3-min...
- **#6621** deleted (issue, closed, 0 comments, by kopyl). id 2,103,675,294; created 2024-01-27T16:59:58; updated 2024-01-27T17:14:43; closed 2024-01-27T17:14:43.
  Body: ...
- **#6620** wiki_dpr.py error (ID mismatch between lines {id} and vector {vec_id} (issue, closed, 1 comment, by kiehls90). id 2,103,110,536; created 2024-01-27T01:00:09; updated 2024-02-06T09:40:19; closed 2024-02-06T09:40:19.
  Body: ### Describe the bug I'm trying to run a rag example, and the dataset is wiki_dpr. wiki_dpr download and extracting have been completed successfully. However, at the generating train split stage, an error from wiki_dpr.py keeps popping up. Especially in "_generate_examples" : 1. The following error occurs in the...
- **#6619** Migrate from `setup.cfg` to `pyproject.toml` (PR, closed, 2 comments, by mariosasko). id 2,102,407,478; created 2024-01-26T15:27:10; updated 2024-01-26T15:53:40; closed 2024-01-26T15:47:32.
  Body: Based on https://github.com/huggingface/huggingface_hub/pull/1971 in `hfh`
- **#6618** While importing load_dataset from datasets (issue, closed, 5 comments, by suprith-hub). id 2,101,868,198; created 2024-01-26T09:21:57; updated 2024-07-23T09:31:07; closed 2024-02-06T09:25:54.
  Body: ### Describe the bug cannot import name 'DEFAULT_CIPHERS' from 'urllib3.util.ssl_' this is the error i received ### Steps to reproduce the bug from datasets import load_dataset ### Expected behavior No errors ### Environment info python 3.11.5
- **#6617** Fix CI: pyarrow 15, pandas 2.2 and sqlachemy (PR, closed, 2 comments, by lhoestq). id 2,100,459,449; created 2024-01-25T13:57:41; updated 2024-01-26T14:56:46; closed 2024-01-26T14:50:44.
  Body: this should fix the CI failures on `main` close https://github.com/huggingface/datasets/issues/5477
- **#6616** Use schema metadata only if it matches features (PR, closed, 2 comments, by lhoestq). id 2,100,125,709; created 2024-01-25T11:01:14; updated 2024-01-26T16:25:24; closed 2024-01-26T16:19:12.
  Body: e.g. if we use `map` in arrow format and transform the table, the returned table might have new columns but the metadata might be wrong
- **#6615** ... (issue, closed, 1 comment, by ftkeys). id 2,098,951,409; created 2024-01-24T19:37:03; updated 2024-01-24T19:42:30; closed 2024-01-24T19:40:11.
  Body: ...
- **#6614** `datasets/downloads` cleanup tool (issue, open, 0 comments, by stas00, labels: enhancement). id 2,098,884,520; created 2024-01-24T18:52:10; updated 2024-01-24T18:55:09.
  Body: ### Feature request Splitting off https://github.com/huggingface/huggingface_hub/issues/1997 - currently `huggingface-cli delete-cache` doesn't take care of cleaning `datasets` temp files e.g. I discovered having millions of files under `datasets/downloads` cache, I had to do: ``` sudo find /data/huggingface/...
- **#6612** cnn_dailymail repeats itself (issue, closed, 1 comment, by KeremZaman). id 2,098,078,210; created 2024-01-24T11:38:25; updated 2024-02-01T08:14:50; closed 2024-02-01T08:14:50.
  Body: ### Describe the bug When I try to load `cnn_dailymail` dataset, it takes longer than usual and when I checked the dataset it's 3x bigger than it's supposed to be. Check https://huggingface.co/datasets/cnn_dailymail: it says 287k rows for train. But when I check length of train split it says 861339. Also I che...
- **#6611** `load_from_disk` with large dataset from S3 runs into `botocore.exceptions.ClientError` (issue, open, 0 comments, by zotroneneis). id 2,096,004,858; created 2024-01-23T12:37:57; updated 2024-01-23T12:37:57.
  Body: ### Describe the bug When loading a large dataset (>1000GB) from S3 I run into the following error: ``` Traceback (most recent call last): File "/home/alp/.local/lib/python3.10/site-packages/s3fs/core.py", line 113, in _error_wrapper return await func(*args, **kwargs) File "/home/alp/.local/lib/python3....
- **#6610** cast_column to Sequence(subfeatures_dict) has err (issue, closed, 2 comments, by neiblegy). id 2,095,643,711; created 2024-01-23T09:32:32; updated 2024-01-25T02:15:23; closed 2024-01-25T02:15:23.
  Body: ### Describe the bug I am working with the following demo code: ``` from datasets import load_dataset from datasets.features import Sequence, Value, ClassLabel, Features ais_dataset = load_dataset("/data/ryan.gao/ais_dataset_cache/raw/1978/") ais_dataset = ais_dataset["train"] def add_class(example): ...
- **#6609** Wrong path for cache directory in offline mode (issue, closed, 5 comments, by je-santos). id 2,095,085,650; created 2024-01-23T01:47:19; updated 2024-02-06T17:21:25; closed 2024-02-06T17:21:25.
  Body: ### Describe the bug Dear huggingfacers, I'm trying to use a subset of the-stack dataset. When I run the command the first time ``` dataset = load_dataset( path='bigcode/the-stack', data_dir='data/fortran', split='train' ) ``` It downloads the files and caches them normally. Nevertheless, ...
- **#6608** Add `with_rank` param to `Dataset.filter` (PR, closed, 2 comments, by mariosasko). id 2,094,153,292; created 2024-01-22T15:19:16; updated 2024-01-29T16:43:11; closed 2024-01-29T16:36:53.
  Body: Fix #6564
- **#6607** Update features.py to avoid bfloat16 unsupported error (PR, closed, 3 comments, by skaulintel). id 2,091,766,063; created 2024-01-20T00:39:44; updated 2024-05-17T09:46:29; closed 2024-05-17T09:40:13.
  Body: Fixes https://github.com/huggingface/datasets/issues/6566 Let me know if there's any tests I need to clear.
- **#6606** Dedicated RNG object for fingerprinting (PR, closed, 2 comments, by mariosasko). id 2,091,088,785; created 2024-01-19T18:34:47; updated 2024-01-26T15:11:38; closed 2024-01-26T15:05:34.
  Body: Closes https://github.com/huggingface/datasets/issues/6604, closes https://github.com/huggingface/datasets/issues/2775
- **#6605** ELI5 no longer available, but referenced in example code (issue, closed, 1 comment, by drdsgvo). id 2,090,188,376; created 2024-01-19T10:21:52; updated 2024-02-01T17:58:23; closed 2024-02-01T17:58:22.
  Body: Here, an example code is given: https://huggingface.co/docs/transformers/tasks/language_modeling This code + article references the ELI5 dataset. ELI5 is no longer available, as the ELI5 dataset page states: https://huggingface.co/datasets/eli5 "Defunct: Dataset "eli5" is defunct and no longer accessible due to u...
- **#6604** Transform fingerprint collisions due to setting fixed random seed (issue, closed, 2 comments, by normster). id 2,089,713,945; created 2024-01-19T06:32:25; updated 2024-01-26T15:05:35; closed 2024-01-26T15:05:35.
  Body: ### Describe the bug The transform fingerprinting logic relies on the `random` library for random bits when the function is not hashable (e.g. bound methods as used in `trl`: https://github.com/huggingface/trl/blob/main/trl/trainer/dpo_trainer.py#L356). This causes collisions when the training code sets a fixed random...
- **#6603** datasets map `cache_file_name` does not work (issue, open, 2 comments, by ChenchaoZhao). id 2,089,230,766; created 2024-01-18T23:08:30; updated 2024-01-28T04:01:15.
  Body: ### Describe the bug In the documentation `datasets.Dataset.map` arg `cache_file_name` is said to be a string, but it doesn't work. ### Steps to reproduce the bug 1. pick a dataset 2. write a map function 3. do `ds.map(..., cache_file_name='some_filename')` 4. it crashes ### Expected behavior It will tell you t...
- **#6602** Index error when data is large (issue, open, 1 comment, by ChenchaoZhao). id 2,089,217,483; created 2024-01-18T23:00:47; updated 2025-04-16T04:13:01.
  Body: ### Describe the bug At `save_to_disk` step, the `max_shard_size` by default is `500MB`. However, one row of the dataset might be larger than `500MB` then the saving will throw an index error. Without looking at the source code, the bug is due to wrong calculation of number of shards which i think is `total_size / m...
- **#6601** add safety checks when using only part of dataset (PR, open, 1 comment, by benseddikismail). id 2,088,624,054; created 2024-01-18T16:16:59; updated 2024-02-08T14:33:10.
  Body: Added some checks to prevent errors that arrise when using evaluate.py on only a portion of the squad 2.0 dataset.
- **#6600** Loading CSV exported dataset has unexpected format (issue, open, 2 comments, by OrianeN). id 2,088,446,385; created 2024-01-18T14:48:27; updated 2024-01-23T14:42:32.
  Body: ### Describe the bug I wanted to be able to save a HF dataset for translations and load it again in another script, but I'm a bit confused with the documentation and the result I've got so I'm opening this issue to ask if this behavior is as expected. ### Steps to reproduce the bug The documentation I've mainly cons...
- **#6599** Easy way to segment into 30s snippets given an m4a file and a vtt file (issue, closed, 2 comments, by RonanKMcGovern, labels: enhancement). id 2,086,684,664; created 2024-01-17T17:51:40; updated 2024-01-23T10:42:17; closed 2024-01-22T15:35:49.
  Body: ### Feature request Uploading datasets is straightforward thanks to the ability to push Audio to hub. However, it would be nice if the data (text and audio) could be segmented when being pushed (if not possible already). ### Motivation It's easy to create a vtt file from an audio file. If there could be auto-segment...
- **#6598** Unexpected keyword argument 'hf' when downloading CSV dataset from S3 (issue, closed, 8 comments, by dguenms). id 2,084,236,605; created 2024-01-16T15:16:01; updated 2025-01-31T15:35:33; closed 2024-07-23T14:30:10.
  Body: ### Describe the bug I receive this error message when using `load_dataset` with "csv" path and `dataset_files=s3://...`: ``` TypeError: Session.__init__() got an unexpected keyword argument 'hf' ``` I found a similar issue here: https://stackoverflow.com/questions/77596258/aws-issue-load-dataset-from-s3-fails-w...
- **#6597** Dataset.push_to_hub of a canonical dataset creates an additional dataset under the user namespace (issue, closed, 6 comments, by albertvillanova, labels: bug). id 2,083,708,521; created 2024-01-16T11:27:07; updated 2024-02-05T12:29:37; closed 2024-02-05T12:29:37.
  Body: While using `Dataset.push_to_hub` of a canonical dataset, an additional dataset was created under my user namespace. ## Steps to reproduce the bug The command: ```python commit_info = ds.push_to_hub( "caner", config_name="default", commit_message="Convert dataset to Parquet", commit_descriptio...
- **#6596** Drop redundant None guard. (PR, closed, 2 comments, by xkszltl). id 2,083,108,156; created 2024-01-16T06:31:54; updated 2024-01-16T17:16:16; closed 2024-01-16T17:05:52.
  Body: `xxx if xxx is not None else None` is no-op.
- **#6595** Loading big dataset raises pyarrow.lib.ArrowNotImplementedError 2 (issue, closed, 14 comments, by kopyl). id 2,082,896,148; created 2024-01-16T02:03:09; updated 2024-01-27T18:26:33; closed 2024-01-26T02:28:32.
  Body: ### Describe the bug I'm aware of the issue #5695 . I'm using a modified SDXL trainer: https://github.com/kopyl/diffusers/blob/5e70f604155aeecee254a5c63c5e4236ad4a0d3d/examples/text_to_image/train_text_to_image_sdxl.py#L1027C16-L1027C16 So i 1. Map dataset 2. Save to disk 3. Try to upload: ``` import data...
- **#6594** IterableDataset sharding logic needs improvement (issue, open, 1 comment, by rwightman). id 2,082,748,275; created 2024-01-15T22:22:36; updated 2024-10-15T06:27:13.
  Body: ### Describe the bug The sharding of IterableDatasets with respect to distributed and dataloader worker processes appears problematic with significant performance traps and inconsistencies wrt to distributed train processes vs worker processes. Splitting across num_workers (per train process loader processes) and...
- **#6592** Logs are delayed when doing .map when `docker logs` (issue, closed, 1 comment, by kopyl). id 2,082,410,257; created 2024-01-15T17:05:21; updated 2024-02-12T17:35:21; closed 2024-02-12T17:35:21.
  Body: ### Describe the bug When I run my SD training in a Docker image and then listen to logs like `docker logs train -f`, the progress bar is delayed. It's updating every few percent. When you have a large dataset that has to be mapped (like 1+ million samples), it's crucial to see the updates in real-time, not every co...
- **#6591** The datasets models housed in Dropbox can't support a lot of users downloading them (issue, closed, 1 comment, by RDaneelOlivav). id 2,082,378,957; created 2024-01-15T16:43:38; updated 2024-01-22T23:18:09; closed 2024-01-22T23:18:09.
  Body: ### Describe the bug I'm using the datasets ``` from datasets import load_dataset, Audio dataset = load_dataset("PolyAI/minds14", name="en-US", split="train") ``` And it seems that sometimes when I imagine a lot of users are accessing the same resources, the Dropbox host fails: `raise ConnectionError(...
- **#6590** Feature request: Multi-GPU dataset mapping for SDXL training (issue, open, 0 comments, by kopyl, labels: enhancement). id 2,082,000,084; created 2024-01-15T13:06:06; updated 2024-01-15T13:07:07.
  Body: ### Feature request We need to speed up SDXL dataset pre-process. Please make it possible to use multiple GPUs for the [official SDXL trainer](https://github.com/huggingface/diffusers/blob/main/examples/text_to_image/train_text_to_image_sdxl.py) :) ### Motivation Pre-computing 3 million of images takes around ...
- **#6589** After `2.16.0` version, there are `PermissionError` when users use shared cache_dir (issue, closed, 2 comments, by minhopark-neubla). id 2,081,358,619; created 2024-01-15T06:46:27; updated 2024-02-02T07:55:38; closed 2024-01-30T15:28:38.
  Body: ### Describe the bug - We use shared `cache_dir` using `HF_HOME="{shared_directory}"` - After dataset version 2.16.0, datasets uses `filelock` package for file locking #6445 - But, `filelock` package make `.lock` file with `644` permission - Dataset is not available to other users except the user who created the ...
- **#6588** fix os.listdir return name is empty string (issue, closed, 0 comments, by d710055071). id 2,081,284,253; created 2024-01-15T05:34:36; updated 2024-01-24T10:08:29; closed 2024-01-24T10:08:29.
  Body: ### Describe the bug xlistdir return name is empty string Overloaded os.listdir ### Steps to reproduce the bug ```python from datasets.download.streaming_download_manager import xjoin from datasets.download.streaming_download_manager import xlistdir config = DownloadConfig(storage_options=options) manger = Str...
- **#6587** Allow concatenation of datasets with mixed structs (PR, closed, 3 comments, by Dref360). id 2,080,348,016; created 2024-01-13T15:33:20; updated 2024-02-15T15:20:06; closed 2024-02-08T14:38:32.
  Body: Fixes #6466 The idea is to do a recursive check for structs. PyArrow handles it well enough. For a demo you can do: ```python from datasets import Dataset, concatenate_datasets ds = Dataset.from_dict({'speaker': [{'name': 'Ben', 'email': None}]}) ds2 = Dataset.from_dict({'speaker': [{'name': 'Fred', 'e...
- **#6586** keep more info in DatasetInfo.from_merge #6585 (PR, closed, 4 comments, by JochenSiegWork). id 2,079,192,651; created 2024-01-12T16:08:16; updated 2024-01-26T15:59:35; closed 2024-01-26T15:53:28.
  Body: * try not to merge DatasetInfos if they're equal * fixes losing DatasetInfo during parallel Dataset.map
- **#6585** losing DatasetInfo in Dataset.map when num_proc > 1 (issue, open, 2 comments, by JochenSiegWork). id 2,078,874,005; created 2024-01-12T13:39:19; updated 2024-01-12T14:08:24.
  Body: ### Describe the bug Hello and thanks for developing this package! When I process a Dataset with the map function using multiple processors some set attributes of the DatasetInfo get lost and are None in the resulting Dataset. ### Steps to reproduce the bug ```python from datasets import Dataset, DatasetInfo...
- **#6584** np.fromfile not supported (issue, open, 6 comments, by d710055071). id 2,078,454,878; created 2024-01-12T09:46:17; updated 2024-01-15T05:20:50.
  Body: How to do np.fromfile to use it like np.load ```python def xnumpy_fromfile(filepath_or_buffer, *args, download_config: Optional[DownloadConfig] = None, **kwargs): import numpy as np if hasattr(filepath_or_buffer, "read"): return np.fromfile(filepath_or_buffer, *args, **kwargs) else: ...
- **#6583** remove eli5 test (PR, closed, 2 comments, by lhoestq). id 2,077,049,491; created 2024-01-11T16:05:20; updated 2024-01-11T16:15:34; closed 2024-01-11T16:09:24.
  Body: since the dataset is defunct
- **#6582** Fix for Incorrect ex_iterable used with multi num_worker (PR, closed, 2 comments, by kq-chen). id 2,076,072,101; created 2024-01-11T08:49:43; updated 2024-03-01T19:09:14; closed 2024-03-01T19:02:33.
  Body: Corrects an issue where `self._ex_iterable` was erroneously used instead of `ex_iterable`, when both Distributed Data Parallel (DDP) and multi num_worker are used concurrently. This improper usage led to the generation of incorrect `shards_indices`, subsequently causing issues with the control flow responsible for work...
- **#6581** fix os.listdir return name is empty string (PR, closed, 4 comments, by d710055071). id 2,075,919,265; created 2024-01-11T07:10:55; updated 2024-01-24T10:14:43; closed 2024-01-24T10:08:28.
  Body: fix #6588 xlistdir return name is empty string for example: ` from datasets.download.streaming_download_manager import xjoin from datasets.download.streaming_download_manager import xlistdir config = DownloadConfig(storage_options=options) manger = StreamingDownloadManager("ILSVRC2012",download_config=config...