Dataset schema (column | type | observed range of values):

  id               int64         599M to 3.29B
  url              string        lengths 58 to 61
  html_url         string        lengths 46 to 51
  number           int64         1 to 7.72k
  title            string        lengths 1 to 290
  state            string        2 values
  comments         int64         0 to 70
  created_at       timestamp[s]  2020-04-14 10:18:02 to 2025-08-05 09:28:51
  updated_at       timestamp[s]  2020-04-27 16:04:17 to 2025-08-05 11:39:56
  closed_at        timestamp[s]  2020-04-14 12:01:40 to 2025-08-01 05:15:45
  user_login       string        lengths 3 to 26
  labels           list          lengths 0 to 4
  body             string        lengths 0 to 228k
  is_pull_request  bool          2 classes
#6788 (issue, closed) A Question About the Map Function
  author: codeprompter | comments: 2 | labels: []
  created: 2024-04-06T11:45:23 | updated: 2024-04-11T05:29:35 | closed: 2024-04-11T05:29:35
  url: https://github.com/huggingface/datasets/issues/6788
  api: https://api.github.com/repos/huggingface/datasets/issues/6788 | id: 2,229,207,521
  body: ### Describe the bug Hello, I have a question regarding the map function in the Hugging Face datasets. The situation is as follows: when I load a jsonl file using load_dataset(..., streaming=False), and then utilize the map function to process it, I specify that the returned example should be of type Torch.ten...

#6787 (issue, open) TimeoutError in map
  author: Jiaxin-Wen | comments: 7 | labels: []
  created: 2024-04-06T06:25:39 | updated: 2024-08-14T02:09:57 | closed: null
  url: https://github.com/huggingface/datasets/issues/6787
  api: https://api.github.com/repos/huggingface/datasets/issues/6787 | id: 2,229,103,264
  body: ### Describe the bug ```python from datasets import Dataset def worker(example): while True: continue example['a'] = 100 return example data = Dataset.from_list([{"a": 1}, {"a": 2}]) data = data.map(worker) print(data[0]) ``` I'm implementing a worker function whose runtime will de...

#6786 (pull request, open) Make Image cast storage faster
  author: Modexus | comments: 8 | labels: []
  created: 2024-04-05T17:00:46 | updated: 2024-10-01T09:09:14 | closed: null
  url: https://github.com/huggingface/datasets/pull/6786
  api: https://api.github.com/repos/huggingface/datasets/issues/6786 | id: 2,228,463,776
  body: PR for issue #6782. Makes `cast_storage` of the `Image` class faster by removing the slow call to `.pylist`. Instead directly convert each `ListArray` item to either `Array2DExtensionType` or `Array3DExtensionType`. This also preserves the `dtype` removing the warning if the array is already `uint8`.

#6785 (pull request, closed) rename datasets-server to dataset-viewer
  author: severo | comments: 2 | labels: []
  created: 2024-04-05T16:37:05 | updated: 2024-04-08T12:41:13 | closed: 2024-04-08T12:35:02
  url: https://github.com/huggingface/datasets/pull/6785
  api: https://api.github.com/repos/huggingface/datasets/issues/6785 | id: 2,228,429,852
  body: See https://github.com/huggingface/dataset-viewer/issues/2650 Tell me if it's OK, or if it's a breaking change that must be handled differently. Also note that the docs page is still https://huggingface.co/docs/datasets-server/, so I didn't change it. And the API URL is still https://datasets-server.huggingfac...

#6784 (pull request, closed) Extract data on the fly in packaged builders
  author: mariosasko | comments: 3 | labels: []
  created: 2024-04-05T16:12:25 | updated: 2024-04-16T16:37:47 | closed: 2024-04-16T16:31:29
  url: https://github.com/huggingface/datasets/pull/6784
  api: https://api.github.com/repos/huggingface/datasets/issues/6784 | id: 2,228,390,504
  body: Instead of waiting for data files to be extracted in the packaged builders, we can prepend the compression prefix and extract them as they are being read (using `fsspec`). This saves disk space (deleting extracted archives is not set by default) and slightly speeds up dataset generation (less disk reads)

#6783 (issue, closed) AttributeError: module 'numpy' has no attribute 'object'. in Kaggle Notebook
  author: petrov826 | comments: 2 | labels: []
  created: 2024-04-05T14:31:48 | updated: 2024-04-11T17:18:53 | closed: 2024-04-11T17:18:53
  url: https://github.com/huggingface/datasets/issues/6783
  api: https://api.github.com/repos/huggingface/datasets/issues/6783 | id: 2,228,179,466
  body: ### Describe the bug # problem I can't resample audio dataset in Kaggle Notebook. It looks like some code in `datasets` library use aliases that were deprecated in NumPy 1.20. ## code for resampling ``` from datasets import load_dataset, Audio from transformers import AutoFeatureExtractor from transformers imp...

#6782 (issue, open) Image cast_storage very slow for arrays (e.g. numpy, tensors)
  author: Modexus | comments: 3 | labels: []
  created: 2024-04-05T13:46:54 | updated: 2024-04-10T14:36:13 | closed: null
  url: https://github.com/huggingface/datasets/issues/6782
  api: https://api.github.com/repos/huggingface/datasets/issues/6782 | id: 2,228,081,955
  body: Update: see comments below ### Describe the bug Operations that save an image from a path are very slow. I believe the reason for this is that the image data (`numpy`) is converted into `pyarrow` format but then back to python using `.pylist()` before being converted to a numpy array again. `pylist` is alread...

#6781 (pull request, closed) Remove get_inferred_type from ArrowWriter write_batch
  author: Modexus | comments: 2 | labels: []
  created: 2024-04-05T13:21:05 | updated: 2024-04-09T07:49:11 | closed: 2024-04-09T07:49:11
  url: https://github.com/huggingface/datasets/pull/6781
  api: https://api.github.com/repos/huggingface/datasets/issues/6781 | id: 2,228,026,497
  body: Inferring the type seems to be unnecessary given that the pyarrow array has already been created. Because pyarrow array creation is sometimes extremely slow this doubles the time write_batch takes.

#6780 (pull request, closed) Fix CI
  author: mariosasko | comments: 2 | labels: []
  created: 2024-04-04T17:45:04 | updated: 2024-04-04T18:46:04 | closed: 2024-04-04T18:23:34
  url: https://github.com/huggingface/datasets/pull/6780
  api: https://api.github.com/repos/huggingface/datasets/issues/6780 | id: 2,226,160,096
  body: Updates the `wmt_t2t` test to pin the `revision` to the version with a loading script (cc @albertvillanova). Additionally, it replaces the occurrences of the `lhoestq/test` repo id with `hf-internal-testing/dataset_with_script` and re-enables logging checks in the `Dataset.from_sql` tests.

#6779 (pull request, closed) Install dependencies with `uv` in CI
  author: mariosasko | comments: 2 | labels: []
  created: 2024-04-04T17:02:51 | updated: 2024-04-08T13:34:01 | closed: 2024-04-08T13:27:44
  url: https://github.com/huggingface/datasets/pull/6779
  api: https://api.github.com/repos/huggingface/datasets/issues/6779 | id: 2,226,075,551
  body: `diffusers` (https://github.com/huggingface/diffusers/pull/7116) and `huggingface_hub` (https://github.com/huggingface/huggingface_hub/pull/2072) also use `uv` to install their dependencies, so we can do the same here. It seems to make the "Install dependencies" step in the `ubuntu` jobs 5-8x faster and 1.5-2x in th...

#6778 (issue, open) Dataset.to_csv() missing commas in columns with lists
  author: mpickard-dataprof | comments: 1 | labels: []
  created: 2024-04-04T16:46:13 | updated: 2024-04-08T15:24:41 | closed: null
  url: https://github.com/huggingface/datasets/issues/6778
  api: https://api.github.com/repos/huggingface/datasets/issues/6778 | id: 2,226,040,636
  body: ### Describe the bug The `to_csv()` method does not output commas in lists. So when the Dataset is loaded back in the data structure of the column with a list is not correct. Here's an example: Obviously, it's not as trivial as inserting commas in the list, since its a comma-separated file. But hopefully there...
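Issue #6778 above is about list columns losing their commas on a CSV round-trip. A common workaround (a stdlib sketch, not the library's fix) is to JSON-encode list columns before writing and decode them after reading, so the cell contents stay unambiguous:

```python
import csv
import io
import json

rows = [{"id": 1, "vals": [1, 2, 3]}, {"id": 2, "vals": [4, 5]}]

# Write: JSON-encode the list column so each CSV cell is a parseable string.
buf = io.StringIO()
writer = csv.DictWriter(buf, fieldnames=["id", "vals"])
writer.writeheader()
for row in rows:
    writer.writerow({"id": row["id"], "vals": json.dumps(row["vals"])})

# Read: decode the JSON cell back into a real Python list.
buf.seek(0)
restored = [
    {"id": int(r["id"]), "vals": json.loads(r["vals"])}
    for r in csv.DictReader(buf)
]
assert restored == rows
```

The same encode-on-write, decode-on-read pattern applies to any nested value that a flat CSV cell cannot represent natively.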
#6777 (issue, open) .Jsonl metadata not detected
  author: nighting0le01 | comments: 5 | labels: []
  created: 2024-04-04T06:31:53 | updated: 2024-04-05T21:14:48 | closed: null
  url: https://github.com/huggingface/datasets/issues/6777
  api: https://api.github.com/repos/huggingface/datasets/issues/6777 | id: 2,224,611,247
  body: ### Describe the bug Hi I have the following directory structure: |--dataset | |-- images | |-- metadata1000.csv | |-- metadata1000.jsonl | |-- padded_images Example of metadata1000.jsonl file {"caption": "a drawing depicts a full shot of a black t-shirt with a triangular pattern on the front there is a white...

#6775 (issue, open) IndexError: Invalid key: 0 is out of bounds for size 0
  author: kk2491 | comments: 7 | labels: []
  created: 2024-04-03T17:06:30 | updated: 2024-04-08T01:24:35 | closed: null
  url: https://github.com/huggingface/datasets/issues/6775
  api: https://api.github.com/repos/huggingface/datasets/issues/6775 | id: 2,223,457,792
  body: ### Describe the bug I am trying to fine-tune llama2-7b model in GCP. The notebook I am using for this can be found [here](https://github.com/GoogleCloudPlatform/vertex-ai-samples/blob/main/notebooks/community/model_garden/model_garden_pytorch_llama2_peft_finetuning.ipynb). When I use the dataset given in the exa...

#6774 (issue, open) Generating split is very slow when Image format is PNG
  author: Tramac | comments: 1 | labels: []
  created: 2024-04-03T07:47:31 | updated: 2024-04-10T17:28:17 | closed: null
  url: https://github.com/huggingface/datasets/issues/6774
  api: https://api.github.com/repos/huggingface/datasets/issues/6774 | id: 2,222,164,316
  body: ### Describe the bug When I create a dataset, it gets stuck while generating cached data. The image format is PNG, and it will not get stuck when the image format is jpeg. ![image](https://github.com/huggingface/datasets/assets/22740819/3b888fd8-e6d6-488f-b828-95a8f206a152) After debugging, I know that it is b...

#6773 (issue, closed) Dataset on Hub re-downloads every time?
  author: manestay | comments: 5 | labels: []
  created: 2024-04-02T17:23:22 | updated: 2024-04-08T18:43:45 | closed: 2024-04-08T18:43:45
  url: https://github.com/huggingface/datasets/issues/6773
  api: https://api.github.com/repos/huggingface/datasets/issues/6773 | id: 2,221,049,121
  body: ### Describe the bug Hi, I have a dataset on the hub [here](https://huggingface.co/datasets/manestay/borderlines). It has 1k+ downloads, which I sure is mostly just me and my colleagues working with it. It should have far fewer, since I'm using the same machine with a properly set up HF_HOME variable. However, whene...

#6772 (pull request, closed) `remove_columns`/`rename_columns` doc fixes
  author: mariosasko | comments: 2 | labels: []
  created: 2024-04-02T15:41:28 | updated: 2024-04-02T16:28:45 | closed: 2024-04-02T16:17:46
  url: https://github.com/huggingface/datasets/pull/6772
  api: https://api.github.com/repos/huggingface/datasets/issues/6772 | id: 2,220,851,533
  body: Use more consistent wording in `remove_columns` to explain why it's faster than `map` and update `remove_columns`/`rename_columns` docstrings to fix in-place calls. Reported in https://github.com/huggingface/datasets/issues/6700

#6771 (issue, closed) Datasets FileNotFoundError when trying to generate examples.
  author: RitchieP | comments: 2 | labels: []
  created: 2024-04-02T10:24:57 | updated: 2024-04-04T14:22:03 | closed: 2024-04-04T14:22:03
  url: https://github.com/huggingface/datasets/issues/6771
  api: https://api.github.com/repos/huggingface/datasets/issues/6771 | id: 2,220,131,457
  body: ### Discussed in https://github.com/huggingface/datasets/discussions/6768 <div type='discussions-op-text'> <sup>Originally posted by **RitchieP** April 1, 2024</sup> Currently, I have a dataset hosted on Huggingface with a custom script [here](https://huggingface.co/datasets/RitchieP/VerbaLex_voice). I'm loa...

#6770 (issue, closed) [Bug Report] `datasets==2.18.0` is not compatible with `fsspec==2023.12.2`
  author: fshp971 | comments: 1 | labels: []
  created: 2024-04-01T20:17:48 | updated: 2024-04-11T17:31:44 | closed: 2024-04-11T17:31:44
  url: https://github.com/huggingface/datasets/issues/6770
  api: https://api.github.com/repos/huggingface/datasets/issues/6770 | id: 2,218,991,883
  body: ### Describe the bug `Datasets==2.18.0` is not compatible with `fsspec==2023.12.2`. I have to downgrade fsspec to `fsspec==2023.10.0` to make `Datasets==2.18.0` work properly. ### Steps to reproduce the bug To reproduce the bug: 1. Make sure that `Datasets==2.18.0` and `fsspec==2023.12.2`. 2. Run the following ...

#6769 (issue, open) (Willing to PR) Datasets with custom python objects
  author: fzyzcjy | comments: 0 | labels: ["enhancement"]
  created: 2024-04-01T13:18:47 | updated: 2024-04-01T13:36:58 | closed: null
  url: https://github.com/huggingface/datasets/issues/6769
  api: https://api.github.com/repos/huggingface/datasets/issues/6769 | id: 2,218,242,015
  body: ### Feature request Hi thanks for the library! I would like to have a huggingface Dataset, and one of its column is custom (non-serializable) Python objects. For example, a minimal code: ``` class MyClass: pass dataset = datasets.Dataset.from_list([ dict(a=MyClass(), b='hello'), ]) ``` It gives...

#6767 (pull request, closed) fixing the issue 6755(small typo)
  author: JINO-ROHIT | comments: 2 | labels: []
  created: 2024-03-31T16:13:37 | updated: 2024-04-02T14:14:02 | closed: 2024-04-02T14:01:18
  url: https://github.com/huggingface/datasets/pull/6767
  api: https://api.github.com/repos/huggingface/datasets/issues/6767 | id: 2,217,065,412
  body: Fixed the issue #6755 on the typo mistake

#6765 (issue, closed) Compatibility issue between s3fs, fsspec, and datasets
  author: njbrake | comments: 4 | labels: []
  created: 2024-03-29T19:57:24 | updated: 2024-11-12T14:50:48 | closed: 2024-04-03T14:33:12
  url: https://github.com/huggingface/datasets/issues/6765
  api: https://api.github.com/repos/huggingface/datasets/issues/6765 | id: 2,215,933,515
  body: ### Describe the bug Here is the full error stack when installing: ``` ERROR: pip's dependency resolver does not currently take into account all the packages that are installed. This behaviour is the source of the following dependency conflicts. datasets 2.18.0 requires fsspec[http]<=2024.2.0,>=2023.1.0, but you ...

#6764 (issue, open) load_dataset can't work with symbolic links
  author: VladimirVincan | comments: 1 | labels: ["enhancement"]
  created: 2024-03-29T17:49:28 | updated: 2025-04-29T15:06:28 | closed: null
  url: https://github.com/huggingface/datasets/issues/6764
  api: https://api.github.com/repos/huggingface/datasets/issues/6764 | id: 2,215,767,119
  body: ### Feature request Enable the `load_dataset` function to load local datasets with symbolic links. E.g, this dataset can be loaded: ├── example_dataset/ │ ├── data/ │ │ ├── train/ │ │ │ ├── file0 │ │ │ ├── file1 │ │ ├── dev/ │ │ │ ├── file2 │ │ │ ├── file3 │ ├── metad...
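One plausible mechanism behind #6764 (an assumption about the cause, not a confirmed diagnosis) is that directory walks skip symlinked directories by default. The stdlib shows the behavior directly: `os.walk` only descends into a symlinked directory when `followlinks=True`.

```python
import os
import tempfile

# Build: real_data/file0, and example_dataset/train -> real_data (a symlink).
root = tempfile.mkdtemp()
real_dir = os.path.join(root, "real_data")
os.makedirs(real_dir)
open(os.path.join(real_dir, "file0"), "w").close()

dataset_root = os.path.join(root, "example_dataset")
os.makedirs(dataset_root)
os.symlink(real_dir, os.path.join(dataset_root, "train"))

def list_files(base, follow):
    """Collect all file paths under base, optionally following dir symlinks."""
    found = []
    for dirpath, _dirs, files in os.walk(base, followlinks=follow):
        found += [os.path.join(dirpath, f) for f in files]
    return found

assert list_files(dataset_root, follow=False) == []     # symlinked dir skipped
assert len(list_files(dataset_root, follow=True)) == 1  # file0 found
```

Any loader built on a default `os.walk` (or an equivalent) would therefore see an empty dataset directory here.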
#6763 (pull request, open) Fix issue with case sensitivity when loading dataset from local cache
  author: Sumsky21 | comments: 1 | labels: []
  created: 2024-03-28T14:52:35 | updated: 2024-04-20T12:16:45 | closed: null
  url: https://github.com/huggingface/datasets/pull/6763
  api: https://api.github.com/repos/huggingface/datasets/issues/6763 | id: 2,213,440,804
  body: When a dataset with upper-cases in its name is first loaded using `load_dataset()`, the local cache directory is created with all lowercase letters. However, upon subsequent loads, the current version attempts to locate the cache directory using the dataset's original name, which includes uppercase letters. This di...

#6762 (pull request, closed) Allow polars as valid output type
  author: psmyth94 | comments: 3 | labels: []
  created: 2024-03-28T13:40:28 | updated: 2024-08-16T15:54:37 | closed: 2024-08-16T13:10:37
  url: https://github.com/huggingface/datasets/pull/6762
  api: https://api.github.com/repos/huggingface/datasets/issues/6762 | id: 2,213,275,468
  body: I was trying out polars as an output for a map function and found that it wasn't a valid return type in `validate_function_output`. Thought that we should accommodate this by creating and adding it to the `allowed_processed_input_types` variable.

#6761 (pull request, closed) Remove deprecated code
  author: Wauplin | comments: 5 | labels: []
  created: 2024-03-28T09:57:57 | updated: 2024-03-29T13:27:26 | closed: 2024-03-29T13:18:13
  url: https://github.com/huggingface/datasets/pull/6761
  api: https://api.github.com/repos/huggingface/datasets/issues/6761 | id: 2,212,805,108
  body: What does this PR do? 1. remove `list_files_info` in favor of `list_repo_tree`. As of `0.23`, `list_files_info` will be removed for good. `datasets` had a utility to support both pre-0.20 and post-0.20 versions. Since `hfh` version is already pinned to `>=0.21.2`, I removed the legacy part. 2. `preupload_lfs_files` h...

#6760 (issue, open) Load codeparrot/apps raising UnicodeDecodeError in datasets-2.18.0
  author: yucc-leon | comments: 4 | labels: []
  created: 2024-03-28T03:44:26 | updated: 2024-06-19T07:06:40 | closed: null
  url: https://github.com/huggingface/datasets/issues/6760
  api: https://api.github.com/repos/huggingface/datasets/issues/6760 | id: 2,212,288,122
  body: ### Describe the bug This happens with datasets-2.18.0; I downgraded the version to 2.14.6 fixing this temporarily. ``` Traceback (most recent call last): File "/home/xxx/miniconda3/envs/py310/lib/python3.10/site-packages/datasets/load.py", line 2556, in load_dataset builder_instance = load_dataset_builder...

#6759 (issue, open) Persistent multi-process Pool
  author: fostiropoulos | comments: 0 | labels: ["enhancement"]
  created: 2024-03-26T17:35:25 | updated: 2024-03-26T17:35:25 | closed: null
  url: https://github.com/huggingface/datasets/issues/6759
  api: https://api.github.com/repos/huggingface/datasets/issues/6759 | id: 2,208,892,891
  body: ### Feature request Running .map and filter functions with `num_procs` consecutively instantiates several multiprocessing pools iteratively. As instantiating a Pool is very resource intensive it can be a bottleneck to performing iteratively filtering. My ideas: 1. There should be an option to declare `persist...
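The pattern requested in #6759 is to pay the pool startup cost once and reuse the workers across consecutive map/filter passes. A minimal sketch of that shape using the stdlib (threads here purely for illustration; the issue is about process pools):

```python
from concurrent.futures import ThreadPoolExecutor

def square(x):
    return x * x

def is_even(x):
    return x % 2 == 0

data = list(range(10))

# One persistent executor reused for a map pass and then a filter pass,
# instead of constructing a fresh pool for each operation.
with ThreadPoolExecutor(max_workers=4) as pool:
    mapped = list(pool.map(square, data))
    kept = [x for x, ok in zip(mapped, pool.map(is_even, mapped)) if ok]

assert mapped == [0, 1, 4, 9, 16, 25, 36, 49, 64, 81]
assert kept == [0, 4, 16, 36, 64]
```

With `ProcessPoolExecutor` the same reuse avoids repeated worker spawn and inter-process setup, which is the cost the issue describes.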
#6758 (issue, closed) Passing `sample_by` to `load_dataset` when loading text data does not work
  author: ntoxeg | comments: 1 | labels: []
  created: 2024-03-26T14:55:33 | updated: 2024-04-09T11:27:59 | closed: 2024-04-09T11:27:59
  url: https://github.com/huggingface/datasets/issues/6758
  api: https://api.github.com/repos/huggingface/datasets/issues/6758 | id: 2,208,494,302
  body: ### Describe the bug I have a dataset that consists of a bunch of text files, each representing an example. There is an undocumented `sample_by` argument for the `TextConfig` class that is used by `Text` to decide whether to split files into lines, paragraphs or take them whole. Passing `sample_by=“document”` to `load...
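The three `sample_by` modes described in #6758 (line, paragraph, document) can be illustrated with a small stdlib function; this is a sketch of the semantics, not the real `TextConfig` implementation:

```python
def split_text(text, sample_by="line"):
    """Sketch of the three text-sampling modes: one example per line,
    per blank-line-separated paragraph, or per whole document."""
    if sample_by == "line":
        return [line for line in text.splitlines() if line]
    if sample_by == "paragraph":
        return [p for p in text.split("\n\n") if p.strip()]
    if sample_by == "document":
        return [text]
    raise ValueError(f"unknown sample_by: {sample_by!r}")

doc = "first line\nsecond line\n\nnew paragraph"
assert split_text(doc, "line") == ["first line", "second line", "new paragraph"]
assert split_text(doc, "paragraph") == ["first line\nsecond line", "new paragraph"]
assert split_text(doc, "document") == [doc]
```

The bug was that the chosen mode was not forwarded from `load_dataset` to the text builder, so files were always split the default way.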
#6757 (pull request, open) Test disabling transformers containers in docs CI
  author: Wauplin | comments: 3 | labels: []
  created: 2024-03-25T17:16:11 | updated: 2024-03-27T16:26:35 | closed: null
  url: https://github.com/huggingface/datasets/pull/6757
  api: https://api.github.com/repos/huggingface/datasets/issues/6757 | id: 2,206,280,340
  body: Related to https://github.com/huggingface/doc-builder/pull/487 and [internal slack thread](https://huggingface.slack.com/archives/C04F8N7FQNL/p1711384899462349?thread_ts=1711041424.720769&cid=C04F8N7FQNL). There is now a `custom_container` option when building docs in CI. When set to `""` (instead of `"huggingface/tran...

#6756 (issue, closed) Support SQLite files?
  author: severo | comments: 3 | labels: ["enhancement"]
  created: 2024-03-25T11:48:05 | updated: 2024-03-26T16:09:32 | closed: 2024-03-26T16:09:32
  url: https://github.com/huggingface/datasets/issues/6756
  api: https://api.github.com/repos/huggingface/datasets/issues/6756 | id: 2,205,557,725
  body: ### Feature request Support loading a dataset from a SQLite file https://huggingface.co/datasets/severo/test_iris_sqlite/tree/main ### Motivation SQLite is a popular file format. ### Your contribution See discussion on slack: https://huggingface.slack.com/archives/C04L6P8KNQ5/p1702481859117909 (internal) In ...

#6755 (issue, closed) Small typo on the documentation
  author: fostiropoulos | comments: 3 | labels: ["good first issue"]
  created: 2024-03-24T21:47:52 | updated: 2024-04-02T14:01:19 | closed: 2024-04-02T14:01:19
  url: https://github.com/huggingface/datasets/issues/6755
  api: https://api.github.com/repos/huggingface/datasets/issues/6755 | id: 2,204,573,289
  body: ### Describe the bug There is a small typo on https://github.com/huggingface/datasets/blob/d5468836fe94e8be1ae093397dd43d4a2503b926/src/datasets/dataset_dict.py#L938 It should be `caching is enabled`. ### Steps to reproduce the bug Please visit https://github.com/huggingface/datasets/blob/d5468836fe94e...

#6754 (pull request, closed) Fix cache path to snakecase for `CachedDatasetModuleFactory` and `Cache`
  author: izhx | comments: 6 | labels: []
  created: 2024-03-24T06:59:15 | updated: 2024-04-15T15:45:44 | closed: 2024-04-15T15:38:51
  url: https://github.com/huggingface/datasets/pull/6754
  api: https://api.github.com/repos/huggingface/datasets/issues/6754 | id: 2,204,214,595
  body: Fix https://github.com/huggingface/datasets/issues/6750#issuecomment-2016678729 I didn't find a guideline on how to run the tests, so i just run the following steps to make sure that this bug is fixed. 1. `python test.py`, 2. then `HF_DATASETS_OFFLINE=1 python test.py` The `test.py` is ``` import datasets ...
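The class of bug fixed in #6754 (and the related cache-case issues above) comes from normalizing a dataset name when writing the cache but not when reading it. A sketch of a camelCase-to-snake_case helper of the kind involved (an illustration, not the library's exact code):

```python
import re

def camelcase_to_snakecase(name):
    """Normalize a dataset name the way cache paths are normalized (sketch)."""
    name = re.sub(r"(.)([A-Z][a-z]+)", r"\1_\2", name)
    name = re.sub(r"([a-z0-9])([A-Z])", r"\1_\2", name)
    return name.lower()

# If the cache is written under the normalized name but looked up under the
# original name, the lookup misses and the dataset is rebuilt/redownloaded:
assert camelcase_to_snakecase("MyDataset") == "my_dataset"
assert camelcase_to_snakecase("MyDataset") != "MyDataset"
```

The fix is simply to apply the same normalization on both the write path and the lookup path.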
#6753 (issue, closed) Type error when importing datasets on Kaggle
  author: jtv199 | comments: 8 | labels: []
  created: 2024-03-24T03:01:30 | updated: 2024-10-02T11:49:35 | closed: 2024-03-30T00:23:49
  url: https://github.com/huggingface/datasets/issues/6753
  api: https://api.github.com/repos/huggingface/datasets/issues/6753 | id: 2,204,155,091
  body: ### Describe the bug When trying to run ``` import datasets print(datasets.__version__) ``` It generates the following error ``` TypeError: expected string or bytes-like object ``` It looks like It cannot find the valid versions of `fsspec` though fsspec version is fine when I checked Via command ...

#6752 (issue, open) Precision being changed from float16 to float32 unexpectedly
  author: gcervantes8 | comments: 1 | labels: []
  created: 2024-03-23T20:53:56 | updated: 2024-04-10T15:21:33 | closed: null
  url: https://github.com/huggingface/datasets/issues/6752
  api: https://api.github.com/repos/huggingface/datasets/issues/6752 | id: 2,204,043,839
  body: ### Describe the bug I'm loading a HuggingFace Dataset for images. I'm running a preprocessing (map operation) step that runs a few operations, one of them being conversion to float16. The Dataset features also say that the 'img' is of type float16. Whenever I take an image from that HuggingFace Dataset instance...

#6751 (pull request, closed) Use 'with' operator for some download functions
  author: Moisan | comments: 2 | labels: []
  created: 2024-03-23T16:32:08 | updated: 2024-03-26T00:40:57 | closed: 2024-03-26T00:40:57
  url: https://github.com/huggingface/datasets/pull/6751
  api: https://api.github.com/repos/huggingface/datasets/issues/6751 | id: 2,203,951,501
  body: Some functions in `streaming_download_manager.py` are not closing the file they open which lead to `Unclosed file` warnings in our code. This fixes a few of them.

#6750 (issue, closed) `load_dataset` requires a network connection for local download?
  author: MiroFurtado | comments: 3 | labels: []
  created: 2024-03-23T01:06:32 | updated: 2024-04-15T15:38:52 | closed: 2024-04-15T15:38:52
  url: https://github.com/huggingface/datasets/issues/6750
  api: https://api.github.com/repos/huggingface/datasets/issues/6750 | id: 2,203,590,658
  body: ### Describe the bug Hi all - I see that in the past a network dependency has been mistakenly introduced into `load_dataset` even for local loads. Is it possible this has happened again? ### Steps to reproduce the bug ``` >>> import datasets >>> datasets.load_dataset("hh-rlhf") Repo card metadata block was not ...

#6749 (pull request, closed) Fix fsspec tqdm callback
  author: lhoestq | comments: 2 | labels: []
  created: 2024-03-22T11:44:11 | updated: 2024-03-22T14:51:45 | closed: 2024-03-22T14:45:39
  url: https://github.com/huggingface/datasets/pull/6749
  api: https://api.github.com/repos/huggingface/datasets/issues/6749 | id: 2,202,310,116
  body: Following changes at https://github.com/fsspec/filesystem_spec/pull/1497 for `fsspec>=2024.2.0`

#6748 (issue, open) Strange slicing behavior
  author: Luciennnnnnn | comments: 1 | labels: []
  created: 2024-03-22T01:49:13 | updated: 2024-03-22T16:43:57 | closed: null
  url: https://github.com/huggingface/datasets/issues/6748
  api: https://api.github.com/repos/huggingface/datasets/issues/6748 | id: 2,201,517,348
  body: ### Describe the bug I have loaded a dataset, and then slice first 300 samples using `:` ops, however, the resulting dataset is not expected, as the output below: ```bash len(dataset)=1050324 len(dataset[:300])=2 len(dataset[0:300])=2 len(dataset.select(range(300)))=300 ``` ### Steps to reproduce the bug loa...
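The surprise in #6748 follows from slicing a `Dataset` returning a dict mapping column names to lists, so `len()` counts columns (2 here), while `.select()` returns a new dataset of rows. A toy stand-in (a sketch, not the real `datasets.Dataset` class) makes the two code paths concrete:

```python
class ToyDataset:
    """Toy stand-in for datasets.Dataset (illustrative sketch only)."""

    def __init__(self, columns):
        self.columns = columns  # {"column_name": [values]}

    def __len__(self):
        # Number of rows.
        return len(next(iter(self.columns.values())))

    def __getitem__(self, key):
        # Mirrors the reported behavior: a slice yields a dict of columns,
        # so len(ds[:300]) counts columns, not rows.
        return {name: vals[key] for name, vals in self.columns.items()}

    def select(self, indices):
        # Returns a new dataset restricted to the given rows.
        idx = list(indices)
        return ToyDataset({n: [v[i] for i in idx] for n, v in self.columns.items()})

ds = ToyDataset({"text": list("abcdefgh"), "label": [0] * 8})
assert len(ds) == 8
assert len(ds[0:3]) == 2              # dict with 2 keys: the "strange" result
assert len(ds.select(range(3))) == 3  # a real 3-row subset
```

So `select(range(300))` is the intended way to take the first 300 rows; the slice syntax is a column-dict accessor, not a subset operation.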
#6747 (pull request, closed) chore(deps): bump fsspec
  author: shcheklein | comments: 2 | labels: []
  created: 2024-03-21T21:25:49 | updated: 2024-03-22T16:40:15 | closed: 2024-03-22T16:28:40
  url: https://github.com/huggingface/datasets/pull/6747
  api: https://api.github.com/repos/huggingface/datasets/issues/6747 | id: 2,201,219,384
  body: There were a few fixes released recently, some DVC ecosystem packages require newer version of `fsspec`.

#6746 (issue, closed) ExpectedMoreSplits error when loading C4 dataset
  author: billwang485 | comments: 8 | labels: []
  created: 2024-03-21T02:53:04 | updated: 2024-09-18T19:57:14 | closed: 2024-07-29T07:21:08
  url: https://github.com/huggingface/datasets/issues/6746
  api: https://api.github.com/repos/huggingface/datasets/issues/6746 | id: 2,198,993,949
  body: ### Describe the bug I encounter bug when running the example command line ```python python main.py \ --model decapoda-research/llama-7b-hf \ --prune_method wanda \ --sparsity_ratio 0.5 \ --sparsity_type unstructured \ --save out/llama_7b/unstructured/wanda/ ``` The bug occurred ...

#6745 (issue, closed) Scraping the whole of github including private repos is bad; kindly stop
  author: ghost | comments: 1 | labels: ["enhancement"]
  created: 2024-03-20T20:54:06 | updated: 2024-03-21T12:28:04 | closed: 2024-03-21T10:24:56
  url: https://github.com/huggingface/datasets/issues/6745
  api: https://api.github.com/repos/huggingface/datasets/issues/6745 | id: 2,198,541,732
  body: ### Feature request https://github.com/bigcode-project/opt-out-v2 - opt out is not consent. kindly quit this ridiculous nonsense. ### Motivation [EDITED: insults not tolerated] ### Your contribution [EDITED: insults not tolerated]

#6744 (issue, open) Option to disable file locking
  author: VRehnberg | comments: 0 | labels: ["enhancement"]
  created: 2024-03-20T15:59:45 | updated: 2024-03-20T15:59:45 | closed: null
  url: https://github.com/huggingface/datasets/issues/6744
  api: https://api.github.com/repos/huggingface/datasets/issues/6744 | id: 2,197,910,168
  body: ### Feature request Commands such as `load_dataset` creates file locks with `filelock.FileLock`. It would be good if there was a way to disable this. ### Motivation File locking doesn't work on all file-systems (in my case NFS mounted Weka). If the `cache_dir` only had small files then it would be possible to point ...
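One way the opt-out requested in #6744 could look (a hypothetical sketch, not an API that exists in `datasets` or `filelock`) is a no-op lock object with the same context-manager surface as `filelock.FileLock`, swapped in when locking is disabled:

```python
class NullFileLock:
    """Hypothetical no-op drop-in for filelock.FileLock, for filesystems
    where OS-level locks fail (e.g. some NFS mounts)."""

    def __init__(self, path, timeout=-1):
        self.path = path  # kept for interface compatibility; never locked

    def __enter__(self):
        return self

    def __exit__(self, *exc):
        return False  # do not swallow exceptions

# Usage mirrors filelock.FileLock: the critical section simply runs unlocked.
with NullFileLock("/tmp/cache.lock") as lock:
    pass
```

The trade-off is explicit: callers opting in accept that concurrent writers to the same cache directory are no longer serialized.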
#6743 (pull request, closed) Allow null values in dict columns
  author: mariosasko | comments: 3 | labels: []
  created: 2024-03-19T16:54:22 | updated: 2024-04-08T13:08:42 | closed: 2024-03-19T20:05:19
  url: https://github.com/huggingface/datasets/pull/6743
  api: https://api.github.com/repos/huggingface/datasets/issues/6743 | id: 2,195,481,697
  body: Fix #6738

#6742 (pull request, closed) Fix missing download_config in get_data_patterns
  author: lhoestq | comments: 2 | labels: []
  created: 2024-03-19T14:29:25 | updated: 2024-03-19T18:24:39 | closed: 2024-03-19T18:15:13
  url: https://github.com/huggingface/datasets/pull/6742
  api: https://api.github.com/repos/huggingface/datasets/issues/6742 | id: 2,195,134,854
  body: Reported in https://github.com/huggingface/datasets-server/issues/2607

#6741 (pull request, closed) Fix offline mode with single config
  author: lhoestq | comments: 2 | labels: []
  created: 2024-03-19T10:48:32 | updated: 2024-03-25T16:35:21 | closed: 2024-03-25T16:23:59
  url: https://github.com/huggingface/datasets/pull/6741
  api: https://api.github.com/repos/huggingface/datasets/issues/6741 | id: 2,194,626,108
  body: Reported in https://github.com/huggingface/datasets/issues/4760 The cache was not able to reload a dataset with a single config from the cache if the config name is not specified For example ```python from datasets import load_dataset, config config.HF_DATASETS_OFFLINE = True load_dataset("openai_human...

#6740 (issue, closed) Support for loading geotiff files as a part of the ImageFolder
  author: sunny1401 | comments: 0 | labels: ["enhancement"]
  created: 2024-03-18T20:00:39 | updated: 2024-03-27T18:19:48 | closed: 2024-03-27T18:19:20
  url: https://github.com/huggingface/datasets/issues/6740
  api: https://api.github.com/repos/huggingface/datasets/issues/6740 | id: 2,193,172,074
  body: ### Feature request Request for adding rasterio support to load geotiff as a part of ImageFolder, instead of using PIL ### Motivation As of now, there are many datasets in HuggingFace Hub which are predominantly focussed towards RemoteSensing or are from RemoteSensing. The current ImageFolder (if I have understood c...

#6739 (pull request, closed) Transpose images with EXIF Orientation tag
  author: mariosasko | comments: 3 | labels: []
  created: 2024-03-18T16:43:06 | updated: 2025-07-03T11:33:18 | closed: 2024-03-19T15:29:42
  url: https://github.com/huggingface/datasets/pull/6739
  api: https://api.github.com/repos/huggingface/datasets/issues/6739 | id: 2,192,730,134
  body: Closes https://github.com/huggingface/datasets/issues/6252

#6738 (issue, closed) Dict feature is non-nullable while nested dict feature is
  author: polinaeterna | comments: 3 | labels: ["bug"]
  created: 2024-03-18T14:31:47 | updated: 2024-03-20T10:24:15 | closed: 2024-03-19T20:05:20
  url: https://github.com/huggingface/datasets/issues/6738
  api: https://api.github.com/repos/huggingface/datasets/issues/6738 | id: 2,192,386,536
  body: When i try to create a `Dataset` object with None values inside a dict column, like this: ```python from datasets import Dataset, Features, Value Dataset.from_dict( { "dict": [{"a": 0, "b": 0}, None], }, features=Features( {"dict": {"a": Value("int16"), "b": Value("int16")}} ) ) ...

#6737 (issue, closed) Invalid pattern: '**' can only be an entire path component
  author: JPonsa | comments: 7 | labels: []
  created: 2024-03-16T19:28:46 | updated: 2024-07-23T14:23:28 | closed: 2024-05-13T11:32:57
  url: https://github.com/huggingface/datasets/issues/6737
  api: https://api.github.com/repos/huggingface/datasets/issues/6737 | id: 2,190,198,425
  body: ### Describe the bug ValueError: Invalid pattern: '**' can only be an entire path component when loading any dataset ### Steps to reproduce the bug import datasets ds = datasets.load_dataset("TokenBender/code_instructions_122k_alpaca_style") ### Expected behavior loading the dataset successfully ### Environm...

#6736 (issue, open) Mosaic Streaming (MDS) Support
  author: siddk | comments: 1 | labels: ["enhancement"]
  created: 2024-03-16T18:42:04 | updated: 2024-03-18T15:13:34 | closed: null
  url: https://github.com/huggingface/datasets/issues/6736
  api: https://api.github.com/repos/huggingface/datasets/issues/6736 | id: 2,190,181,422
  body: ### Feature request I'm a huge fan of the current HF Datasets `webdataset` integration (especially the built-in streaming support). However, I'd love to upload some robotics and multimodal datasets I've processed for use with [Mosaic Streaming](https://docs.mosaicml.com/projects/streaming/en/stable/), specifically the...

#6735 (pull request, closed) Add `mode` parameter to `Image` feature
  author: mariosasko | comments: 2 | labels: []
  created: 2024-03-15T17:21:12 | updated: 2024-03-18T15:47:48 | closed: 2024-03-18T15:41:33
  url: https://github.com/huggingface/datasets/pull/6735
  api: https://api.github.com/repos/huggingface/datasets/issues/6735 | id: 2,189,132,932
  body: Fix https://github.com/huggingface/datasets/issues/6675

#6734 (issue, open) Tokenization slows towards end of dataset
  author: ethansmith2000 | comments: 4 | labels: []
  created: 2024-03-15T03:27:36 | updated: 2025-02-20T17:40:54 | closed: null
  url: https://github.com/huggingface/datasets/issues/6734
  api: https://api.github.com/repos/huggingface/datasets/issues/6734 | id: 2,187,646,694
  body: ### Describe the bug Mapped tokenization slows down substantially towards end of dataset. train set started off very slow, caught up to 20k then tapered off til the end. what's particularly strange is that the tokenization crashed a few times before due to errors with invalid tokens somewhere or corrupted down...

#6733 (issue, open) EmptyDatasetError when loading dataset downloaded with HuggingFace cli
  author: StwayneXG | comments: 1 | labels: []
  created: 2024-03-14T16:41:27 | updated: 2024-03-15T18:09:02 | closed: null
  url: https://github.com/huggingface/datasets/issues/6733
  api: https://api.github.com/repos/huggingface/datasets/issues/6733 | id: 2,186,811,724
  body: ### Describe the bug I am using a cluster that does not have access to the internet when given a job. I tried downloading the dataset using the huggingface-cli command and then loading it with load_dataset but I get an error: ```raise EmptyDatasetError(f"The directory at {base_path} doesn't contain any data files...

#6731 (issue, closed) Unexpected behavior when using load_dataset with streaming=True in a for loop
  author: uApiv | comments: 2 | labels: []
  created: 2024-03-12T23:26:43 | updated: 2024-04-16T00:00:00 | closed: 2024-04-16T00:00:00
  url: https://github.com/huggingface/datasets/issues/6731
  api: https://api.github.com/repos/huggingface/datasets/issues/6731 | id: 2,182,844,673
  body: ### Describe the bug ### My Code ``` from datasets import load_dataset res=[] for i in [0,1]: di=load_dataset( "json", data_files='path_to.json', split='train', streaming=True, ).map(lambda x: {"source": i}) res.append(di) for e in res[...
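The behavior in #6731 is the classic late-binding closure problem: with `streaming=True` the `map` is lazy, so `lambda x: {"source": i}` only reads `i` when the stream is iterated, after the loop has finished. Plain Python's lazy `map` reproduces it, along with the usual default-argument fix:

```python
# Lazy map objects, like streaming datasets: the lambda body runs at
# iteration time, after the loop has finished, so every pipeline sees
# the final value of i.
lazy = []
for i in [0, 1]:
    lazy.append(map(lambda x: {"source": i}, [{"a": 1}]))
late_bound = [next(m)["source"] for m in lazy]

# Fix: bind the current loop value eagerly via a default argument.
lazy = []
for i in [0, 1]:
    lazy.append(map(lambda x, i=i: {"source": i}, [{"a": 1}]))
eager_bound = [next(m)["source"] for m in lazy]

assert late_bound == [1, 1]   # the surprising result from the issue
assert eager_bound == [0, 1]  # each pipeline keeps its own i
```

The same `lambda x, i=i: ...` (or `functools.partial`) fix applies directly to the streaming `map` call in the issue's reproduction.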
#6730 (pull request, closed) Deprecate Pandas builder
  author: mariosasko | comments: 2 | labels: []
  created: 2024-03-12T15:12:13 | updated: 2024-03-12T17:42:33 | closed: 2024-03-12T17:36:24
  url: https://github.com/huggingface/datasets/pull/6730
  api: https://api.github.com/repos/huggingface/datasets/issues/6730 | id: 2,181,881,499
  body: The Pandas packaged builder is undocumented and relies on `pickle` to read the data, making it **unsafe**. Moreover, I haven't seen a single instance of this builder being used (not even using the GH/Hub search), so we should deprecate it.

#6729 (issue, closed) Support zipfiles that span multiple disks?
  author: severo | comments: 6 | labels: ["enhancement", "question"]
  created: 2024-03-11T21:07:41 | updated: 2024-06-26T05:08:59 | closed: 2024-06-26T05:05:28
  url: https://github.com/huggingface/datasets/issues/6729
  api: https://api.github.com/repos/huggingface/datasets/issues/6729 | id: 2,180,237,159
  body: See https://huggingface.co/datasets/PhilEO-community/PhilEO-downstream The dataset viewer gives the following error: ``` Error code: ConfigNamesError Exception: BadZipFile Message: zipfiles that span multiple disks are not supported Traceback: Traceback (most recent call last): F...

#6728 (issue, closed) Issue Downloading Certain Datasets After Setting Custom `HF_ENDPOINT`
  author: padeoe | comments: 3 | labels: []
  created: 2024-03-11T09:06:38 | updated: 2024-03-15T14:52:07 | closed: 2024-03-15T14:52:07
  url: https://github.com/huggingface/datasets/issues/6728
  api: https://api.github.com/repos/huggingface/datasets/issues/6728 | id: 2,178,607,012
  body: ### Describe the bug This bug is triggered under the following conditions: - datasets repo ids without organization names trigger errors, such as `bookcorpus`, `gsm8k`, `wikipedia`, rather than in the form of `A/B`. - If `HF_ENDPOINT` is set and the hostname is not in the form of `(hub-ci.)?huggingface.co`. - T...

#6727 (pull request, closed) Using a registry instead of calling globals for fetching feature types
  author: psmyth94 | comments: 6 | labels: []
  created: 2024-03-10T17:47:51 | updated: 2024-03-13T12:08:49 | closed: 2024-03-13T10:46:02
  url: https://github.com/huggingface/datasets/pull/6727
  api: https://api.github.com/repos/huggingface/datasets/issues/6727 | id: 2,177,826,110
  body: Hello, When working with bio-data, each feature often has metadata associated with it (e.g. species, lineage, snp position, etc). To store this, I like to use the feature classes with the added `metadata` attribute. However, when saving or loading with custom features, you get an error since that class doesn't exist...

#6726 (issue, open) Profiling for HF Filesystem shows there are easy performance gains to be made
  author: awgr | comments: 2 | labels: []
  created: 2024-03-09T07:08:45 | updated: 2024-03-09T07:11:08 | closed: null
  url: https://github.com/huggingface/datasets/issues/6726
  api: https://api.github.com/repos/huggingface/datasets/issues/6726 | id: 2,177,097,232
  body: ### Describe the bug # Let's make it faster First, an evidence... ![image](https://github.com/huggingface/datasets/assets/159512661/a703a82c-43a0-426c-9d99-24c563d70965) Figure 1: CProfile for loading 3 files from cerebras/SlimPajama-627B train split, and 3 files from test split using streaming=True. X axis is 1106...

#6725 (issue, open) Request for a comparison of huggingface datasets compared with other data format especially webdataset
  author: Luciennnnnnn | comments: 0 | labels: ["enhancement"]
  created: 2024-03-08T08:23:01 | updated: 2024-03-08T08:23:01 | closed: null
  url: https://github.com/huggingface/datasets/issues/6725
  api: https://api.github.com/repos/huggingface/datasets/issues/6725 | id: 2,175,527,530
  body: ### Feature request Request for a comparison of huggingface datasets compared with other data format especially webdataset ### Motivation I see huggingface datasets uses Apache Arrow as its backend, it seems to be great, but I'm curious about how it is good compared with other dataset format, like webdataset, what's...

#6724 (issue, open) Dataset with loading script does not work in renamed repos
  author: BramVanroy | comments: 0 | labels: []
  created: 2024-03-07T17:38:38 | updated: 2024-03-07T20:06:25 | closed: null
  url: https://github.com/huggingface/datasets/issues/6724
  api: https://api.github.com/repos/huggingface/datasets/issues/6724 | id: 2,174,398,227
  body: ### Describe the bug My data repository was first called `BramVanroy/hplt-mono-v1-2` but I then renamed to use underscores instead of dashes. However, it seems that `datasets` retrieves the old repo name when it checks whether the repo contains data loading scripts in this line. https://github.com/huggingface/dat...

#6723 (pull request, closed) get_dataset_default_config_name docstring
  author: lhoestq | comments: 2 | labels: []
  created: 2024-03-07T17:09:29 | updated: 2024-03-07T17:27:29 | closed: 2024-03-07T17:21:20
  url: https://github.com/huggingface/datasets/pull/6723
  api: https://api.github.com/repos/huggingface/datasets/issues/6723 | id: 2,174,344,456
  body: fix https://github.com/huggingface/datasets/pull/6722

#6722 (pull request, closed) Add details in docstring
  author: severo | comments: 1 | labels: []
  created: 2024-03-07T17:02:07 | updated: 2024-03-07T17:21:10 | closed: 2024-03-07T17:21:08
  url: https://github.com/huggingface/datasets/pull/6722
  api: https://api.github.com/repos/huggingface/datasets/issues/6722 | id: 2,174,332,127
  body: see https://github.com/huggingface/datasets-server/pull/2554#discussion_r1516516867
2,173,931,714
https://api.github.com/repos/huggingface/datasets/issues/6721
https://github.com/huggingface/datasets/issues/6721
6,721
Hi,do you know how to load the dataset from local file now?
open
3
2024-03-07T13:58:40
2024-03-31T08:09:25
null
Gera001
[]
Hi, if I want to load the dataset from local file, then how to specify the configuration name? _Originally posted by @WHU-gentle in https://github.com/huggingface/datasets/issues/2976#issuecomment-1333455222_
false
2,173,603,459
https://api.github.com/repos/huggingface/datasets/issues/6720
https://github.com/huggingface/datasets/issues/6720
6,720
TypeError: 'str' object is not callable
closed
2
2024-03-07T11:07:09
2024-03-08T07:34:53
2024-03-07T15:13:58
BramVanroy
[]
### Describe the bug I am trying to get the HPLT datasets on the hub. Downloading/re-uploading would be too time- and resource consuming so I wrote [a dataset loader script](https://huggingface.co/datasets/BramVanroy/hplt_mono_v1_2/blob/main/hplt_mono_v1_2.py). I think I am very close but for some reason I always get ...
false
2,169,585,727
https://api.github.com/repos/huggingface/datasets/issues/6719
https://github.com/huggingface/datasets/issues/6719
6,719
Is there any way to solve hanging of IterableDataset using split by node + filtering during inference
open
0
2024-03-05T15:55:13
2024-03-05T15:55:13
null
ssharpe42
[]
### Describe the bug I am using an iterable dataset in a multi-node setup, trying to do training/inference while filtering the data on the fly. I usually do not use `split_dataset_by_node` but it is very slow using the IterableDatasetShard in `accelerate` and `transformers`. When I filter after applying `split_dataset...
false
2,169,468,488
https://api.github.com/repos/huggingface/datasets/issues/6718
https://github.com/huggingface/datasets/pull/6718
6,718
Fix concurrent script loading with force_redownload
closed
2
2024-03-05T15:04:20
2024-03-07T14:05:53
2024-03-07T13:58:04
lhoestq
[]
I added `lock_importable_file` in `get_dataset_builder_class` and `extend_dataset_builder_for_streaming` to fix the issue, and I also added a test cc @clefourrier
true
2,168,726,432
https://api.github.com/repos/huggingface/datasets/issues/6717
https://github.com/huggingface/datasets/issues/6717
6,717
`remove_columns` method used with a streaming enable dataset mode produces a LibsndfileError on multichannel audio
open
2
2024-03-05T09:33:26
2024-08-14T17:54:20
null
jhauret
[]
### Describe the bug When loading a HF dataset in streaming mode and removing some columns, it is impossible to load a sample if the audio contains more than one channel. I have the impression that the time axis and channels are swapped or concatenated. ### Steps to reproduce the bug Minimal error code: ```python ...
false
2,168,706,558
https://api.github.com/repos/huggingface/datasets/issues/6716
https://github.com/huggingface/datasets/issues/6716
6,716
Non-deterministic `Dataset.builder_name` value
closed
6
2024-03-05T09:23:21
2024-03-19T07:58:14
2024-03-19T07:58:14
harupy
[]
### Describe the bug I'm not sure if this is a bug, but `print(ds.builder_name)` in the following code sometimes prints out `rotten_tomatoes` instead of `parquet`: ```python import datasets for _ in range(100): ds = datasets.load_dataset("rotten_tomatoes", split="train") print(ds.builder_name) # pr...
false
2,167,747,095
https://api.github.com/repos/huggingface/datasets/issues/6715
https://github.com/huggingface/datasets/pull/6715
6,715
Fix sliced ConcatenationTable pickling with mixed schemas vertically
closed
2
2024-03-04T21:02:07
2024-03-05T11:23:05
2024-03-05T11:17:04
lhoestq
[]
A sliced + pickled ConcatenationTable could end up with a different schema than the original schema, if the slice only contains blocks with only a subset of the columns. This can lead to issues when saving datasets from a concatenation of datasets with mixed schemas Reported in https://discuss.huggingface.co/t/da...
true
2,167,569,080
https://api.github.com/repos/huggingface/datasets/issues/6714
https://github.com/huggingface/datasets/pull/6714
6,714
Expand no-code dataset info with datasets-server info
closed
2
2024-03-04T19:18:10
2024-03-04T20:28:30
2024-03-04T20:22:15
mariosasko
[]
E.g., to have info about a dataset's number of examples for more informative TQDM bars.
true
2,166,797,560
https://api.github.com/repos/huggingface/datasets/issues/6713
https://github.com/huggingface/datasets/pull/6713
6,713
Bump huggingface-hub lower version to 0.21.2
closed
4
2024-03-04T13:00:52
2024-03-04T18:14:03
2024-03-04T18:06:05
albertvillanova
[]
This should fix the version compatibility issue when using `huggingface_hub` < 0.21.2 and latest fsspec (>=2023.12.0). See my comment: https://github.com/huggingface/datasets/pull/6687#issuecomment-1976493336 >> EDIT: the fix has been released in `huggingface_hub` 0.21.2 - I removed my commits that were using `hugg...
true
2,166,588,373
https://api.github.com/repos/huggingface/datasets/issues/6712
https://github.com/huggingface/datasets/pull/6712
6,712
fix CastError pickling
closed
2
2024-03-04T11:14:18
2024-03-04T20:23:47
2024-03-04T20:17:17
lhoestq
[]
reported in https://discuss.huggingface.co/t/datasetdict-save-to-disk-with-num-proc-1-seems-to-hang-with-error/75595
true
2,165,507,817
https://api.github.com/repos/huggingface/datasets/issues/6711
https://github.com/huggingface/datasets/pull/6711
6,711
3x Faster Text Preprocessing
open
3
2024-03-03T19:03:04
2024-06-26T06:28:14
null
ashvardanian
[]
I was preparing some datasets for AI training and noticed that `datasets` by HuggingFace uses the conventional `open` mechanism to read the file and split it into chunks. I thought it can be significantly accelerated, and [started with a benchmark](https://gist.github.com/ashvardanian/55c2052e9f78b05b8d614aa90cb12347):...
true
2,164,781,564
https://api.github.com/repos/huggingface/datasets/issues/6710
https://github.com/huggingface/datasets/pull/6710
6,710
Persist IterableDataset epoch in workers
closed
2
2024-03-02T12:08:50
2024-07-01T17:51:25
2024-07-01T17:45:30
lhoestq
[]
Use shared memory for the IterableDataset epoch. This way calling `ds.set_epoch()` in the main process will update the epoch in the DataLoader workers as well. This is useful especially because the epoch is used to compute the `effective_seed` used for shuffling. I used torch's shared memory in case users want t...
true
2,164,169,913
https://api.github.com/repos/huggingface/datasets/issues/6709
https://github.com/huggingface/datasets/pull/6709
6,709
set dev version
closed
2
2024-03-01T21:01:14
2024-03-01T21:07:35
2024-03-01T21:01:23
lhoestq
[]
null
true
2,164,158,579
https://api.github.com/repos/huggingface/datasets/issues/6708
https://github.com/huggingface/datasets/pull/6708
6,708
Release: 2.18.0
closed
2
2024-03-01T20:52:17
2024-03-01T21:03:01
2024-03-01T20:56:50
lhoestq
[]
null
true
2,163,799,868
https://api.github.com/repos/huggingface/datasets/issues/6707
https://github.com/huggingface/datasets/pull/6707
6,707
Silence ruff deprecation messages
closed
2
2024-03-01T16:52:29
2024-03-01T17:32:14
2024-03-01T17:25:46
mariosasko
[]
null
true
2,163,783,123
https://api.github.com/repos/huggingface/datasets/issues/6706
https://github.com/huggingface/datasets/pull/6706
6,706
Update ruff
closed
2
2024-03-01T16:44:58
2024-03-01T17:02:13
2024-03-01T16:52:17
lhoestq
[]
null
true
2,163,768,640
https://api.github.com/repos/huggingface/datasets/issues/6705
https://github.com/huggingface/datasets/pull/6705
6,705
Fix data_files when passing data_dir
closed
2
2024-03-01T16:38:53
2024-03-01T18:59:06
2024-03-01T18:52:49
lhoestq
[]
This code should not return empty data files ```python from datasets import load_dataset_builder revision = "3d406e70bc21c3ca92a9a229b4c6fc3ed88279fd" b = load_dataset_builder("bigcode/the-stack-v2-dedup", data_dir="data/Dockerfile", revision=revision) print(b.config.data_files) ``` Previously it would ret...
true
2,163,752,391
https://api.github.com/repos/huggingface/datasets/issues/6704
https://github.com/huggingface/datasets/pull/6704
6,704
Improve default patterns resolution
closed
11
2024-03-01T16:31:25
2024-04-23T09:43:09
2024-03-15T15:22:03
mariosasko
[]
Separate the default patterns that match directories from the ones matching files and ensure directories are checked first (reverts the change from https://github.com/huggingface/datasets/pull/6244, which merged these patterns). Also, ensure that the glob patterns do not overlap to avoid duplicates in the result. A...
true
2,163,250,590
https://api.github.com/repos/huggingface/datasets/issues/6703
https://github.com/huggingface/datasets/issues/6703
6,703
Unable to load dataset that was saved with `save_to_disk`
closed
8
2024-03-01T11:59:56
2024-03-04T13:46:20
2024-03-04T13:46:20
casper-hansen
[]
### Describe the bug I get the following error message: You are trying to load a dataset that was saved using `save_to_disk`. Please use `load_from_disk` instead. ### Steps to reproduce the bug 1. Save a dataset with `save_to_disk` 2. Try to load it with `load_datasets` ### Expected behavior I am ab...
false
2,161,938,484
https://api.github.com/repos/huggingface/datasets/issues/6702
https://github.com/huggingface/datasets/issues/6702
6,702
Push samples to dataset on hub without having the dataset locally
closed
2
2024-02-29T19:17:12
2024-03-08T21:08:38
2024-03-08T21:08:38
jbdel
[ "enhancement" ]
### Feature request Say I have the following code: ``` from datasets import Dataset import pandas as pd new_data = { "column_1": ["value1", "value2"], "column_2": ["value3", "value4"], } df_new = pd.DataFrame(new_data) dataset_new = Dataset.from_pandas(df_new) # add these samples to a remote datase...
false
2,161,448,017
https://api.github.com/repos/huggingface/datasets/issues/6701
https://github.com/huggingface/datasets/pull/6701
6,701
Base parquet batch_size on parquet row group size
closed
2
2024-02-29T14:53:01
2024-02-29T15:15:18
2024-02-29T15:08:55
lhoestq
[]
This allows to stream datasets like [Major-TOM/Core-S2L2A](https://huggingface.co/datasets/Major-TOM/Core-S2L2A) which have row groups with few rows (one row is ~10MB). Previously the cold start would take a lot of time and OOM because it would download many row groups before yielding the first example. I tried on O...
true
2,158,871,038
https://api.github.com/repos/huggingface/datasets/issues/6700
https://github.com/huggingface/datasets/issues/6700
6,700
remove_columns is not in-place but the doc shows it is in-place
closed
3
2024-02-28T12:36:22
2024-04-02T17:15:28
2024-04-02T17:15:28
shelfofclub
[]
### Describe the bug The doc of `datasets` v2.17.0/v2.17.1 shows that `remove_columns` is in-place. [link](https://huggingface.co/docs/datasets/v2.17.1/en/package_reference/main_classes#datasets.DatasetDict.remove_columns) In the text classification example of transformers v4.38.1, the columns are not removed. h...
false
2,158,152,341
https://api.github.com/repos/huggingface/datasets/issues/6699
https://github.com/huggingface/datasets/issues/6699
6,699
`Dataset` unexpected changed dict data and may cause error
open
2
2024-02-28T05:30:10
2024-02-28T19:14:36
null
scruel
[]
### Describe the bug Will unexpected get keys with `None` value in the parsed json dict. ### Steps to reproduce the bug ```jsonl test.jsonl {"id": 0, "indexs": {"-1": [0, 10]}} {"id": 1, "indexs": {"-1": [0, 10]}} ``` ```python dataset = Dataset.from_json('.test.jsonl') print(dataset[0]) ``` Result: ```...
false
2,157,752,392
https://api.github.com/repos/huggingface/datasets/issues/6698
https://github.com/huggingface/datasets/pull/6698
6,698
Faster `xlistdir`
closed
3
2024-02-27T22:55:08
2024-02-27T23:44:49
2024-02-27T23:38:14
mariosasko
[]
Pass `detail=False` to the `fsspec` `listdir` to avoid unnecessarily fetching expensive metadata about the paths.
true
2,157,322,224
https://api.github.com/repos/huggingface/datasets/issues/6697
https://github.com/huggingface/datasets/issues/6697
6,697
Unable to Load Dataset in Kaggle
closed
4
2024-02-27T18:19:34
2024-02-29T17:32:42
2024-02-29T17:32:41
vrunm
[]
### Describe the bug Having installed the latest versions of transformers==4.38.1 and datasets==2.17.1 Unable to load the dataset in a kaggle notebook. Get this Error: ``` --------------------------------------------------------------------------- ValueError Traceback (most recen...
false
2,154,161,357
https://api.github.com/repos/huggingface/datasets/issues/6696
https://github.com/huggingface/datasets/pull/6696
6,696
Make JSON builder support an array of strings
closed
2
2024-02-26T13:18:31
2024-02-28T06:45:23
2024-02-28T06:39:12
albertvillanova
[]
Support JSON file with an array of strings. Fix #6695.
true
2,154,075,509
https://api.github.com/repos/huggingface/datasets/issues/6695
https://github.com/huggingface/datasets/issues/6695
6,695
Support JSON file with an array of strings
closed
1
2024-02-26T12:35:11
2024-03-08T14:16:25
2024-02-28T06:39:13
albertvillanova
[ "enhancement" ]
Support loading a dataset from a JSON file with an array of strings. See: https://huggingface.co/datasets/CausalLM/Refined-Anime-Text/discussions/1
false
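Issue 6695 above asks for loading a JSON file whose top level is an array of strings. As a minimal stdlib sketch of the same parsing step (the literal strings below are illustrative, not taken from the linked dataset):

```python
import json

# A JSON document whose top level is a bare array of strings,
# e.g. the contents of a file like the one in the linked discussion.
texts = json.loads('["first text", "second text"]')

# Wrap each string in a single-column record, which is the shape
# a tabular loader would need to produce.
records = [{"text": t} for t in texts]
print(records[1]["text"])  # → second text
```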
2,153,086,984
https://api.github.com/repos/huggingface/datasets/issues/6694
https://github.com/huggingface/datasets/pull/6694
6,694
__add__ for Dataset, IterableDataset
open
1
2024-02-26T01:46:55
2024-02-29T16:52:58
null
oh-gnues-iohc
[]
It's too cumbersome to write this command every time we perform a dataset merging operation. ```pythonfrom datasets import concatenate_datasets``` We have added a simple `__add__` magic method to each class using `concatenate_datasets.` ```python from datasets import load_dataset bookcorpus = load_dataset("bookc...
true
2,152,887,712
https://api.github.com/repos/huggingface/datasets/issues/6693
https://github.com/huggingface/datasets/pull/6693
6,693
Update the print message for chunked_dataset in process.mdx
closed
2
2024-02-25T18:37:07
2024-02-25T19:57:12
2024-02-25T19:51:02
gzbfgjf2
[]
Update documentation to align with `Dataset.__repr__` change after #423
true
2,152,270,987
https://api.github.com/repos/huggingface/datasets/issues/6692
https://github.com/huggingface/datasets/pull/6692
6,692
Enhancement: Enable loading TSV files in load_dataset()
closed
1
2024-02-24T11:38:59
2024-02-26T15:33:50
2024-02-26T07:14:03
harsh1504660
[]
Fix #6691
true
2,152,134,041
https://api.github.com/repos/huggingface/datasets/issues/6691
https://github.com/huggingface/datasets/issues/6691
6,691
load_dataset() does not support tsv
closed
2
2024-02-24T05:56:04
2024-02-26T07:15:07
2024-02-26T07:09:35
dipsivenkatesh
[ "enhancement" ]
### Feature request the load_dataset() for local functions support file types like csv, json etc but not of type tsv (tab separated values). ### Motivation cant easily load files of type tsv, have to convert them to another type like csv then load ### Your contribution Can try by raising a PR with a little help, c...
false
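Issue 6691 above notes that `load_dataset()` handled csv/json locally but not tab-separated values. The parsing itself is straightforward with the stdlib; a sketch using made-up sample rows (not data from the issue):

```python
import csv
import io

# A small in-memory TSV file: header row, then two data rows.
tsv_text = "id\tlabel\n1\tpos\n2\tneg\n"

# DictReader with a tab delimiter yields one dict per row,
# keyed by the header fields.
rows = list(csv.DictReader(io.StringIO(tsv_text), delimiter="\t"))
print(rows[0]["label"])  # → pos
```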
2,150,800,065
https://api.github.com/repos/huggingface/datasets/issues/6690
https://github.com/huggingface/datasets/issues/6690
6,690
Add function to convert a script-dataset to Parquet
closed
0
2024-02-23T10:28:20
2024-04-12T15:27:05
2024-04-12T15:27:05
albertvillanova
[ "enhancement" ]
Add function to convert a script-dataset to Parquet and push it to the Hub, analogously to the Space: "Convert a Hugging Face dataset to Parquet"
false
2,149,581,147
https://api.github.com/repos/huggingface/datasets/issues/6689
https://github.com/huggingface/datasets/issues/6689
6,689
.load_dataset() method defaults to zstandard
closed
4
2024-02-22T17:39:27
2024-03-07T14:54:16
2024-03-07T14:54:15
ElleLeonne
[]
### Describe the bug Regardless of what method I use, datasets defaults to zstandard for unpacking my datasets. This is poor behavior, because not only is zstandard not a dependency in the huggingface package (and therefore, your dataset loading will be interrupted while it asks you to install the package), but it ...
false
2,148,609,859
https://api.github.com/repos/huggingface/datasets/issues/6688
https://github.com/huggingface/datasets/issues/6688
6,688
Tensor type (e.g. from `return_tensors`) ignored in map
open
3
2024-02-22T09:27:57
2024-02-22T15:56:21
null
srossi93
[]
### Describe the bug I don't know if it is a bug or an expected behavior, but the tensor type seems to be ignored after applying map. For example, mapping over to tokenize text with a transformers' tokenizer always returns lists and it ignore the `return_tensors` argument. If this is an expected behaviour (e.g., fo...
false
2,148,554,178
https://api.github.com/repos/huggingface/datasets/issues/6687
https://github.com/huggingface/datasets/pull/6687
6,687
fsspec: support fsspec>=2023.12.0 glob changes
closed
7
2024-02-22T08:59:32
2024-03-04T12:59:42
2024-02-29T15:12:17
pmrowla
[]
- adds support for the `fs.glob` changes introduced in `fsspec==2023.12.0` and unpins the current upper bound Should close #6644 Should close #6645 The `test_data_files` glob/pattern tests pass for me in: - `fsspec==2023.10.0` (the pinned max version in datasets `main`) - `fsspec==2023.12.0` (#6644) - `fsspec...
true
2,147,795,103
https://api.github.com/repos/huggingface/datasets/issues/6686
https://github.com/huggingface/datasets/issues/6686
6,686
Question: Is there any way for uploading a large image dataset?
open
1
2024-02-21T22:07:21
2024-05-02T03:44:59
null
zhjohnchan
[]
I am uploading an image dataset like this: ``` dataset = load_dataset( "json", data_files={"train": "data/custom_dataset/train.json", "validation": "data/custom_dataset/val.json"}, ) dataset = dataset.cast_column("images", Sequence(Image())) dataset.push_to_hub("StanfordAIMI/custom_dataset", max_shard_si...
false
2,145,570,006
https://api.github.com/repos/huggingface/datasets/issues/6685
https://github.com/huggingface/datasets/pull/6685
6,685
Updated Quickstart Notebook link
closed
2
2024-02-21T01:04:18
2024-03-12T21:31:04
2024-02-25T18:48:08
Codeblockz
[]
Fixed Quickstart Notebook Link in the [Overview notebook](https://github.com/huggingface/datasets/blob/main/notebooks/Overview.ipynb)
true
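The records above all share one fixed schema (id, url, number, title, state, comments, timestamps, user, labels, body, is_pull_request), flattened one field per line. A minimal sketch of round-tripping a few of these rows through JSON Lines — the serialization this kind of issues dataset is typically distributed in — using an abbreviated subset of the fields and values shown above:

```python
import json

# Illustrative subset of the records above (fields abbreviated).
records = [
    {"number": 6728, "state": "closed", "comments": 3, "is_pull_request": False},
    {"number": 6727, "state": "closed", "comments": 6, "is_pull_request": True},
    {"number": 6726, "state": "open", "comments": 2, "is_pull_request": False},
]

# Serialize one JSON object per line (JSON Lines).
jsonl = "\n".join(json.dumps(r) for r in records)

# Parse it back and compute a simple summary:
# open items that are plain issues, not pull requests.
parsed = [json.loads(line) for line in jsonl.splitlines()]
open_issues = [r["number"] for r in parsed if r["state"] == "open" and not r["is_pull_request"]]
print(open_issues)  # → [6726]
```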