Column schema of the huggingface/datasets GitHub issues dump below (14 columns; min/max taken over all rows):

    column           type          min                   max
    id               int64         599M                  3.29B
    url              string        length 58             length 61
    html_url         string        length 46             length 51
    number           int64         1                     7.72k
    title            string        length 1              length 290
    state            string        2 values
    comments         int64         0                     70
    created_at       timestamp[s]  2020-04-14 10:18:02   2025-08-05 09:28:51
    updated_at       timestamp[s]  2020-04-27 16:04:17   2025-08-05 11:39:56
    closed_at        timestamp[s]  2020-04-14 12:01:40   2025-08-01 05:15:45
    user_login       string        length 3              length 26
    labels           list          length 0              length 4
    body             string        length 0              length 228k
    is_pull_request  bool          2 classes
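The rows below arrive flattened, one field per line in schema order. A minimal pure-Python sketch of regrouping one such row into a record dict (`parse_row` is a hypothetical helper written for this dump, not part of the `datasets` library; the field order is an assumption based on the schema above):

```python
# Column names in the order they appear in the schema above (assumption).
FIELDS = [
    "id", "url", "html_url", "number", "title", "state", "comments",
    "created_at", "updated_at", "closed_at", "user_login", "labels",
    "body", "is_pull_request",
]

def parse_row(values):
    """Zip one flat row of values back into a dict keyed by column name."""
    if len(values) != len(FIELDS):
        raise ValueError(f"expected {len(FIELDS)} values, got {len(values)}")
    return dict(zip(FIELDS, values))

# First row of the dump, regrouped:
row = [
    1921354680,
    "https://api.github.com/repos/huggingface/datasets/issues/6275",
    "https://github.com/huggingface/datasets/issues/6275",
    6275, "Would like to Contribute a dataset", "closed", 1,
    "2023-10-02T07:00:21", "2023-10-10T16:27:54", "2023-10-10T16:27:54",
    "vikas70607", [], "I have a dataset of 2500 images...", False,
]
record = parse_row(row)
print(record["number"], record["state"])  # 6275 closed
```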
#6275 (issue, closed) Would like to Contribute a dataset
    id 1,921,354,680 | comments 1 | user vikas70607 | labels []
    created 2023-10-02T07:00:21 | updated 2023-10-10T16:27:54 | closed 2023-10-10T16:27:54
    https://github.com/huggingface/datasets/issues/6275 | api: https://api.github.com/repos/huggingface/datasets/issues/6275
    body: I have a dataset of 2500 images that can be used for color-blind machine-learning algorithms. Since there was no dataset available online, I made this dataset myself and would like to contribute it now to the community.

#6274 (issue, closed) FileNotFoundError for dataset with multiple builder config
    id 1,921,036,328 | comments 2 | user LouisChen15 | labels []
    created 2023-10-01T23:45:56 | updated 2024-08-14T04:42:02 | closed 2023-10-02T20:09:38
    https://github.com/huggingface/datasets/issues/6274 | api: https://api.github.com/repos/huggingface/datasets/issues/6274
    body: ### Describe the bug When there is only one config and only the dataset name is entered when using datasets.load_dataset(), it works fine. But if I create a second builder_config for my dataset and enter the config name when using datasets.load_dataset(), the following error will happen. FileNotFoundError: [Errno 2...

#6273 (issue, open) Broken Link to PubMed Abstracts dataset
    id 1,920,922,260 | comments 5 | user sameemqureshi | labels []
    created 2023-10-01T19:08:48 | updated 2024-04-28T02:30:42
    https://github.com/huggingface/datasets/issues/6273 | api: https://api.github.com/repos/huggingface/datasets/issues/6273
    body: ### Describe the bug The link provided for the dataset is broken, data_files = [https://the-eye.eu/public/AI/pile_preliminary_components/PUBMED_title_abstracts_2019_baseline.jsonl.zst](url) The ### Steps to reproduce the bug Steps to reproduce: 1) Head over to [https://huggingface.co/learn/nlp-course/chapt...

#6272 (issue, closed) Duplicate `data_files` when named `<split>/<split>.parquet`
    id 1,920,831,487 | comments 7 | user lhoestq | labels ["bug"]
    created 2023-10-01T15:43:56 | updated 2024-03-15T15:22:05 | closed 2024-03-15T15:22:05
    https://github.com/huggingface/datasets/issues/6272 | api: https://api.github.com/repos/huggingface/datasets/issues/6272
    body: e.g. with `u23429/stock_1_minute_ticker` ```ipython In [1]: from datasets import * In [2]: b = load_dataset_builder("u23429/stock_1_minute_ticker") Downloading readme: 100%|██████████████████████████| 627/627 [00:00<00:00, 246kB/s] In [3]: b.config.data_files Out[3]: {NamedSplit('train'): ['hf://datasets/...
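Issue #6272 above boils down to two file-naming conventions matching the same file during data_files resolution. A pure-Python sketch of why de-duplication is needed (the glob patterns and `resolve` helper are hypothetical illustrations, not the real resolver in `datasets`):

```python
from pathlib import PurePosixPath

def resolve(files, patterns):
    """Collect files matching any pattern, keeping each file at most once."""
    seen, out = set(), []
    for pattern in patterns:
        for f in files:
            if PurePosixPath(f).match(pattern) and f not in seen:
                seen.add(f)
                out.append(f)
    return out

# "train/train.parquet" matches both the split-directory and the
# split-filename convention; without the `seen` set it would appear twice.
files = ["train/train.parquet"]
patterns = ["train/*.parquet", "*/train*.parquet"]
print(resolve(files, patterns))  # ['train/train.parquet']
```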
#6271 (issue, closed) Overwriting Split overwrites data but not metadata, corrupting dataset
    id 1,920,420,295 | comments 0 | user govindrai | labels []
    created 2023-09-30T22:37:31 | updated 2023-10-16T13:30:50 | closed 2023-10-16T13:30:50
    https://github.com/huggingface/datasets/issues/6271 | api: https://api.github.com/repos/huggingface/datasets/issues/6271
    body: ### Describe the bug I want to be able to overwrite/update/delete splits in my dataset. Currently the only way to do is to manually go into the dataset and delete the split. If I try to overwrite programmatically I end up in an error state and (somewhat) corrupting the dataset. Read below. **Current Behavior** Whe...

#6270 (issue, closed) Dataset.from_generator raises with sharded gen_args
    id 1,920,329,373 | comments 6 | user hartmans | labels []
    created 2023-09-30T16:50:06 | updated 2023-10-11T20:29:12 | closed 2023-10-11T20:29:11
    https://github.com/huggingface/datasets/issues/6270 | api: https://api.github.com/repos/huggingface/datasets/issues/6270
    body: ### Describe the bug According to the docs of Datasets.from_generator: ``` gen_kwargs(`dict`, *optional*): Keyword arguments to be passed to the `generator` callable. You can define a sharded dataset by passing the list of shards in `gen_kwargs`. ``` So I'd expect that if gen_kwar...

#6269 (PR, closed) Reduce the number of commits in `push_to_hub`
    id 1,919,572,790 | comments 21 | user mariosasko | labels []
    created 2023-09-29T16:22:31 | updated 2023-10-16T16:03:18 | closed 2023-10-16T13:30:46
    https://github.com/huggingface/datasets/pull/6269 | api: https://api.github.com/repos/huggingface/datasets/issues/6269
    body: Reduces the number of commits in `push_to_hub` by using the `preupload` API from https://github.com/huggingface/huggingface_hub/pull/1699. Each commit contains a maximum of 50 uploaded files. A shard's fingerprint no longer needs to be added as a suffix to support resuming an upload, meaning the shards' naming schem...

#6268 (PR, open) Add repo_id to DatasetInfo
    id 1,919,010,645 | comments 9 | user lhoestq | labels []
    created 2023-09-29T10:24:55 | updated 2023-10-01T15:29:45
    https://github.com/huggingface/datasets/pull/6268 | api: https://api.github.com/repos/huggingface/datasets/issues/6268
    body: ```python from datasets import load_dataset ds = load_dataset("lhoestq/demo1", split="train") ds = ds.map(lambda x: {}, num_proc=2).filter(lambda x: True).remove_columns(["id"]) print(ds.repo_id) # lhoestq/demo1 ``` - repo_id is None when the dataset doesn't come from the Hub, e.g. from Dataset.from_dict - ...
#6267 (issue, open) Multi label class encoding
    id 1,916,443,262 | comments 7 | user jmif | labels ["enhancement"]
    created 2023-09-27T22:48:08 | updated 2023-10-26T18:46:08
    https://github.com/huggingface/datasets/issues/6267 | api: https://api.github.com/repos/huggingface/datasets/issues/6267
    body: ### Feature request I have a multi label dataset and I'd like to be able to class encode the column and store the mapping directly in the features just as I can with a single label column. `class_encode_column` currently does not support multi labels. Here's an example of what I'd like to encode: ``` data = { ...

#6266 (PR, open) Use LibYAML with PyYAML if available
    id 1,916,334,394 | comments 5 | user bryant1410 | labels []
    created 2023-09-27T21:13:36 | updated 2023-09-28T14:29:24
    https://github.com/huggingface/datasets/pull/6266 | api: https://api.github.com/repos/huggingface/datasets/issues/6266
    body: PyYAML, the YAML framework used in this library, allows the use of LibYAML to accelerate the methods `load` and `dump`. To use it, a user would need to first install a PyYAML version that uses LibYAML (not available in PyPI; needs to be manually installed). Then, to actually use them, PyYAML suggests importing the LibY...

#6265 (PR, closed) Remove `apache_beam` import in `BeamBasedBuilder._save_info`
    id 1,915,651,566 | comments 4 | user mariosasko | labels []
    created 2023-09-27T13:56:34 | updated 2023-09-28T18:34:02 | closed 2023-09-28T18:23:35
    https://github.com/huggingface/datasets/pull/6265 | api: https://api.github.com/repos/huggingface/datasets/issues/6265
    body: ... to avoid an `ImportError` raised in `BeamBasedBuilder._save_info` when `apache_beam` is not installed (e.g., when downloading the processed version of a dataset from the HF GCS) Fix https://github.com/huggingface/datasets/issues/6260

#6264 (PR, closed) Temporarily pin tensorflow < 2.14.0
    id 1,914,958,781 | comments 4 | user albertvillanova | labels []
    created 2023-09-27T08:16:06 | updated 2023-09-27T08:45:24 | closed 2023-09-27T08:36:39
    https://github.com/huggingface/datasets/pull/6264 | api: https://api.github.com/repos/huggingface/datasets/issues/6264
    body: Temporarily pin tensorflow < 2.14.0 until permanent solution is found. Hot fix #6263.
#6263 (issue, closed) CI is broken: ImportError: cannot import name 'context' from 'tensorflow.python'
    id 1,914,951,043 | comments 0 | user albertvillanova | labels ["bug"]
    created 2023-09-27T08:12:05 | updated 2023-09-27T08:36:40 | closed 2023-09-27T08:36:40
    https://github.com/huggingface/datasets/issues/6263 | api: https://api.github.com/repos/huggingface/datasets/issues/6263
    body: Python 3.10 CI is broken for `test_py310`. See: https://github.com/huggingface/datasets/actions/runs/6322990957/job/17169678812?pr=6262 ``` FAILED tests/test_py_utils.py::TempSeedTest::test_tensorflow - ImportError: cannot import name 'context' from 'tensorflow.python' (/opt/hostedtoolcache/Python/3.10.13/x64/li...

#6262 (PR, closed) Fix CI 404 errors
    id 1,914,895,459 | comments 9 | user albertvillanova | labels []
    created 2023-09-27T07:40:18 | updated 2023-09-28T15:39:16 | closed 2023-09-28T15:30:40
    https://github.com/huggingface/datasets/pull/6262 | api: https://api.github.com/repos/huggingface/datasets/issues/6262
    body: Currently our CI usually raises 404 errors when trying to delete temporary repositories. See, e.g.: https://github.com/huggingface/datasets/actions/runs/6314980985/job/17146507884 ``` FAILED tests/test_upstream_hub.py::TestPushToHub::test_push_dataset_dict_to_hub_multiple_files_with_max_shard_size - huggingface_hub.u...

#6261 (issue, closed) Can't load a dataset
    id 1,913,813,178 | comments 5 | user joaopedrosdmm | labels []
    created 2023-09-26T15:46:25 | updated 2023-10-05T10:23:23 | closed 2023-10-05T10:23:22
    https://github.com/huggingface/datasets/issues/6261 | api: https://api.github.com/repos/huggingface/datasets/issues/6261
    body: ### Describe the bug Can't seem to load the JourneyDB dataset. It throws the following error: ``` --------------------------------------------------------------------------- FileNotFoundError Traceback (most recent call last) Cell In[15], line 2 1 # If the dataset is gated/priv...

#6260 (issue, closed) REUSE_DATASET_IF_EXISTS don't work
    id 1,912,593,466 | comments 3 | user rangehow | labels []
    created 2023-09-26T03:02:16 | updated 2023-09-28T18:23:36 | closed 2023-09-28T18:23:36
    https://github.com/huggingface/datasets/issues/6260 | api: https://api.github.com/repos/huggingface/datasets/issues/6260
    body: ### Describe the bug I use the following code to download natural_question dataset. Even though I have completely download it, the next time I run this code, the new download procedure will start and cover the original /data/lxy/NQ config=datasets.DownloadConfig(resume_download=True,max_retries=100,cache_dir=r'/da...
#6259 (issue, closed) Duplicated Rows When Loading Parquet Files from Root Directory with Subdirectories
    id 1,911,965,758 | comments 1 | user MF-FOOM | labels []
    created 2023-09-25T17:20:54 | updated 2024-03-15T15:22:04 | closed 2024-03-15T15:22:04
    https://github.com/huggingface/datasets/issues/6259 | api: https://api.github.com/repos/huggingface/datasets/issues/6259
    body: ### Describe the bug When parquet files are saved in "train" and "val" subdirectories under a root directory, and datasets are then loaded using `load_dataset("parquet", data_dir="root_directory")`, the resulting dataset has duplicated rows for both the training and validation sets. ### Steps to reproduce the bug...

#6258 (PR, closed) [DOCS] Fix typo: Elasticsearch
    id 1,911,445,373 | comments 2 | user leemthompo | labels []
    created 2023-09-25T12:50:59 | updated 2023-09-26T14:55:35 | closed 2023-09-26T13:36:40
    https://github.com/huggingface/datasets/pull/6258 | api: https://api.github.com/repos/huggingface/datasets/issues/6258
    body: Not ElasticSearch :)

#6257 (issue, closed) HfHubHTTPError - exceeded our hourly quotas for action: commit
    id 1,910,741,044 | comments 4 | user yuvalkirstain | labels []
    created 2023-09-25T06:11:43 | updated 2023-10-16T13:30:49 | closed 2023-10-16T13:30:48
    https://github.com/huggingface/datasets/issues/6257 | api: https://api.github.com/repos/huggingface/datasets/issues/6257
    body: ### Describe the bug I try to upload a very large dataset of images, and get the following error: ``` File /fsx-multigen/yuvalkirstain/miniconda/envs/pickapic/lib/python3.10/site-packages/huggingface_hub/hf_api.py:2712, in HfApi.create_commit(self, repo_id, operations, commit_message, commit_description, token, repo...

#6256 (issue, closed) load_dataset() function's cache_dir does not seems to work
    id 1,910,275,199 | comments 8 | user andyzhu | labels []
    created 2023-09-24T15:34:06 | updated 2025-05-14T10:08:53 | closed 2024-10-08T15:45:18
    https://github.com/huggingface/datasets/issues/6256 | api: https://api.github.com/repos/huggingface/datasets/issues/6256
    body: ### Describe the bug datasets version: 2.14.5 when trying to run the following command trec = load_dataset('trec', split='train[:1000]', cache_dir='/path/to/my/dir') I keep getting error saying the command does not have permission to the default cache directory on my macbook pro machine. It seems the cache_...
#6255 (PR, closed) Parallelize builder configs creation
    id 1,909,842,977 | comments 5 | user lhoestq | labels []
    created 2023-09-23T11:56:20 | updated 2024-01-11T06:32:34 | closed 2023-09-26T15:44:19
    https://github.com/huggingface/datasets/pull/6255 | api: https://api.github.com/repos/huggingface/datasets/issues/6255
    body: For datasets with lots of configs defined in YAML E.g. `load_dataset("uonlp/CulturaX", "fr", revision="refs/pr/6")` from >1min to 15sec

#6254 (issue, closed) Dataset.from_generator() cost much more time in vscode debugging mode then running mode
    id 1,909,672,104 | comments 1 | user dontnet-wuenze | labels []
    created 2023-09-23T02:07:26 | updated 2023-10-03T14:42:53 | closed 2023-10-03T14:42:53
    https://github.com/huggingface/datasets/issues/6254 | api: https://api.github.com/repos/huggingface/datasets/issues/6254
    body: ### Describe the bug Hey there, I'm using Dataset.from_generator() to convert a torch_dataset to the Huggingface Dataset. However, when I debug my code on vscode, I find that it runs really slow on Dataset.from_generator() which may even 20 times longer then run the script on terminal. ### Steps to reproduce the bu...

#6253 (PR, closed) Check builder cls default config name in inspect
    id 1,906,618,910 | comments 4 | user lhoestq | labels []
    created 2023-09-21T10:15:32 | updated 2023-09-21T14:16:44 | closed 2023-09-21T14:08:00
    https://github.com/huggingface/datasets/pull/6253 | api: https://api.github.com/repos/huggingface/datasets/issues/6253
    body: Fix https://github.com/huggingface/datasets-server/issues/1812 this was causing this issue: ```ipython In [1]: from datasets import * In [2]: inspect.get_dataset_config_names("aakanksha/udpos") Out[2]: ['default'] In [3]: load_dataset_builder("aakanksha/udpos").config.name Out[3]: 'en' ```

#6252 (issue, closed) exif_transpose not done to Image (PIL problem)
    id 1,906,375,378 | comments 2 | user rhajou | labels ["enhancement"]
    created 2023-09-21T08:11:46 | updated 2024-03-19T15:29:43 | closed 2024-03-19T15:29:43
    https://github.com/huggingface/datasets/issues/6252 | api: https://api.github.com/repos/huggingface/datasets/issues/6252
    body: ### Feature request I noticed that some of my images loaded using PIL have some metadata related to exif that can rotate them when loading. Since the dataset.features.Image uses PIL for loading, the loaded image may be rotated (width and height will be inverted) thus for tasks as object detection and layoutLM this ca...
#6251 (PR, closed) Support streaming datasets with pyarrow.parquet.read_table
    id 1,904,418,426 | comments 10 | user albertvillanova | labels []
    created 2023-09-20T08:07:02 | updated 2023-09-27T06:37:03 | closed 2023-09-27T06:26:24
    https://github.com/huggingface/datasets/pull/6251 | api: https://api.github.com/repos/huggingface/datasets/issues/6251
    body: Support streaming datasets with `pyarrow.parquet.read_table`. See: https://huggingface.co/datasets/uonlp/CulturaX/discussions/2 CC: @AndreaFrancis

#6247 (PR, closed) Update create_dataset.mdx
    id 1,901,390,945 | comments 2 | user EswarDivi | labels []
    created 2023-09-18T17:06:29 | updated 2023-09-19T18:51:49 | closed 2023-09-19T18:40:10
    https://github.com/huggingface/datasets/pull/6247 | api: https://api.github.com/repos/huggingface/datasets/issues/6247
    body: modified , as AudioFolder and ImageFolder not in Dataset Library. ``` from datasets import AudioFolder ``` and ```from datasets import ImageFolder``` to ```from datasets import load_dataset``` ``` cannot import name 'AudioFolder' from 'datasets' (/home/eswardivi/miniconda3/envs/Hugformers/lib/python3.10/site...

#6246 (issue, closed) Add new column to dataset
    id 1,899,848,414 | comments 4 | user andysingal | labels []
    created 2023-09-17T16:59:48 | updated 2023-09-18T16:20:09 | closed 2023-09-18T16:20:09
    https://github.com/huggingface/datasets/issues/6246 | api: https://api.github.com/repos/huggingface/datasets/issues/6246
    body: ### Describe the bug ``` --------------------------------------------------------------------------- KeyError Traceback (most recent call last) [<ipython-input-9-bd197b36b6a0>](https://localhost:8080/#) in <cell line: 1>() ----> 1 dataset['train']['/workspace/data'] 3 frames [/...

#6244 (PR, closed) Add support for `fsspec>=2023.9.0`
    id 1,898,861,422 | comments 19 | user mariosasko | labels []
    created 2023-09-15T17:58:25 | updated 2023-09-26T15:41:38 | closed 2023-09-26T15:32:51
    https://github.com/huggingface/datasets/pull/6244 | api: https://api.github.com/repos/huggingface/datasets/issues/6244
    body: Fix #6214
#6243 (PR, closed) Fix cast from fixed size list to variable size list
    id 1,898,532,784 | comments 6 | user mariosasko | labels []
    created 2023-09-15T14:23:33 | updated 2023-09-19T18:02:21 | closed 2023-09-19T17:53:17
    https://github.com/huggingface/datasets/pull/6243 | api: https://api.github.com/repos/huggingface/datasets/issues/6243
    body: Fix #6242

#6242 (issue, closed) Data alteration when loading dataset with unspecified inner sequence length
    id 1,896,899,123 | comments 2 | user qgallouedec | labels []
    created 2023-09-14T16:12:45 | updated 2023-09-19T17:53:18 | closed 2023-09-19T17:53:18
    https://github.com/huggingface/datasets/issues/6242 | api: https://api.github.com/repos/huggingface/datasets/issues/6242
    body: ### Describe the bug When a dataset saved with a specified inner sequence length is loaded without specifying that length, the original data is altered and becomes inconsistent. ### Steps to reproduce the bug ```python from datasets import Dataset, Features, Value, Sequence, load_dataset # Repository ID repo_id...
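Issue #6242 (fixed by PR #6243 above) hinges on an invariant: a fixed inner sequence length is only safe to declare when every inner list actually has that length. A pure-Python sketch of that check (the `infer_inner_length` helper is hypothetical; the real logic lives in the datasets/pyarrow casting code):

```python
def infer_inner_length(column):
    """Return the common inner-list length, or None if lengths vary."""
    lengths = {len(inner) for inner in column}
    return lengths.pop() if len(lengths) == 1 else None  # None => variable-size

print(infer_inner_length([[1, 2], [3, 4]]))     # 2 (safe to store as fixed-size)
print(infer_inner_length([[1, 2], [3, 4, 5]]))  # None (must stay variable-size)
```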
#6241 (PR, closed) Remove unused global variables in `audio.py`
    id 1,896,429,694 | comments 4 | user mariosasko | labels []
    created 2023-09-14T12:06:32 | updated 2023-09-15T15:57:10 | closed 2023-09-15T15:46:07
    https://github.com/huggingface/datasets/pull/6241 | api: https://api.github.com/repos/huggingface/datasets/issues/6241
    body: null

#6240 (issue, closed) Dataloader stuck on multiple GPUs
    id 1,895,723,888 | comments 2 | user kuri54 | labels []
    created 2023-09-14T05:30:30 | updated 2023-09-14T23:54:42 | closed 2023-09-14T23:54:42
    https://github.com/huggingface/datasets/issues/6240 | api: https://api.github.com/repos/huggingface/datasets/issues/6240
    body: ### Describe the bug I am trying to get CLIP to fine-tuning with my code. When I tried to run it on multiple GPUs using accelerate, I encountered the following phenomenon. - Validation dataloader stuck in 2nd epoch only on multi-GPU Specifically, when the "for inputs in valid_loader:" process is finished, it does...

#6239 (issue, closed) Load local audio data doesn't work
    id 1,895,349,382 | comments 2 | user abodacs | labels []
    created 2023-09-13T22:30:01 | updated 2023-09-15T14:32:10 | closed 2023-09-15T14:32:10
    https://github.com/huggingface/datasets/issues/6239 | api: https://api.github.com/repos/huggingface/datasets/issues/6239
    body: ### Describe the bug I get a RuntimeError from the following code: ```python audio_dataset = Dataset.from_dict({"audio": ["/kaggle/input/bengaliai-speech/train_mp3s/000005f3362c.mp3"]}).cast_column("audio", Audio()) audio_dataset[0] ``` ### Traceback <details> ```python RuntimeError ...

#6238 (issue, closed) `dataset.filter` ALWAYS removes the first item from the dataset when using batched=True
    id 1,895,207,828 | comments 2 | user Taytay | labels []
    created 2023-09-13T20:20:37 | updated 2023-09-17T07:05:07 | closed 2023-09-17T07:05:07
    https://github.com/huggingface/datasets/issues/6238 | api: https://api.github.com/repos/huggingface/datasets/issues/6238
    body: ### Describe the bug If you call batched=True when calling `filter`, the first item is _always_ filtered out, regardless of the filter condition. ### Steps to reproduce the bug Here's a minimal example: ```python def filter_batch_always_true(batch, indices): print("First index being passed into this filte...
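Issue #6238 above reports `filter(batched=True)` dropping the first example. As a reference point, a pure-Python sketch of the intended batched-filter contract (illustration only, not the `datasets` implementation): the predicate receives a batch and must return one boolean per example, and each example's own flag decides whether it is kept.

```python
def filter_batched(examples, predicate, batch_size=2):
    """Apply a batched predicate; keep exactly the examples flagged True."""
    kept = []
    for start in range(0, len(examples), batch_size):
        batch = examples[start:start + batch_size]
        mask = predicate(batch)  # must be one bool per example in the batch
        assert len(mask) == len(batch), "predicate must return len(batch) booleans"
        kept.extend(x for x, keep in zip(batch, mask) if keep)
    return kept

data = [0, 1, 2, 3, 4]
print(filter_batched(data, lambda batch: [x % 2 == 0 for x in batch]))  # [0, 2, 4]
```

An always-True predicate must keep every example, including the first one; that is exactly the property the issue says was violated.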
#6237 (issue, closed) Tokenization with multiple workers is too slow
    id 1,893,822,321 | comments 1 | user macabdul9 | labels []
    created 2023-09-13T06:18:34 | updated 2023-09-19T21:54:58 | closed 2023-09-19T21:54:58
    https://github.com/huggingface/datasets/issues/6237 | api: https://api.github.com/repos/huggingface/datasets/issues/6237
    body: I am trying to tokenize a few million documents with multiple workers but the tokenization process is taking forever. Code snippet: ``` raw_datasets.map( encode_function, batched=False, num_proc=args.preprocessing_num_workers, load_from_cache_file=not args.ove...

#6236 (issue, open) Support buffer shuffle for to_tf_dataset
    id 1,893,648,480 | comments 3 | user EthanRock | labels ["enhancement"]
    created 2023-09-13T03:19:44 | updated 2023-09-18T01:11:21
    https://github.com/huggingface/datasets/issues/6236 | api: https://api.github.com/repos/huggingface/datasets/issues/6236
    body: ### Feature request I'm using to_tf_dataset to convert a large dataset to tf.data.Dataset and use Keras fit to train model. Currently, to_tf_dataset only supports full size shuffle, which can be very slow on large dataset. tf.data.Dataset support buffer shuffle by default. shuffle( buffer_size, seed=None, r...
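The buffer shuffle requested in #6236 keeps only `buffer_size` items in memory and emits a random one as each new item streams in, which is far cheaper than a full-dataset shuffle at the cost of only approximate randomness. A pure-Python sketch of the idea (illustration of the tf.data-style algorithm, not the `to_tf_dataset` implementation):

```python
import random

def buffer_shuffle(iterable, buffer_size, seed=None):
    """Stream items, holding at most buffer_size of them; yield in random order."""
    rng = random.Random(seed)
    buf = []
    for item in iterable:
        buf.append(item)
        if len(buf) > buffer_size:
            yield buf.pop(rng.randrange(len(buf)))
    while buf:  # drain the remaining buffered items
        yield buf.pop(rng.randrange(len(buf)))

out = list(buffer_shuffle(range(10), buffer_size=4, seed=0))
print(sorted(out) == list(range(10)))  # True: same items, different order
```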
#6235 (issue, open) Support multiprocessing for download/extract nestedly
    id 1,893,337,083 | comments 0 | user hgt312 | labels ["enhancement"]
    created 2023-09-12T21:51:08 | updated 2023-09-12T21:51:08
    https://github.com/huggingface/datasets/issues/6235 | api: https://api.github.com/repos/huggingface/datasets/issues/6235
    body: ### Feature request Current multiprocessing for download/extract is not done nestedly. For example, when processing SlimPajama, there is only 3 processes (for train/test/val), while there are many files inside these 3 folders ``` Downloading data files #0: 0%| | 0/1 [00:00<?, ?obj/s] Downloading data f...

#6233 (PR, closed) Update README.md
    id 1,891,804,286 | comments 2 | user NinoRisteski | labels []
    created 2023-09-12T06:53:06 | updated 2023-09-13T18:20:50 | closed 2023-09-13T18:10:04
    https://github.com/huggingface/datasets/pull/6233 | api: https://api.github.com/repos/huggingface/datasets/issues/6233
    body: fixed a typo

#6232 (PR, closed) Improve error message for missing function parameters
    id 1,891,109,762 | comments 3 | user suavemint | labels []
    created 2023-09-11T19:11:58 | updated 2023-09-15T18:07:56 | closed 2023-09-15T17:59:02
    https://github.com/huggingface/datasets/pull/6232 | api: https://api.github.com/repos/huggingface/datasets/issues/6232
    body: The error message in the fingerprint module was missing the f-string 'f' symbol, so the error message returned by fingerprint.py, line 469 was literally "function {func} is missing parameters {fingerprint_names} in signature." This has been fixed.
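The one-character bug fixed by PR #6232 above, shown in isolation: without the `f` prefix, Python prints the braces literally instead of interpolating the variables (generic Python behavior; the variable names here are stand-ins, not the actual fingerprint.py code).

```python
func, names = "tokenize", ["text"]

# Without the f prefix, {func} and {names} are printed verbatim:
broken = "function {func} is missing parameters {names} in signature"
# With it, the values are interpolated:
fixed = f"function {func} is missing parameters {names} in signature"

print(broken)  # function {func} is missing parameters {names} in signature
print(fixed)   # function tokenize is missing parameters ['text'] in signature
```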
#6231 (PR, open) Overwrite legacy default config name in `dataset_infos.json` in packaged datasets
    id 1,890,863,249 | comments 9 | user polinaeterna | labels []
    created 2023-09-11T16:27:09 | updated 2023-09-26T11:19:36
    https://github.com/huggingface/datasets/pull/6231 | api: https://api.github.com/repos/huggingface/datasets/issues/6231
    body: Currently if we push data as default config with `.push_to_hub` to a repo that has a legacy `dataset_infos.json` file containing a legacy default config name like `{username}--{dataset_name}`, new key `"default"` is added to `dataset_infos.json` along with the legacy one. I think the legacy one should be dropped in thi...

#6230 (PR, closed) Don't skip hidden files in `dl_manager.iter_files` when they are given as input
    id 1,890,521,006 | comments 4 | user mariosasko | labels []
    created 2023-09-11T13:29:19 | updated 2023-09-13T18:21:28 | closed 2023-09-13T18:12:09
    https://github.com/huggingface/datasets/pull/6230 | api: https://api.github.com/repos/huggingface/datasets/issues/6230
    body: Required for `load_dataset(<format>, data_files=["path/to/.hidden_file"])` to work as expected

#6229 (issue, closed) Apply inference on all images in the dataset
    id 1,889,050,954 | comments 3 | user andysingal | labels []
    created 2023-09-10T08:36:12 | updated 2023-09-20T16:11:53 | closed 2023-09-20T16:11:52
    https://github.com/huggingface/datasets/issues/6229 | api: https://api.github.com/repos/huggingface/datasets/issues/6229
    body: ### Describe the bug ``` --------------------------------------------------------------------------- NotImplementedError Traceback (most recent call last) Cell In[14], line 11 9 for idx, example in enumerate(dataset['train']): 10 image_path = example['image'] ---> 11 mask...

#6228 (PR, closed) Remove RGB -> BGR image conversion in Object Detection tutorial
    id 1,887,959,311 | comments 3 | user mariosasko | labels []
    created 2023-09-08T16:09:13 | updated 2023-09-08T18:02:49 | closed 2023-09-08T17:52:16
    https://github.com/huggingface/datasets/pull/6228 | api: https://api.github.com/repos/huggingface/datasets/issues/6228
    body: Fix #6225
#6226 (PR, closed) Add push_to_hub with multiple configs docs
    id 1,887,462,591 | comments 3 | user lhoestq | labels []
    created 2023-09-08T11:08:55 | updated 2023-09-08T12:29:21 | closed 2023-09-08T12:20:51
    https://github.com/huggingface/datasets/pull/6226 | api: https://api.github.com/repos/huggingface/datasets/issues/6226
    body: null

#6225 (issue, closed) Conversion from RGB to BGR in Object Detection tutorial
    id 1,887,054,320 | comments 1 | user samokhinv | labels []
    created 2023-09-08T06:49:19 | updated 2023-09-08T17:52:18 | closed 2023-09-08T17:52:17
    https://github.com/huggingface/datasets/issues/6225 | api: https://api.github.com/repos/huggingface/datasets/issues/6225
    body: The [tutorial](https://huggingface.co/docs/datasets/main/en/object_detection) mentions the necessity of conversion the input image from BGR to RGB > albumentations expects the image to be in BGR format, not RGB, so you'll have to convert the image before applying the transform. [Link to tutorial](https://github.c...

#6224 (PR, closed) Ignore `dataset_info.json` in data files resolution
    id 1,886,043,692 | comments 3 | user mariosasko | labels []
    created 2023-09-07T14:43:51 | updated 2023-09-07T15:46:10 | closed 2023-09-07T15:37:20
    https://github.com/huggingface/datasets/pull/6224 | api: https://api.github.com/repos/huggingface/datasets/issues/6224
    body: `save_to_disk` creates this file, but also [`HugginFaceDatasetSever`](https://github.com/gradio-app/gradio/blob/26fef8c7f85a006c7e25cdbed1792df19c512d02/gradio/flagging.py#L214), so this is needed to avoid issues such as [this one](https://discord.com/channels/879548962464493619/1149295819938349107/1149295819938349107)...

#6223 (PR, closed) Update README.md
    id 1,885,710,696 | comments 2 | user NinoRisteski | labels []
    created 2023-09-07T11:33:20 | updated 2023-09-13T22:32:31 | closed 2023-09-13T22:23:42
    https://github.com/huggingface/datasets/pull/6223 | api: https://api.github.com/repos/huggingface/datasets/issues/6223
    body: fixed a few typos
#6222 (PR, closed) fix typo in Audio dataset documentation
    id 1,884,875,510 | comments 2 | user prassanna-ravishankar | labels []
    created 2023-09-06T23:17:24 | updated 2023-10-03T14:18:41 | closed 2023-09-07T15:39:09
    https://github.com/huggingface/datasets/pull/6222 | api: https://api.github.com/repos/huggingface/datasets/issues/6222
    body: There is a typo in the section of the documentation dedicated to creating an audio dataset. The Dataset is incorrectly suffixed with a `Config` https://huggingface.co/datasets/indonesian-nlp/librivox-indonesia/blob/main/librivox-indonesia.py#L59

#6221 (issue, open) Support saving datasets with custom formatting
    id 1,884,324,631 | comments 1 | user mariosasko | labels []
    created 2023-09-06T16:03:32 | updated 2023-09-06T18:32:07
    https://github.com/huggingface/datasets/issues/6221 | api: https://api.github.com/repos/huggingface/datasets/issues/6221
    body: Requested in https://discuss.huggingface.co/t/using-set-transform-on-a-dataset-leads-to-an-exception/53036. I am not sure if supporting this is the best idea for the following reasons: >For this to work, we would have to pickle a custom transform, which means the transform and the objects it references need to be...

#6220 (PR, closed) Set dev version
    id 1,884,285,980 | comments 3 | user albertvillanova | labels []
    created 2023-09-06T15:40:33 | updated 2023-09-06T15:52:33 | closed 2023-09-06T15:41:13
    https://github.com/huggingface/datasets/pull/6220 | api: https://api.github.com/repos/huggingface/datasets/issues/6220
    body: null

#6219 (PR, closed) Release: 2.14.5
    id 1,884,244,334 | comments 4 | user albertvillanova | labels []
    created 2023-09-06T15:17:10 | updated 2023-09-06T15:46:20 | closed 2023-09-06T15:18:51
    https://github.com/huggingface/datasets/pull/6219 | api: https://api.github.com/repos/huggingface/datasets/issues/6219
    body: null
#6218 (PR, closed) Rename old push_to_hub configs to "default" in dataset_infos
    id 1,883,734,000 | comments 8 | user lhoestq | labels []
    created 2023-09-06T10:40:05 | updated 2023-09-07T08:31:29 | closed 2023-09-06T11:23:56
    https://github.com/huggingface/datasets/pull/6218 | api: https://api.github.com/repos/huggingface/datasets/issues/6218
    body: Fix ```python from datasets import load_dataset_builder b = load_dataset_builder("lambdalabs/pokemon-blip-captions", "default") print(b.info) ``` which should return ``` DatasetInfo( features={'image': Image(decode=True, id=None), 'text': Value(dtype='string', id=None)}, dataset_name='pokemon-bli...

#6217 (issue, open) `Dataset.to_dict()` ignore `decode=True` with Image feature
    id 1,883,614,607 | comments 1 | user qgallouedec | labels []
    created 2023-09-06T09:26:16 | updated 2023-09-08T17:08:52
    https://github.com/huggingface/datasets/issues/6217 | api: https://api.github.com/repos/huggingface/datasets/issues/6217
    body: ### Describe the bug `Dataset.to_dict` seems to ignore the decoding instruction passed in features. ### Steps to reproduce the bug ```python import datasets import numpy as np from PIL import Image img = np.random.randint(0, 256, (5, 5, 3), dtype=np.uint8) img = Image.fromarray(img) features = datasets.Fea...

#6216 (PR, closed) Release: 2.13.2
    id 1,883,492,703 | comments 5 | user albertvillanova | labels []
    created 2023-09-06T08:15:32 | updated 2023-09-06T08:52:18 | closed 2023-09-06T08:22:43
    https://github.com/huggingface/datasets/pull/6216 | api: https://api.github.com/repos/huggingface/datasets/issues/6216
    body: null

#6215 (PR, closed) Fix checking patterns to infer packaged builder
    id 1,882,176,970 | comments 3 | user polinaeterna | labels []
    created 2023-09-05T15:10:47 | updated 2023-09-06T10:34:00 | closed 2023-09-06T10:25:00
    https://github.com/huggingface/datasets/pull/6215 | api: https://api.github.com/repos/huggingface/datasets/issues/6215
    body: Don't ignore results of pattern resolving if `self.data_files` is not None. Otherwise lines 854 and 1037 make no sense.
#6214 (issue, closed) Unpin fsspec < 2023.9.0
    id 1,881,736,469 | comments 0 | user albertvillanova | labels ["enhancement"]
    created 2023-09-05T11:02:58 | updated 2023-09-26T15:32:52 | closed 2023-09-26T15:32:52
    https://github.com/huggingface/datasets/issues/6214 | api: https://api.github.com/repos/huggingface/datasets/issues/6214
    body: Once root issue is fixed, remove temporary pin of fsspec < 2023.9.0 introduced by: - #6210 Related to issue: - #6209 After investigation, I think the root issue is related to the new glob behavior with double asterisk `**` they have introduced in: - https://github.com/fsspec/filesystem_spec/pull/1329

#6213 (PR, closed) Better list array values handling in cast/embed storage
    id 1,880,592,987 | comments 5 | user mariosasko | labels []
    created 2023-09-04T16:21:23 | updated 2024-01-11T06:32:20 | closed 2023-10-05T15:24:34
    https://github.com/huggingface/datasets/pull/6213 | api: https://api.github.com/repos/huggingface/datasets/issues/6213
    body: Use [`array.flatten`](https://arrow.apache.org/docs/python/generated/pyarrow.ListArray.html#pyarrow.ListArray.flatten) that takes `.offset` into account instead of `array.values` in array cast/embed.
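The `flatten` vs `values` distinction behind PR #6213 can be modeled without pyarrow. In Arrow, a ListArray is stored as an offsets array over a shared values buffer; a sliced array keeps the same backing values but starts at a later offset, so `.values` over-reads while `.flatten()` respects the slice. A pure-Python toy model of that behavior (illustration only, not pyarrow itself):

```python
class ListArray:
    """Toy model of an Arrow ListArray: offsets into a shared values buffer."""

    def __init__(self, offsets, values, start=0):
        self.offsets, self._values, self.start = offsets, values, start

    def slice(self, i):
        # Drop the first i lists; the backing values buffer is untouched.
        return ListArray(self.offsets, self._values, self.start + i)

    @property
    def values(self):
        # Ignores the slice offset: the pitfall the PR avoids.
        return self._values

    def flatten(self):
        # Honours the slice offset: only values reachable from current lists.
        return self._values[self.offsets[self.start]:self.offsets[-1]]

# Two lists [1, 2] and [3, 4, 5]; slice off the first one.
arr = ListArray(offsets=[0, 2, 5], values=[1, 2, 3, 4, 5]).slice(1)
print(arr.values)     # [1, 2, 3, 4, 5]: includes data from the sliced-off list
print(arr.flatten())  # [3, 4, 5]: only the remaining list's data
```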
#6212 (issue, open) Tilde (~) is not supported for data_files
    id 1,880,399,516 | comments 2 | user exs-avianello | labels []
    created 2023-09-04T14:23:49 | updated 2023-09-05T08:28:39
    https://github.com/huggingface/datasets/issues/6212 | api: https://api.github.com/repos/huggingface/datasets/issues/6212
    body: ### Describe the bug Attempting to `load_dataset` from a path starting with `~` (as a shorthand for the user's home directory) seems not to be fully working - at least as far as the `parquet` dataset builder is concerned. (the same file can be loaded correctly if providing its absolute path instead) I think that...
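The issue notes that the same file loads fine via its absolute path, which points at the usual workaround: expand `~` client-side before handing the path to any loader that does not call `expanduser` itself. A small stdlib sketch (the `normalize_data_file` helper is hypothetical):

```python
import os

def normalize_data_file(path: str) -> str:
    """Expand ~ and make the path absolute before passing it to a loader."""
    return os.path.abspath(os.path.expanduser(path))

p = normalize_data_file("~/data/train.parquet")
print(p.startswith("~"))  # False: the shorthand has been expanded
```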
#6211 (PR, closed) Fix empty splitinfo json
    id 1,880,265,906 | comments 4 | user lhoestq | labels []
    created 2023-09-04T13:13:53 | updated 2023-09-04T14:58:34 | closed 2023-09-04T14:47:17
    https://github.com/huggingface/datasets/pull/6211 | api: https://api.github.com/repos/huggingface/datasets/issues/6211
    body: If a split is empty, then the JSON split info should mention num_bytes = 0 and num_examples = 0. Until now they were omited because the JSON dumps ignore the fields that are equal to the default values. This is needed in datasets-server since we parse this information to the viewer
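The bug class PR #6211 fixes is easy to reproduce in miniature: if serialization omits fields equal to their defaults, an empty split loses its `num_bytes`/`num_examples` entirely, and downstream parsers can no longer tell "empty" from "unknown". A stdlib sketch with a hypothetical `SplitInfo` stand-in (not the real `datasets.SplitInfo`):

```python
import json
from dataclasses import dataclass, asdict

@dataclass
class SplitInfo:
    name: str
    num_bytes: int = 0
    num_examples: int = 0

def dump_skipping_defaults(info: SplitInfo) -> str:
    # Mimics "omit fields equal to their defaults" serialization (the bug).
    payload = {k: v for k, v in asdict(info).items() if k == "name" or v != 0}
    return json.dumps(payload)

def dump_full(info: SplitInfo) -> str:
    # Always writes every field, so an empty split stays explicit (the fix).
    return json.dumps(asdict(info))

empty = SplitInfo(name="train")
print(dump_skipping_defaults(empty))  # {"name": "train"}: sizes silently lost
print(dump_full(empty))               # keeps "num_bytes": 0, "num_examples": 0
```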
#6210 (PR, closed) Temporarily pin fsspec < 2023.9.0
    id 1,879,649,731 | comments 3 | user albertvillanova | labels []
    created 2023-09-04T07:07:07 | updated 2023-09-04T07:40:23 | closed 2023-09-04T07:30:00
    https://github.com/huggingface/datasets/pull/6210 | api: https://api.github.com/repos/huggingface/datasets/issues/6210
    body: Temporarily pin fsspec < 2023.9.0 until permanent solution is found. Hot fix #6209.

#6209 (issue, closed) CI is broken with AssertionError: 3 failed, 12 errors
    id 1,879,622,000 | comments 0 | user albertvillanova | labels ["bug"]
    created 2023-09-04T06:47:05 | updated 2023-09-04T07:30:01 | closed 2023-09-04T07:30:01
    https://github.com/huggingface/datasets/issues/6209 | api: https://api.github.com/repos/huggingface/datasets/issues/6209
    body: Our CI is broken: 3 failed, 12 errors See: https://github.com/huggingface/datasets/actions/runs/6069947111/job/16465138041 ``` =========================== short test summary info ============================ FAILED tests/test_load.py::ModuleFactoryTest::test_LocalDatasetModuleFactoryWithoutScript_with_data_dir - ...

#6208 (PR, closed) Do not filter out .zip extensions from no-script datasets
    id 1,879,572,646 | comments 6 | user albertvillanova | labels []
    created 2023-09-04T06:07:12 | updated 2023-09-04T09:22:19 | closed 2023-09-04T09:13:32
    https://github.com/huggingface/datasets/pull/6208 | api: https://api.github.com/repos/huggingface/datasets/issues/6208
    body: This PR is a hotfix of: - #6207 That PR introduced the filtering out of `.zip` extensions. This PR reverts that. Hot fix #6207. Maybe we should do patch releases: the bug was introduced in 2.13.1. CC: @lhoestq

#6207 (issue, closed) No-script datasets with ZIP files do not load
    id 1,879,555,234 | comments 0 | user albertvillanova | labels ["bug"]
    created 2023-09-04T05:50:27 | updated 2023-09-04T09:13:33 | closed 2023-09-04T09:13:33
    https://github.com/huggingface/datasets/issues/6207 | api: https://api.github.com/repos/huggingface/datasets/issues/6207
    body: While investigating an issue on a Hub dataset, I have discovered the no-script datasets containing ZIP files do not load. For example, that no-script dataset containing ZIP files, raises NonMatchingSplitsSizesError: ```python In [2]: ds = load_dataset("sidovic/LearningQ-qg") NonMatchingSplitsSizesError: [ { ...
#6206 (issue, closed) When calling load_dataset, raise error: pyarrow.lib.ArrowInvalid: offset overflow while concatenating arrays
    id 1,879,473,745 | comments 2 | user aihao2000 | labels []
    created 2023-09-04T04:14:00 | updated 2024-04-17T15:53:29 | closed 2023-09-04T06:05:49
    https://github.com/huggingface/datasets/issues/6206 | api: https://api.github.com/repos/huggingface/datasets/issues/6206
    body: ### Describe the bug When calling load_dataset, raise error ``` Traceback (most recent call last): File "/home/aihao/miniconda3/envs/torch/lib/python3.11/site-packages/datasets/builder.py", line 1694, in _pre pare_split_single ...

#6203 (issue, closed) Support loading from a DVC remote repository
    id 1,877,491,602 | comments 4 | user bilelomrani1 | labels ["enhancement"]
    created 2023-09-01T14:04:52 | updated 2023-09-15T15:11:27 | closed 2023-09-15T15:11:27
    https://github.com/huggingface/datasets/issues/6203 | api: https://api.github.com/repos/huggingface/datasets/issues/6203
    body: ### Feature request Adding support for loading a file from a DVC repository, tracked remotely on a SCM. ### Motivation DVC is a popular version control system to version and manage datasets. The files are stored on a remote object storage platform, but they are tracked using Git. Integration with DVC is possible thr...

#6202 (issue, closed) avoid downgrading jax version
    id 1,876,630,351 | comments 1 | user chrisflesher | labels ["enhancement"]
    created 2023-09-01T02:57:57 | updated 2023-10-12T16:28:59 | closed 2023-10-12T16:28:59
    https://github.com/huggingface/datasets/issues/6202 | api: https://api.github.com/repos/huggingface/datasets/issues/6202
    body: ### Feature request Whenever I `pip install datasets[jax]` it downgrades jax to version 0.3.25. I seem to be able to install this library first then upgrade jax back to version 0.4.13. ### Motivation It would be nice to not overwrite currently installed version of jax if possible. ### Your contribution I...

#6201 (PR, closed) Fix to_json ValueError and remove pandas pin
    id 1,875,256,775 | comments 4 | user albertvillanova | labels []
    created 2023-08-31T10:38:08 | updated 2023-09-05T11:07:07 | closed 2023-09-05T10:58:21
    https://github.com/huggingface/datasets/pull/6201 | api: https://api.github.com/repos/huggingface/datasets/issues/6201
    body: This PR fixes the root cause of the issue: - #6197 This PR also removes the temporary pin of `pandas` introduced by: - #6200 Note that for orient in ['records', 'values'], index value is ignored but - in `pandas` < 2.1.0, a ValueError is raised if not index and orient not in ['split', 'table'] - for orien
true
1,875,169,551
https://api.github.com/repos/huggingface/datasets/issues/6200
https://github.com/huggingface/datasets/pull/6200
6,200
Temporarily pin pandas < 2.1.0
closed
3
2023-08-31T09:45:17
2023-08-31T10:33:24
2023-08-31T10:24:38
albertvillanova
[]
Temporarily pin `pandas` < 2.1.0 until permanent solution is found. Hot fix #6197.
true
1,875,165,185
https://api.github.com/repos/huggingface/datasets/issues/6199
https://github.com/huggingface/datasets/issues/6199
6,199
Use load_dataset for local json files, but it not works
open
2
2023-08-31T09:42:34
2023-08-31T19:05:07
null
Garen-in-bush
[]
### Describe the bug when I use load_dataset to load my local datasets,it always goes to Hugging Face to download the data instead of loading the local dataset. ### Steps to reproduce the bug `raw_datasets = load_dataset( ‘json’, data_files=data_files)` ### Expected behavior ![image](https://gi...
false
1,875,092,027
https://api.github.com/repos/huggingface/datasets/issues/6198
https://github.com/huggingface/datasets/pull/6198
6,198
Preserve split order in DataFilesDict
closed
4
2023-08-31T09:00:26
2023-08-31T13:57:31
2023-08-31T13:48:42
albertvillanova
[]
After investigation, I have found that this copy forces the splits to be sorted alphabetically: https://github.com/huggingface/datasets/blob/029227a116c14720afca71b9b22e78eb2a1c09a6/src/datasets/builder.py#L556 This PR removes the alphabetically sort of `DataFilesDict` keys. - Note that for a `dict`, the order of k...
true
1,875,078,155
https://api.github.com/repos/huggingface/datasets/issues/6197
https://github.com/huggingface/datasets/issues/6197
6,197
ValueError: 'index=True' is only valid when 'orient' is 'split', 'table', 'index', or 'columns'
closed
3
2023-08-31T08:51:50
2023-09-01T10:35:10
2023-08-31T10:24:40
exs-avianello
[]
### Describe the bug Saving a dataset `.to_json()` fails with a `ValueError` since the latest `pandas` [release](https://pandas.pydata.org/docs/dev/whatsnew/v2.1.0.html) (`2.1.0`) In their latest release we have: > Improved error handling when using [DataFrame.to_json()](https://pandas.pydata.org/docs/dev/refere...
false
1,875,070,972
https://api.github.com/repos/huggingface/datasets/issues/6196
https://github.com/huggingface/datasets/issues/6196
6,196
Split order is not preserved
closed
0
2023-08-31T08:47:16
2023-08-31T13:48:43
2023-08-31T13:48:43
albertvillanova
[ "bug" ]
I have noticed that in some cases the split order is not preserved. For example, consider a no-script dataset with configs: ```yaml configs: - config_name: default data_files: - split: train path: train.csv - split: test path: test.csv ``` - Note the defined split order is [train, test] On...
false
1,874,195,585
https://api.github.com/repos/huggingface/datasets/issues/6195
https://github.com/huggingface/datasets/issues/6195
6,195
Force to reuse cache at given path
closed
2
2023-08-30T18:44:54
2023-11-03T10:14:21
2023-08-30T19:00:45
Luosuu
[]
### Describe the bug I have run the official example of MLM like: ```bash python run_mlm.py \ --model_name_or_path roberta-base \ --dataset_name togethercomputer/RedPajama-Data-1T \ --dataset_config_name arxiv \ --per_device_train_batch_size 10 \ --preprocessing_num_workers 20 ...
false
1,872,598,223
https://api.github.com/repos/huggingface/datasets/issues/6194
https://github.com/huggingface/datasets/issues/6194
6,194
Support custom fingerprinting with `Dataset.from_generator`
open
7
2023-08-29T22:43:13
2024-12-22T01:14:39
null
bilelomrani1
[ "enhancement" ]
### Feature request When using `Dataset.from_generator`, the generator is hashed when building the fingerprint. Similar to `.map`, it would be interesting to let the user bypass this hashing by accepting a `fingerprint` argument to `.from_generator`. ### Motivation Using the `.from_generator` constructor with ...
false
1,872,285,153
https://api.github.com/repos/huggingface/datasets/issues/6193
https://github.com/huggingface/datasets/issues/6193
6,193
Dataset loading script method does not work with .pyc file
open
3
2023-08-29T19:35:06
2023-08-31T19:47:29
null
riteshkumarumassedu
[]
### Describe the bug The huggingface dataset library specifically looks for ‘.py’ file while loading the dataset using loading script approach and it does not work with ‘.pyc’ file. While deploying in production, it becomes an issue when we are restricted to use only .pyc files. Is there any work around for this ? #...
false
1,871,911,640
https://api.github.com/repos/huggingface/datasets/issues/6192
https://github.com/huggingface/datasets/pull/6192
6,192
Set minimal fsspec version requirement to 2023.1.0
closed
5
2023-08-29T15:23:41
2023-08-30T14:01:56
2023-08-30T13:51:32
mariosasko
[]
Fix https://github.com/huggingface/datasets/issues/6141 Colab installs 2023.6.0, so we should be good 🙂
true
1,871,634,840
https://api.github.com/repos/huggingface/datasets/issues/6191
https://github.com/huggingface/datasets/pull/6191
6,191
Add missing `revision` argument
closed
4
2023-08-29T13:05:04
2023-09-04T06:38:17
2023-08-31T13:50:00
qgallouedec
[]
I've noticed that when you're not working on the main branch, there are sometimes errors in the files returned. After some investigation, I realized that the revision was not properly passed everywhere. This PR proposes a fix.
true
1,871,582,175
https://api.github.com/repos/huggingface/datasets/issues/6190
https://github.com/huggingface/datasets/issues/6190
6,190
`Invalid user token` even when correct user token is passed!
closed
2
2023-08-29T12:37:03
2023-08-29T13:01:10
2023-08-29T13:01:09
Vaibhavs10
[]
### Describe the bug I'm working on a dataset which comprises other datasets on the hub. URL: https://huggingface.co/datasets/open-asr-leaderboard/datasets-test-only Note: Some of the sub-datasets in this metadataset require explicit access. All the other datasets work fine, except, `common_voice`. ### Steps t...
false
1,871,569,855
https://api.github.com/repos/huggingface/datasets/issues/6189
https://github.com/huggingface/datasets/pull/6189
6,189
Don't alter input in Features.from_dict
closed
3
2023-08-29T12:29:47
2023-08-29T13:04:59
2023-08-29T12:52:48
lhoestq
[]
null
true
1,870,987,640
https://api.github.com/repos/huggingface/datasets/issues/6188
https://github.com/huggingface/datasets/issues/6188
6,188
[Feature Request] Check the length of batch before writing so that empty batch is allowed
closed
1
2023-08-29T06:37:34
2023-09-19T21:55:38
2023-09-19T21:55:37
namespace-Pt
[]
### Use Case I use `dataset.map(process_fn, batched=True)` to process the dataset, with data **augmentations or filtering**. However, when all examples within a batch is filtered out, i.e. **an empty batch is returned**, the following error will be thrown: ``` ValueError: Schema and number of arrays unequal `...
false
1,870,936,143
https://api.github.com/repos/huggingface/datasets/issues/6187
https://github.com/huggingface/datasets/issues/6187
6,187
Couldn't find a dataset script at /content/tsv/tsv.py or any data file in the same directory
open
1
2023-08-29T05:49:56
2023-08-29T16:21:45
null
andysingal
[]
### Describe the bug ``` --------------------------------------------------------------------------- FileNotFoundError Traceback (most recent call last) [<ipython-input-48-6a7b3e847019>](https://localhost:8080/#) in <cell line: 7>() 5 } 6 ----> 7 csv_datasets_reloaded = load_...
false
1,869,431,457
https://api.github.com/repos/huggingface/datasets/issues/6186
https://github.com/huggingface/datasets/issues/6186
6,186
Feature request: add code example of multi-GPU processing
closed
18
2023-08-28T10:00:59
2024-10-07T09:39:51
2023-11-22T15:42:20
NielsRogge
[ "documentation", "enhancement" ]
### Feature request Would be great to add a code example of how to do multi-GPU processing with 🤗 Datasets in the documentation. cc @stevhliu Currently the docs has a small [section](https://huggingface.co/docs/datasets/v2.3.2/en/process#map) on this saying "your big GPU call goes here", however it didn't work f...
false
1,868,077,748
https://api.github.com/repos/huggingface/datasets/issues/6185
https://github.com/huggingface/datasets/issues/6185
6,185
Error in saving the PIL image into *.arrow files using datasets.arrow_writer
open
1
2023-08-26T12:15:57
2023-08-29T14:49:58
null
HaozheZhao
[]
### Describe the bug I am using the ArrowWriter from datasets.arrow_writer to save a json-style file as arrow files. Within the dictionary, it contains a feature called "image" which is a list of PIL.Image objects. I am saving the json using the following script: ``` def save_to_arrow(path,temp): with ArrowWri...
false
1,867,766,143
https://api.github.com/repos/huggingface/datasets/issues/6184
https://github.com/huggingface/datasets/issues/6184
6,184
Map cache does not detect function changes in another module
closed
2
2023-08-25T22:59:14
2023-08-29T20:57:07
2023-08-29T20:56:49
jonathanasdf
[ "duplicate" ]
```python # dataset.py import os import datasets if not os.path.exists('/tmp/test.json'): with open('/tmp/test.json', 'w') as file: file.write('[{"text": "hello"}]') def transform(example): text = example['text'] # text += ' world' return {'text': text} data = datasets.load_dataset('json', ...
false
1,867,743,276
https://api.github.com/repos/huggingface/datasets/issues/6183
https://github.com/huggingface/datasets/issues/6183
6,183
Load dataset with non-existent file
closed
2
2023-08-25T22:21:22
2023-08-29T13:26:22
2023-08-29T13:26:22
freQuensy23-coder
[]
### Describe the bug When load a dataset from datasets and pass a wrong path to json with the data, error message does not contain something abount "wrong path" or "file do not exist" - ```SchemaInferenceError: Please pass `features` or at least one example when writing data``` ### Steps to reproduce the bug ...
false
1,867,203,131
https://api.github.com/repos/huggingface/datasets/issues/6182
https://github.com/huggingface/datasets/issues/6182
6,182
Loading Meteor metric in HF evaluate module crashes due to datasets import issue
closed
4
2023-08-25T14:54:06
2023-09-04T16:41:11
2023-08-31T14:38:23
dsashulya
[]
### Describe the bug When using python3.9 and ```evaluate``` module loading Meteor metric crashes at a non-existent import from ```datasets.config``` in ```datasets v2.14``` ### Steps to reproduce the bug ``` from evaluate import load meteor = load("meteor") ``` produces the following error: ``` from d...
false
1,867,035,522
https://api.github.com/repos/huggingface/datasets/issues/6181
https://github.com/huggingface/datasets/pull/6181
6,181
Fix import in `image_load` doc
closed
3
2023-08-25T13:12:19
2023-08-25T16:12:46
2023-08-25T16:02:24
mariosasko
[]
Reported on [Discord](https://discord.com/channels/879548962464493619/1144295822209581168/1144295822209581168)
true
1,867,032,578
https://api.github.com/repos/huggingface/datasets/issues/6180
https://github.com/huggingface/datasets/pull/6180
6,180
Use `hf-internal-testing` repos for hosting test dataset repos
closed
4
2023-08-25T13:10:26
2023-08-25T16:58:02
2023-08-25T16:46:22
mariosasko
[]
Use `hf-internal-testing` for hosting instead of the maintainers' dataset repos.
true
1,867,009,016
https://api.github.com/repos/huggingface/datasets/issues/6179
https://github.com/huggingface/datasets/issues/6179
6,179
Map cache with tokenizer
open
4
2023-08-25T12:55:18
2023-08-31T15:17:24
null
jonathanasdf
[]
Similar issue to https://github.com/huggingface/datasets/issues/5985, but across different sessions rather than two calls in the same session. Unlike that issue, explicitly calling tokenizer(my_args) before the map() doesn't help, because the tokenizer was created with a different hash to begin with... setup ```...
false
1,866,610,102
https://api.github.com/repos/huggingface/datasets/issues/6178
https://github.com/huggingface/datasets/issues/6178
6,178
'import datasets' throws "invalid syntax error"
closed
1
2023-08-25T08:35:14
2023-09-27T17:33:39
2023-09-27T17:33:39
elia-ashraf
[]
### Describe the bug Hi, I have been trying to import the datasets library but I keep gtting this error. `Traceback (most recent call last): File /opt/local/jupyterhub/lib64/python3.9/site-packages/IPython/core/interactiveshell.py:3508 in run_code exec(code_obj, self.user_global_ns, self.user_ns) ...
false
1,865,490,962
https://api.github.com/repos/huggingface/datasets/issues/6177
https://github.com/huggingface/datasets/pull/6177
6,177
Use object detection images from `huggingface/documentation-images`
closed
4
2023-08-24T16:16:09
2023-08-25T16:30:00
2023-08-25T16:21:17
mariosasko
[]
null
true
1,864,436,408
https://api.github.com/repos/huggingface/datasets/issues/6176
https://github.com/huggingface/datasets/issues/6176
6,176
how to limit the size of memory mapped file?
open
6
2023-08-24T05:33:45
2023-10-11T06:00:10
null
williamium3000
[]
### Describe the bug Huggingface datasets use memory-mapped file to map large datasets in memory for fast access. However, it seems like huggingface will occupy all the memory for memory-mapped files, which makes a troublesome situation since we cluster will distribute a small portion of memory to me (once it's over ...
false
1,863,592,678
https://api.github.com/repos/huggingface/datasets/issues/6175
https://github.com/huggingface/datasets/pull/6175
6,175
PyArrow 13 CI fixes
closed
3
2023-08-23T15:45:53
2023-08-25T13:15:59
2023-08-25T13:06:52
mariosasko
[]
Fixes: * bumps the PyArrow version check in the `cast_array_to_feature` to avoid the offset bug (still not fixed) * aligns the Pandas formatting tests with the Numpy ones (the current test fails due to https://github.com/apache/arrow/pull/35656, which requires `.to_pandas(coerce_temporal_nanoseconds=True)` to always ...
true
1,863,422,065
https://api.github.com/repos/huggingface/datasets/issues/6173
https://github.com/huggingface/datasets/issues/6173
6,173
Fix CI for pyarrow 13.0.0
closed
0
2023-08-23T14:11:20
2023-08-25T13:06:53
2023-08-25T13:06:53
lhoestq
[]
pyarrow 13.0.0 just came out ``` FAILED tests/test_formatting.py::ArrowExtractorTest::test_pandas_extractor - AssertionError: Attributes of Series are different Attribute "dtype" are different [left]: datetime64[us, UTC] [right]: datetime64[ns, UTC] ``` ``` FAILED tests/test_table.py::test_cast_sliced_fi...
false
1,863,318,027
https://api.github.com/repos/huggingface/datasets/issues/6172
https://github.com/huggingface/datasets/issues/6172
6,172
Make Dataset streaming queries retryable
open
4
2023-08-23T13:15:38
2023-11-06T13:54:16
null
rojagtap
[ "enhancement" ]
### Feature request Streaming datasets, as intended, do not load the entire dataset in memory or disk. However, while querying the next data chunk from the remote, sometimes it is possible that the service is down or there might be other issues that may cause the query to fail. In such a scenario, it would be nice to ...
false
1,862,922,767
https://api.github.com/repos/huggingface/datasets/issues/6171
https://github.com/huggingface/datasets/pull/6171
6,171
Fix typo in about_mapstyle_vs_iterable.mdx
closed
3
2023-08-23T09:21:11
2023-08-23T09:32:59
2023-08-23T09:21:19
lhoestq
[]
null
true
1,862,705,731
https://api.github.com/repos/huggingface/datasets/issues/6170
https://github.com/huggingface/datasets/pull/6170
6,170
feat: Return the name of the currently loaded file
open
1
2023-08-23T07:08:17
2023-08-29T12:41:05
null
Amitesh-Patel
[]
Added an optional parameter return_file_name in the load_dataset function. When it is set to True, the function will include the name of the file corresponding to the current line as a feature in the returned output. I added this here https://github.com/huggingface/datasets/blob/main/src/datasets/packaged_modules/js...
true
1,862,360,199
https://api.github.com/repos/huggingface/datasets/issues/6169
https://github.com/huggingface/datasets/issues/6169
6,169
Configurations in yaml not working
open
4
2023-08-23T00:13:22
2023-08-23T15:35:31
null
tsor13
[]
### Dataset configurations cannot be created in YAML/README Hello! I'm trying to follow the docs here in order to create structure in my dataset as added from here (#5331): https://github.com/huggingface/datasets/blob/8b8e6ee067eb74e7965ca2a6768f15f9398cb7c8/docs/source/repository_structure.mdx#L110-L118 I have t...
false
1,861,867,274
https://api.github.com/repos/huggingface/datasets/issues/6168
https://github.com/huggingface/datasets/pull/6168
6,168
Fix ArrayXD YAML conversion
closed
6
2023-08-22T17:02:54
2023-12-12T15:06:59
2023-12-12T15:00:43
mariosasko
[]
Replace the `shape` tuple with a list in the `ArrayXD` YAML conversion. Fix #6112
true
1,861,474,327
https://api.github.com/repos/huggingface/datasets/issues/6167
https://github.com/huggingface/datasets/pull/6167
6,167
Allow hyphen in split name
closed
5
2023-08-22T13:30:59
2024-01-11T06:31:31
2023-08-22T15:38:53
mariosasko
[]
To fix https://discuss.huggingface.co/t/error-when-setting-up-the-dataset-viewer-streamingrowserror/51276.
true